Search Results

Search found 1686 results on 68 pages for 'andrew moffat'.


  • What does @@variable mean in Ruby?

    - by Andrew
    What are Ruby variables preceded with double at signs (@@)? My understanding of a variable preceded with an at sign is that it is an instance variable, like this in PHP:

    PHP version:

        class Person {
            public $name;
            public function setName($name) { $this->name = $name; }
            public function getName() { return $this->name; }
        }

    Ruby equivalent:

        class Person
          def set_name(name)
            @name = name
          end
          def get_name()
            @name
          end
        end

    What does the double at sign @@ mean, and how does it differ from a single at sign?
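
    For contrast, here is a minimal sketch (not part of the original question; the names are illustrative). The class and all of its instances share a single @@ copy, while each object gets its own @ variable:

        class Counter
          @@count = 0            # class variable: one copy shared across the class

          def initialize
            @label = "counter"   # instance variable: one copy per object
            @@count += 1
          end

          def self.count
            @@count
          end
        end

        Counter.new
        Counter.new
        puts Counter.count   # => 2, because both instances incremented the same @@count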

    Read the article

  • Rails validation among a three-model relationship

    - by Andrew
    I'm working on a three-model relationship with one aspect that I'm not sure how to approach. Here's the basic relationship:

        class Taxonomy
          has_many :terms
          # attribute: `inclusive`, default => false
        end

        class Term
          belongs_to :taxonomy
          has_and_belongs_to_many :photos
        end

        class Photo
          has_and_belongs_to_many :terms
        end

    This is pretty straightforward stuff except for one thing: a Taxonomy can be either 'inclusive' or 'exclusive'. Exclusive means the terms are mutually exclusive; inclusive means they're not. So, if a taxonomy is exclusive, i.e. taxonomy.inclusive = false, then only one term from that taxonomy can be attached to a given photo. Now, I can handle this on the client side without a problem, but I am not quite sure how to set up a validation on Photo (or somewhere else) that says, in essence: "validate that no more than one term from an exclusive taxonomy is associated with this record." Any ideas on how to do that? (One possible sketch follows.)
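
    One possible approach (a sketch only, assuming the models above; the validation method name and `taxonomy.name` are illustrative) is a custom validation on Photo that groups the attached terms by taxonomy:

        class Photo < ActiveRecord::Base
          has_and_belongs_to_many :terms
          validate :at_most_one_term_per_exclusive_taxonomy

          private

          # For every exclusive taxonomy, allow at most one of its terms per photo.
          def at_most_one_term_per_exclusive_taxonomy
            terms.group_by(&:taxonomy).each do |taxonomy, taxonomy_terms|
              if !taxonomy.inclusive && taxonomy_terms.size > 1
                errors.add(:terms, "may include at most one term from '#{taxonomy.name}'")
              end
            end
          end
        end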

    Read the article

  • Elfsign Object Signing on Solaris

    - by danx
    Don't let this happen to you—use elfsign! Solaris elfsign(1) is a command that signs and verifies ELF format executables. That includes not just executable programs (such as ls or cp), but other ELF format files, including libraries (such as libnvpair.so) and kernel modules (such as autofs). Elfsign has been available since Solaris 10, and ELF format files distributed with Solaris, since Solaris 10, are signed by either Sun Microsystems or its successor, Oracle Corporation.

    When an ELF file is signed, elfsign adds a new section to the ELF file, .SUNW_signature, that contains an RSA public key signature and other information about the signer: the algorithm used, the algorithm OID, the signer CN/OU, and a time stamp. The signature section can later be verified by elfsign or other software by matching the signature in the file against the ELF file contents (excluding the signature).

    ELF executable files may also be signed by a 3rd party or by the customer. This is useful for verifying the origin and authenticity of executable files installed on a system. The 3rd-party or customer public key certificate should be installed in /etc/certs/ to allow verification by elfsign. For currently released versions of Solaris, only cryptographic framework plugin libraries are verified by Solaris. However, all ELF files may be verified by the elfsign command at any time.

    Elfsign Algorithms

    Elfsign signatures are created by taking a digest of the ELF section contents, then signing the digest with RSA. To verify, one takes a digest of the ELF file and compares it with the expected digest that's computed from the signature and the RSA public key. Originally, elfsign took an MD5 digest of a SHA-1 digest of the ELF file sections, then signed the resulting digest with RSA. In Solaris 11.1 and then in Solaris 11.1 SRU 7 (5/2013), the available elfsign crypto algorithms were expanded to keep up with evolving cryptography. The following table shows the available elfsign algorithms:

        Elfsign Algorithm                Solaris Release        Comments
        elfsign sign -F rsa_md5_sha1     S10, S11.0, S11.1      Default for S10. Not recommended*
        elfsign sign -F rsa_sha1         S11.1                  Default for S11.1. Not recommended
        elfsign sign -F rsa_sha256       S11.1 patch SRU7+      Recommended

        * Most or all CAs do not accept MD5 CSRs and do not issue MD5 certs due to MD5 hash collision problems.

    RSA Key Length. I recommend RSA-2048 as the best balance between a long expected lifetime, interoperability, and performance. RSA-2048 keys have an expected lifetime through 2030 (and probably beyond). For details, see Recommendation for Key Management: Part 1: General, NIST Publication SP 800-57 Part 1 (rev. 3, 7/2012, PDF), tables 2 and 4 (pp. 64, 67).

    Step 1: Create or obtain a key and cert

    The first step in using elfsign is to obtain a key and cert from a public Certificate Authority (CA), or to create your own self-signed key and cert. I'll briefly explain both methods.

    Obtaining a Certificate from a CA. To obtain a cert from a CA, such as Verisign, Thawte, or Go Daddy (to name a few random examples), you create a private key and a Certificate Signing Request (CSR) file and send it to the CA, following the instructions of the CA on their website. They send back a signed public key certificate. The public key cert, along with the private key you created, is used by elfsign to sign an ELF file. The public key cert is distributed with the software and is used by elfsign to verify elfsign signatures in ELF files.
    You need to request an RSA "Class 3 public key certificate", which is used for servers and software signing. Elfsign uses RSA, and I recommend RSA-2048 keys. The private key and CSR can be generated with openssl(1) or pktool(1) on Solaris. Here's a simple example that uses pktool to generate a private RSA-2048 key and a CSR for sending to a CA:

        $ pktool gencsr keystore=file format=pem outcsr=MYCSR.p10 \
            subject="CN=canineswworks.com,OU=Canine SW object signing" \
            outkey=MYPRIVATEKEY.key
        $ openssl rsa -noout -text -in MYPRIVATEKEY.key
        Private-Key: (2048 bit)
        modulus: 00:d2:ef:42:f2:0b:8c:96:9f:45:32:fc:fe:54:94: . . . [omitted for brevity] . . . c9:c7
        publicExponent: 65537 (0x10001)
        privateExponent: 26:14:fc:49:26:bc:a3:14:ee:31:5e:6b:ac:69:83: . . . [omitted for brevity] . . . 81
        prime1: 00:f6:b7:52:73:bc:26:57:26:c8:11:eb:6c:dc:cb: . . . [omitted for brevity] . . . bc:91:d0:40:d6:9d:ac:b5:69
        prime2: 00:da:df:3f:56:b2:18:46:e1:89:5b:6c:f1:1a:41: . . . [omitted for brevity] . . . f3:b7:48:de:c3:d9:ce:af:af
        exponent1: 00:b9:a2:00:11:02:ed:9a:3f:9c:e4:16:ce:c7:67: . . . [omitted for brevity] . . . 55:50:25:70:d3:ca:b9:ab:99
        exponent2: 00:c8:fc:f5:57:11:98:85:8e:9a:ea:1f:f2:8f:df: . . . [omitted for brevity] . . . 23:57:0e:4d:b2:a0:12:d2:f5
        coefficient: 2f:60:21:cd:dc:52:76:67:1a:d8:75:3e:7f:b0:64: . . . [omitted for brevity] . . . 06:94:56:d8:9d:5c:8e:9b
        $ openssl req -noout -text -in MYCSR.p10
        Certificate Request:
            Data:
                Version: 2 (0x2)
                Subject: OU=Canine SW object signing, CN=canineswworks.com
                Subject Public Key Info:
                    Public Key Algorithm: rsaEncryption
                    Public-Key: (2048 bit)
                    Modulus: 00:d2:ef:42:f2:0b:8c:96:9f:45:32:fc:fe:54:94: . . . [omitted for brevity] . . . c9:c7
                    Exponent: 65537 (0x10001)
            Attributes:
            Signature Algorithm: sha1WithRSAEncryption
                b3:e8:30:5b:88:37:68:1c:26:6b:45:af:5e:de:ea:60:87:ea: . . . [omitted for brevity] . . . 06:f9:ed:b4

    Secure storage of the RSA private key. The private key needs to be protected if the signing key is used for production (as opposed to just testing). That is, protect the key to prevent unauthorized signatures by others. One method is to use a PIN-protected PKCS#11 keystore: the private key you generate should be stored in a secure manner, such as in a PKCS#11 keystore using pktool(1). Otherwise, others could create signatures in your name. Other secure key storage mechanisms include an SCA-6000 crypto card, a USB thumb drive stored in a locked area, a dedicated server with restricted access, Oracle Key Manager (OKM), or some combination of these. I also recommend secure backup of the private key.

    Here's an example of generating a private key protected in the PKCS#11 keystore, and a CSR:

        $ pktool setpin # use if PIN not set yet
        Enter token passphrase: changeme
        Create new passphrase:
        Re-enter new passphrase:
        Passphrase changed.
        $ pktool gencsr keystore=pkcs11 label=MYPRIVATEKEY \
            format=pem outcsr=MYCSR.p10 \
            subject="CN=canineswworks.com,OU=Canine SW object signing"
        $ pktool list keystore=pkcs11
        Enter PIN for Sun Software PKCS#11 softtoken:
        Found 1 asymmetric public keys.
        Key #1 - RSA public key: MYPRIVATEKEY

    Here's another example that uses openssl instead of pktool to generate a private key and CSR:

        $ openssl genrsa -out cert.key 2048
        $ openssl req -new -key cert.key -out MYCSR.p10

    Self-Signed Cert. You can use openssl or pktool to create a private key and a self-signed public key certificate. A self-signed cert is useful for development, testing, and internal use. The private key created should be stored in a secure manner, as mentioned above.
    The following example creates a private key, MYSELFSIGNED.key, and a public key cert, MYSELFSIGNED.pem, using pktool, then displays the contents with the openssl command:

        $ pktool gencert keystore=file format=pem serial=0xD06F00D lifetime=20-year \
            keytype=rsa hash=sha256 outcert=MYSELFSIGNED.pem outkey=MYSELFSIGNED.key \
            subject="O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com"
        $ pktool list keystore=file objtype=cert infile=MYSELFSIGNED.pem
        Found 1 certificates.
        1. (X.509 certificate)
            Filename: MYSELFSIGNED.pem
            ID: c8:24:59:08:2b:ae:6e:5c:bc:26:bd:ef:0a:9c:54:de:dd:0f:60:46
            Subject: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com
            Issuer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com
            Not Before: Oct 17 23:18:00 2013 GMT
            Not After: Oct 12 23:18:00 2033 GMT
            Serial: 0xD06F00D0
            Signature Algorithm: sha256WithRSAEncryption
        $ openssl x509 -noout -text -in MYSELFSIGNED.pem
        Certificate:
            Data:
                Version: 3 (0x2)
                Serial Number: 3496935632 (0xd06f00d0)
                Signature Algorithm: sha256WithRSAEncryption
                Issuer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com
                Validity
                    Not Before: Oct 17 23:18:00 2013 GMT
                    Not After : Oct 12 23:18:00 2033 GMT
                Subject: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com
                Subject Public Key Info:
                    Public Key Algorithm: rsaEncryption
                    Public-Key: (2048 bit)
                    Modulus: 00:bb:e8:11:21:d9:4b:88:53:8b:6c:5a:7a:38:8b: . . . [omitted for brevity] . . . bf:77
                    Exponent: 65537 (0x10001)
            Signature Algorithm: sha256WithRSAEncryption
                9e:39:fe:c8:44:5c:87:2c:8f:f4:24:f6:0c:9a:2f:64:84:d1: . . . [omitted for brevity] . . . 5f:78:8e:e8
        $ openssl rsa -noout -text -in MYSELFSIGNED.key
        Private-Key: (2048 bit)
        modulus: 00:bb:e8:11:21:d9:4b:88:53:8b:6c:5a:7a:38:8b: . . . [omitted for brevity] . . . bf:77
        publicExponent: 65537 (0x10001)
        privateExponent: 0a:06:0f:23:e7:1b:88:62:2c:85:d3:2d:c1:e6:6e: . . . [omitted for brevity] . . . c1
        prime1: 00:ea:12:02:bb:5a:0f:5a:d8:a9:95:b2:ba:30:15: . . . [omitted for brevity] . . . 5b:ca:9c:7c:19:48:77:1e:5d
        prime2: 00:cd:82:da:84:71:1d:18:52:cb:c6:4d:74:14:be: . . . [omitted for brevity] . . . 5f:db:d5:5e:47:89:a7:ef:e3
        exponent1: 32:37:62:f6:a6:bf:9c:91:d6:f0:12:c3:f7:04:e9: . . . [omitted for brevity] . . . 97:3e:33:31:89:66:64:d1
        exponent2: 00:88:a2:e8:90:47:f8:75:34:8f:41:50:3b:ce:93: . . . [omitted for brevity] . . . ff:74:d4:be:f3:47:45:bd:cb
        coefficient: 4d:7c:09:4c:34:73:c4:26:f0:58:f5:e1:45:3c:af: . . . [omitted for brevity] . . . af:01:5f:af:ad:6a:09:bf

    Step 2: Sign the ELF file object

    By now you should have your private key and a cert, either from a CA or one you created yourself (a self-signed cert). The next step is to sign one or more objects with your private key and cert. Here's a simple example that creates an object file, then signs, verifies, and lists the contents of the ELF signature:

        $ echo '#include <stdio.h>\nint main(){printf("Hello\\n");}' > hello.c
        $ make hello
        cc -o hello hello.c
        $ elfsign verify -v -c MYSELFSIGNED.pem -e hello
        elfsign: no signature found in hello.
        $ elfsign sign -F rsa_sha256 -v -k MYSELFSIGNED.key -c MYSELFSIGNED.pem -e hello
        elfsign: hello signed successfully.
        format: rsa_sha256.
        signer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com.
        signed on: October 17, 2013 04:22:49 PM PDT.
        $ elfsign list -f format -e hello
        rsa_sha256
        $ elfsign list -f signer -e hello
        O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com
        $ elfsign list -f time -e hello
        October 17, 2013 04:22:49 PM PDT
        $ elfsign verify -v -c MYSELFSIGNED.key -e hello
        elfsign: verification of hello failed.
        format: rsa_sha256.
        signer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com.
        signed on: October 17, 2013 04:22:49 PM PDT.

    (Note that the last verification fails because the private key file, not the public key cert, was passed to -c.)

    Signing using the pkcs11 keystore. To sign the ELF file using a private key in the secure pkcs11 keystore, replace "-k MYSELFSIGNED.key" in the "elfsign sign" command line with "-T MYPRIVATEKEY", where MYPRIVATEKEY is the pkcs11 token label.

    Step 3: Install the cert and test on another system

    Just signing the object isn't enough. You need to copy or install the cert and the signed ELF file(s) on another system to test that the signature is OK. Your public key cert should be installed in /etc/certs. Use elfsign verify to verify the signature. Elfsign verify checks each cert in /etc/certs until it finds one that matches the elfsign signature in the file. If one isn't found, the verification fails. Here's an example:

        $ su
        Password:
        # rm /etc/certs/MYSELFSIGNED.key
        # cp MYSELFSIGNED.pem /etc/certs
        # exit
        $ elfsign verify -v hello
        elfsign: verification of hello passed.
        format: rsa_sha256.
        signer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com.
        signed on: October 17, 2013 04:24:20 PM PDT.

    After testing, package your cert along with your ELF object to allow elfsign verification after your cert and object are installed or copied.

    Under the Hood: elfsign verification

    Here are the steps taken to verify an ELF file signed with elfsign. The steps to sign the file are similar, except that the private key exponent is used instead of the public key exponent, and the .SUNW_signature section is written to the ELF file instead of being read from it.

        1. Generate a digest (SHA-256) of the ELF file sections. This digest uses all ELF sections loaded in memory, but excludes the ELF header, the .SUNW_signature section, and the symbol table.
        2. Extract the RSA signature (RSA-2048) from the .SUNW_signature section.
        3. Extract the RSA public key modulus and public key exponent (65537) from the public key cert.
        4. Calculate the expected digest as follows: signature ^ publicKeyExponent % publicKeyModulus
        5. Strip the PKCS#1 padding (most significant bytes) from the above. The padding is 0x00, 0x01, 0xff, 0xff, . . ., 0xff, 0x00.
        6. If the actual digest == expected digest, the ELF file is verified (OK).

    Further Information

    elfsign(1), pktool(1), and openssl(1) man pages.
    "Signed Solaris 10 Binaries?" blog by Darren Moffat (2005) shows how to use elfsign.
    "Simple CLI based CA on Solaris" blog by Darren Moffat (2008) shows how to set up a simple CA for use with self-signed certificates.
    "How to Create a Certificate by Using the pktool gencert Command," System Administration Guide: Security Services (available at docs.oracle.com).
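
    To make the "Under the Hood" arithmetic above concrete, here is a toy sketch in Python (illustrative only; real PKCS#1 signatures also wrap the digest in an ASN.1 DigestInfo structure, which is omitted here):

        # Recover the padded digest from an RSA signature and strip
        # the PKCS#1 v1.5 padding: 0x00 0x01 0xff ... 0xff 0x00 <digest>
        def recover_digest(signature: int, public_exponent: int, modulus: int) -> bytes:
            padded = pow(signature, public_exponent, modulus)   # signature^e mod n
            data = padded.to_bytes((modulus.bit_length() + 7) // 8, "big")
            assert data[0] == 0x00 and data[1] == 0x01          # expected padding header
            return data[data.index(b"\x00", 2) + 1:]            # bytes after the 0x00 separator

        # Verification then reduces to: recover_digest(...) == sha256(elf_sections)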

    Read the article

  • A Review of the XXI Congress of the Oracle Users Community

    - by Fabian Gradolph
    The XXI edition of the Congress of CUORE (Comunidad de Usuarios de Oracle, the Oracle Users Community) closed last Wednesday after two intense days of conferences, workshops, meetings, and round tables. The more than 600 attendees are a good indication of the great interest that Oracle's technology proposals generate among our customers. Big Data and the utilities sector were two of the main protagonists of the Congress. The event was opened by Félix del Barrio (second from the left in the second photo), general manager of Oracle in Spain. A good part of the event, Tuesday morning, was devoted to Big Data, with Andrew Sutherland, Senior Vice President of Technology for Oracle in EMEA, giving the main presentation, followed by specific sessions on the technologies needed in the different phases of a Big Data project (acquiring the data, organizing it, analyzing it, and finally making the corresponding business decisions). We won't dwell on explaining what Big Data is, a topic we have covered previously on this blog (here and here), but it is worth calling attention to a point that Andrew Sutherland put on the table in a meeting with journalists: Big Data projects make full sense when they help us change processes and business models in ways that increase the effectiveness of the organization. If our organization is based on rigid, immutable processes (which has to do essentially with the kind of applications in place), the benefit drawn from Big Data will be limited. In other words, Big Data is a driver of change in organizations. The challenges facing a sector like energy occupied the second day of the Congress. Industry trends such as smart grids, smart metering, the entry of new players and distributors into the market, the fragmentation of operators, and frozen investments form the landscape confronting companies in the utilities sector. Besides the major events (Big Data and Oracle Utilities Day), the two days of the Congress also gave those Oracle partners who wished the chance to have their professionals certified free of charge in several examination sessions. Additionally, parallel sessions were held on technologies and strategic visions, product demonstrations, and customer success stories. In short, the outcome of the XXI CUORE Congress is very positive for Oracle, for our customers, and for our partners. We look forward to seeing you all next year.

    Read the article

  • Welcome to the SOA & E2.0 Partner Community Forum

    - by Jürgen Kress
    With more than 200 registrations, the SOA & E2.0 Partner Community Forum is a huge success! The conference program is available online: http://tinyurl.com/soaforumagenda

    Agenda, Tuesday, March 15th, 2011:

        12:15 Welcome & Introduction - Hans Blaas & Jürgen Kress, Oracle
        12:30 Oracle Middleware Strategy and Information on Application Grid and Exalogic - Andrew Sutherland, Oracle
        13:15 Managing Online Customer, Partner and Employee Engagement with Oracle E2.0 Solutions - Andrew Gilboy, Oracle
        14:00 Coffee Break
        14:30 Partner SOA/BPM Reference Case - Leon Smiers, Capgemini
        15:15 Partner WebCenter/UCM Reference Case - Vikram Setia, Infomentum
        16:00 Break
        16:30 SOA and BPM 11gR1 PS3 Update - David Shaffer
        17:00 Why Specialization is Important for Partners - Nick Kritikos, Hans Blaas & Jürgen Kress
        17:45 Social Event

    Agenda, Wednesday, March 16th, 2011:

        09:00 Welcome & Introduction Day II
        09:15 Breakout sessions, Round 1: SOA Suite 11g PS3 & OSB; Importance of ADF & JDeveloper; SOA Security IDM; WebCenter PS3, What's New; E2.0 Sales Plays
        10:30 Break
        10:45 Breakout sessions, Round 2: WebCenter PS3, What's New; Applications Management with Enterprise Manager and AmberPoint; ADF/WebCenter 11g Integration with BPM Suite 11g; Importance of ADF & JDeveloper; JCAPS & OC4J Migration Opportunities for Service Business
        12:00 Lunch
        13:00 Breakout sessions, Round 3: BPM 11g, What's New; Universal Content Management 11g; SOA Security IDM; E2.0 Surrounding Products: ATG, Documaker, Primavera; Middleware Industry Value Propositions & Sales Plays
        14:30 Break
        14:45 Fusion Applications - Rajan Krishnan, Oracle
        15:30 SOA & E2.0 Summary & Closing - Hans Blaas & Jürgen Kress, Oracle
        15:45 Finish & Departure
        16:00 Bus departure

    Location: Capgemini Nederland BV, Papendorpseweg 100, 3500 GN Utrecht, The Netherlands. Tel: +31 30 689 00 00. For a detailed route description by car or public transport, please visit: http://www.nl.capgemini.com/pdf/Papendorp_UK.pdf

    Hotel: In case you have not booked your hotel yet, please make your own hotel reservation. You can book your room at Hotel Vianen at a special rate by using the Oracle booking code DDG VIA-GF41422. One-night package € 110,- for a single room, including breakfast. Kindly secure your hotel room as soon as possible; the number of rooms is limited! Hotel Vianen, Prins Bernhardstraat 75, 4132 XE Vianen, The Netherlands, [email protected]. For arrival on the 14th of March and a stay at Hotel Vianen: on the 15th of March we have arranged a transfer from Hotel Vianen to the Capgemini offices. The bus is parked in front of the hotel and will leave at 10:15 AM (UTC/GMT+1).

    Logistics: At your arrival you will receive a pass with a barcode. This pass will give you access to the conference building and the different floors within the building. Please make sure to hand in your pass at the registration desk at the end of the day.

    Arrival by plane: Transfer from Schiphol Airport to Capgemini on the 15th of March will be arranged by Oracle. A hostess will welcome you at the Meeting Point at Schiphol Airport (a large red and white cubicle situated next to Delifrance). The buses will depart from Schiphol Airport at 09:00 AM, 09:45 AM and 10:30 AM (UTC/GMT+1).

    For future SOA Partner Community Forums, become a member; for registration please visit www.oracle.com/goto/emea/soa (OPN account required).

    Technorati Tags: SOA Partner Community Forum, Community, SOA Partner Community, Utrecht 03.2011, OPN, Oracle, Jürgen Kress

    Read the article

  • The Definitive C++ Book Guide and List

    - by grepsedawk
    After more than a few questions about deciding on C++ books, I thought we could make a better community wiki version, providing QUALITY books and an approximate skill level. Maybe we can add a short blurb/description about each book that you have personally read / benefited from. Feel free to debate quality, headings, etc. Note: There is a similar post for C: The Definitive C Book Guide and List

    Reference Style - All Levels
    - The C++ Programming Language - Bjarne Stroustrup
    - C++ Standard Library Tutorial and Reference - Nicolai Josuttis

    Beginner Introductory
    - C++ Primer - Stanley Lippman / Josée Lajoie / Barbara E. Moo
    - Accelerated C++ - Andrew Koenig / Barbara Moo
    - Thinking in C++ - Bruce Eckel (2 volumes; the 2nd is more about the standard library, but still very good)

    Best Practices
    - Effective C++ - Scott Meyers
    - Effective STL - Scott Meyers

    Intermediate
    - More Effective C++ - Scott Meyers
    - Exceptional C++ - Herb Sutter
    - More Exceptional C++ - Herb Sutter
    - C++ Coding Standards: 101 Rules, Guidelines, and Best Practices - Herb Sutter / Andrei Alexandrescu
    - C++ Templates: The Complete Guide - David Vandevoorde / Nicolai M. Josuttis
    - Large Scale C++ Software Design - John Lakos

    Above Intermediate
    - Modern C++ Design - Andrei Alexandrescu
    - C++ Template Metaprogramming - David Abrahams / Aleksey Gurtovoy
    - Inside the C++ Object Model - Stanley Lippman

    Classics / Older (Note: some information contained within these books may not be up to date and is no longer considered best practice.)
    - The Design and Evolution of C++ - Bjarne Stroustrup
    - Ruminations on C++ - Andrew Koenig / Barbara Moo
    - Advanced C++ Programming Styles and Idioms - James Coplien

    Read the article

  • NoRM MongoConfiguration

    - by user365836
    Something is wrong with my MongoConfiguration, I think. I read the following on NoRM's Google group, in a post by Andrew Theken (Mar 22, 3:14 am): "It can easily live in the object:"

        public class POCO {
            // implicitly runs one time
            static POCO() {
                MongoConfiguration.Initialize(cfg => cfg.For<POCO>(f =>
                {
                    f.ForProperty(p => p.ALongProperty).UseAlias("A");
                    f.ForProperty(p => p.AnotherLongProperty).UseAlias("B");
                    // etc.
                }));
            }
            public string ALongProperty { get; set; }
            public int AnotherLongProperty { get; set; }
        }

    In my class I have:

        public class Item {
            static Item() {
                MongoConfiguration.Initialize(
                    cfg => cfg.For<Item>(c => {
                        c.ForProperty(i => i.Name).UseAlias("N");
                        c.ForProperty(i => i.Description).UseAlias("D");
                    })
                );
            }
            public String Name { get; set; }
            public String Description { get; set; }
        }

    After I save them, there are items in the DB with short key names. The problem is that when I try to read up all previously created items on start, I get no items back.

    Read the article

  • How to read/write high-resolution (24-bit, 8 channel) .wav files in Java?

    - by dB'
    I'm trying to write a Java application that manipulates high-resolution .wav files. I'm having trouble importing the audio data, i.e. converting the .wav file into an array of doubles. When I use a standard approach, an exception is thrown:

        AudioFileFormat as = AudioSystem.getAudioFileFormat(new File("orig.wav"));
        --> javax.sound.sampled.UnsupportedAudioFileException: file is not a supported file type

    Here's the file format info according to soxi:

        dB$ soxi orig.wav
        soxi WARN wav: wave header missing FmtExt chunk
        Input File     : 'orig.wav'
        Channels       : 8
        Sample Rate    : 96000
        Precision      : 24-bit
        Duration       : 00:00:03.16 = 303526 samples ~ 237.13 CDDA sectors
        File Size      : 9.71M
        Bit Rate       : 24.6M
        Sample Encoding: 32-bit Floating Point PCM

    Can anyone suggest the simplest method for getting this audio into Java? I've tried a few techniques. As stated above, I've experimented with the Java AudioSystem (on both Mac and Windows). I've also tried using Andrew Greensted's WavFile class, but this also fails (WavFileException: Compression Code 3 not supported). One workaround is to convert the audio to 16 bits using sox (with the -b 16 flag), but this is suboptimal since it raises the noise floor. Incidentally, I've noticed that the file CAN be read by libsndfile. Is my best bet to write a JNI wrapper around libsndfile, or can you suggest something quicker? Note that I don't need to play the audio; I just need to analyze it, manipulate it, and then write it out to a new .wav file.

    UPDATE: I solved this problem by modifying Andrew Greensted's WavFile class. His original version only read files encoded as integer values ("format code 1"); my files were encoded as floats ("format code 3"), and that's what was causing the problem. I'll post the modified version of Greensted's code when I get a chance. In the meantime, if anyone wants it, send me a message.
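
    For anyone hitting the same wall before that modified class is posted, here is a minimal sketch (assuming a canonical RIFF layout; this is not the modified WavFile class mentioned above) of pulling interleaved 32-bit float samples out of a format-code-3 WAV file:

        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class FloatWavReader {
            // Returns the interleaved 32-bit float samples from a WAVE file
            // whose fmt chunk declares format code 3 (IEEE float PCM).
            public static float[] readSamples(String path) throws IOException {
                ByteBuffer buf = ByteBuffer.wrap(Files.readAllBytes(Paths.get(path)))
                                           .order(ByteOrder.LITTLE_ENDIAN);
                buf.position(12);                         // skip "RIFF", size, "WAVE"
                while (buf.remaining() > 8) {
                    int id = buf.getInt();                // chunk id bytes, read little-endian
                    int size = buf.getInt();
                    if (id == 0x61746164) {               // "data" as a little-endian int
                        float[] samples = new float[size / 4];
                        for (int i = 0; i < samples.length; i++) samples[i] = buf.getFloat();
                        return samples;                   // channels remain interleaved
                    }
                    buf.position(buf.position() + size + (size & 1)); // chunks are word-aligned
                }
                throw new IOException("no data chunk found");
            }
        }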

    Read the article

  • Problems with my JS array undefined x 7

    - by Dave
    I have an array I'm trying to loop through to create a new type of array specific to my current page. My array looks like this:

        //$_SESSION['data'] =
        Array (
            [0] => 1
            [1] => 0
            [2] => Tom
            [8] => 1
            [4] => 1
            [5] => Array (
                [7] => Array (
                    [0] => Andrew
                    [1] => 1
                    [2] => 1
                    [4] => 0
                    [5] => avatar.jpg
                    [6] => 1
                )
            )
            [6] => Array (
                [0] => 1
                [1] => 2
            )
        )

    So in my JS file I have this:

        var stats = <? echo $_SESSION['data'][5]; ?>; // this is the array
        my_data = new Array();
        for (var key in stats) {
            if (key in my_data) {} else { // prevent double entry
                my_data[key] = new Array();
                my_data[key][0] = stats[key][6];
                my_data[key][1] = stats[key][5];
                my_data[key][2] = stats[key][2];
                my_data[key][3] = stats[key][0];
            }
        }
        console.log(my_data);

    Now in console.log I get this:

        [undefined × 7, Array[4]
            0: "1"
            1: "avatar.jpg"
            2: "1"
            3: "Andrew"
            length: 4
            __proto__: Array[0]
        ]

    I'm wondering why it is saying undefined × 7?
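
    The behavior is easy to reproduce in isolation (a sketch, separate from the code above): assigning to a numeric key on a JavaScript Array pads out every lower index as an undefined hole, while a plain object keeps only the keys you set.

        var a = [];
        a[7] = 'x';
        console.log(a.length);        // 8 -- indexes 0..6 are the "undefined x 7" holes

        var o = {};
        o[7] = 'x';
        console.log(Object.keys(o));  // ["7"] -- no padding

    Because stats here has the single key 7, the assignment my_data[7] = new Array() is what creates the seven empty slots.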

    Read the article

  • Laissez les bon temps rouler! (Microsoft BI Conference 2010)

    - by smisner
    "Laissez les bons temps rouler" is a Cajun phrase that I heard frequently when I lived in New Orleans in the mid-1990s. It means "Let the good times roll!" and encapsulates a feeling of happy expectation. As I met with many of my peers and new acquaintances at the Microsoft BI Conference last week, this phrase kept running through my mind as people spoke about their plans in their respective businesses, the benefits and opportunities that the recent releases in the BI stack are providing, and their expectations about the future of the BI stack. Notwithstanding some jabs here and there to point out the platform is neither perfect now nor will be anytime soon (along with admissions that the competitors are also not perfect), and notwithstanding several missteps by the event organizers (which I don't care to enumerate), the overarching mood at the conference was positive. It was a refreshing change from the doom and gloom hovering over several conferences that I attended in 2009. Although many people expect economic hardships to continue over the coming year or so, everyone I know in the BI field is busier than ever and expects to stay busy for quite a while. Self-Service BI Self-service was definitely a theme of the BI conference. In the keynote, Ted Kummert opened with a look back to a fairy tale vision of self-service BI that he told in 2008. At that time, the fairy tale future was a time when "every end user was able to use BI technologies within their job in order to move forward more effectively" and transitioned to the present time in which SQL Server 2008 R2, Office 2010, and SharePoint 2010 are available to deliver managed self-service BI. This set of technologies is presumably poised to address the needs of the 80% of users that Kummert said do not use BI today. He proceeded to outline a series of activities that users ought to be able to do themselves--from simple changes to a report like formatting or an addtional data visualization to integration of an additional data source. The keynote then continued with a series of demonstrations of both current and future technology in support of self-service BI. Some highlights that interested me: PowerPivot, of course, is the flagship product for self-service BI in the Microsoft BI stack. In the TechEd keynote, which was open to the BI conference attendees, Amir Netz (twitter) impressed the audience by demonstrating interactivity with a workbook containing 100 million rows. He upped the ante at the BI keynote with his demonstration of a future-state PowerPivot workbook containing over 2 billion records. It's important to note that this volume of data is being processed by a server engine, and not in the PowerPivot client engine. (Yes, I think it's impressive, but none of my clients are typically wrangling with 2 billion records at a time. Maybe they're thinking too small. This ability to work quickly with large data sets has greater implications for BI solutions than for self-service BI, in my opinion.) Amir also demonstrated KPIs for the future PowerPivot, which appeared to be easier to implement than in any other Microsoft product that supports KPIs, apart from simple KPIs in SharePoint. (My initial reaction is that we have one more place to build KPIs. Great. It's confusing enough. I haven't seen how well those KPIs integrate with other BI tools, which will be important for adoption.) One more PowerPivot feature that Amir showed was a graphical display of the lineage for calculations. 
(This is hugely practical, especially if you build up calculations incrementally. You can more easily follow the logic from calculation to calculation. Furthermore, if you need to make a change to one calculation, you can assess the impact on other calculations.) Another product demonstration will be available within the next 30 days--Pivot for Reporting Services. If you haven't seen this technology yet, check it out at www.getpivot.com. (It definitely has a wow factor, but I'm skeptical about its practicality. However, I'm looking forward to trying it out with data that I understand.) Michael Tejedor (twitter) demonstrated a feature that I think is really interesting and not emphasized nearly enough--overshadowed by PowerPivot, no doubt. That feature is the Microsoft Business Intelligence Indexing Connector, which enables search of the content of Excel workbooks and Reporting Services reports. (This capability existed in MOSS 2007, but was more cumbersome to implement. The search results in SharePoint 2010 are not only cooler, but more useful by describing whether the content is found in a table or a chart, for example.) This may yet be the dawning of the age of self-service BI - a phrase I've heard repeated from time to time over the last decade - but I think BI professionals are likely to stay busy for a long while, and need not start looking for a new line of work. Kummert repeatedly referenced strategic BI solutions in contrast to self-service BI to emphasize that self-service BI is not a replacement for the services that BI professionals provide. After all, self-service BI does not appear magically on user desktops (or whatever device they want to use). A supporting infrastructure is necessary, and grows in complexity in proportion to the need to simplify BI for users. It's one thing to hear the party line touted by Microsoft employees at the BI keynote, but it's another to hear from the people who are responsible for implementing and supporting it within an organization. Rob Collie (blog | twitter), Kasper de Jonge (blog | twitter), Vidas Matelis (site | twitter), and I were invited to join Andrew Brust (blog | twitter) as he led a Birds of a Feather session at TechEd entitled "PowerPivot: Is It the BI Deal-Changer for Developers and IT Pros?" I would single out the prevailing concern in this session as the issue of control. On one side of this issue were those who were concerned that they would lose control once PowerPivot is implemented. On the other side were those who believed that data should be freely accessible to users in PowerPivot, and even acknowledgment that users would get the data they want even if it meant they would have to manually enter into a workbook to have it ready for analysis. For another viewpoint on how PowerPivot played out at the conference, see Rob Collie's observations. Collaborative BI I have been intrigued by the notion of collaborative BI for a very long time. Before I discovered BI, I was a Lotus Notes developer and later a manager of developers, working in a software company that enabled collaboration in the legal industry. Not only did I help create collaborative systems for our clients, I created a complete project management from the ground up to collaboratively manage our custom development work. In that case, collaboration involved my team, my client contacts, and me. I was also able to produce my own BI from that system as well, but didn't know that's what I was doing at the time. 
Only in recent years has SharePoint begun to catch up with the capabilities that I had with Lotus Notes more than a decade ago. Eventually, I had the opportunity at that job to formally investigate BI as another product offering for our software, and the rest - as they say - is history. I built my first data warehouse with Scott Cameron (who has also ventured into the authoring world by writing Analysis Services 2008 Step by Step and was at the BI Conference last week where I got to reminisce with him for a bit) and that began a career that I never imagined at the time. Fast forward to 2010, and I'm still lauding the virtues of collaborative BI, if only the tools will catch up to my vision! Thus, I was anxious to see what Donald Farmer (blog | twitter) and Rita Sallam of Gartner had to say on the subject in their session "Collaborative Decision Making." As I suspected, the tools aren't quite there yet, but the vendors are moving in the right direction. One thing I liked about this session was a non-Microsoft perspective of the state of the industry with regard to collaborative BI. In addition, this session included a better demonstration of SharePoint collaborative BI capabilities than appeared in the BI keynote. Check out the video in the link to the session to see the demonstration. One of the use cases that was demonstrated was linking from information to a person, because, as Donald put it, "People don't trust data, they trust people." The Microsoft BI Stack in General A question I hear all the time from students when I'm teaching is how to know what tools to use when there is overlap between products in the BI stack. I've never taken the time to codify my thoughts on the subject, but saw that my friend Dan Bulos provided good insight on this topic from a variety of perspectives in his session, "So Many BI Tools, So Little Time." I thought one of his best points was that ideally you should be able to design in your tool of choice, and then deploy to your tool of choice. Unfortunately, the ideal is yet to become real across the platform. The closest we come is with the RDL in Reporting Services which can be produced from two different tools (Report Builder or Business Intelligence Development Studio's Report Designer), manually, or by a third-party or custom application. I have touted the idea for years (and publicly said so about 5 years ago) that eventually more products would be RDL producers or consumers, but we aren't there yet. Maybe in another 5 years. Another interesting session that covered the BI stack against a backdrop of competitive products was delivered by Andrew Brust. Andrew did a marvelous job of consolidating a lot of information in a way that clearly communicated how various vendors' offerings compared to the Microsoft BI stack. He also made a particularly compelling argument about how the existence of an ecosystem around the Microsoft BI stack provided innovation and opportunities lacking for other vendors. Check out his presentation, "How Does the Microsoft BI Stack...Stack Up?" Expo Hall I had planned to spend more time in the Expo Hall to see who was doing new things with the BI stack, but didn't manage to get very far. Each time I set out on an exploratory mission, I got caught up in some fascinating conversations with one or more of my peers. I find interacting with people that I meet at conferences just as important as attending sessions to learn something new. There were a couple of items that really caught me eye, however, that I'll share here. 
Pragmatic Works. Whether you develop SSIS packages, build SSAS cubes, or author SSRS reports (or all of the above), you really must take a look at BI Documenter. Brian Knight (twitter) walked me through the key features, and I must say I was impressed. Once you've seen what this product can do, you won't want to document your BI projects any other way. You can download a free single-user database edition, or choose from more feature-rich standard or professional editions. Microsoft Press ebooks. I also stopped by the O'Reilly Media booth to meet some folks that one of my acquisitions editors at Microsoft Press recommended. In case you haven't heard, Microsoft Press has partnered with O'Reilly Media for distribution and publishing. Apart from my interest in learning more about O'Reilly Media as an author, an advertisement in their booth caught me eye which I think is a really great move. When you buy Microsoft Press ebooks through the O'Reilly web site, you can receive it in any (or all) of the following formats where possible: PDF, epub, .mobi for Kindle and .apk for Android. You also have lifetime DRM-free access to the ebooks. As someone who is an avid collector of books, I fnd myself running out of room for storage. In addition, I travel a lot, and it's hard to lug my reference library with me. Today's e-reader options make the move to digital books a more viable way to grow my library. Having a variety of formats means I am not limited to a single device, and lifetime access means I don't have to worry about keeping track of where I've stored my files. Because the e-books are DRM-free, I can copy and paste when I'm compiling notes, and I can print pages when necessary. That's a winning combination in my mind! Overall, I was pleased with the BI conference. There were many more sessions that I couldn't attend, either because the room was full when I got there or there were multiple sessions running concurrently that I wanted to see. Fortunately, many of the sessions are accessible for viewing online at http://www.msteched.com/2010/NorthAmerica along with the TechEd sessions. You can spot the BI sessions by the yellow skyline on the title slide of the presentation as shown below. Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article

  • Xubuntu login hangs after Cancel Button click

    - by akester
    I'm running Xubuntu 12.04 (installed using the alternative installer) in VirtualBox 4.1.20. My issue is with the login screen (lightdm-gtk-greeter). It usually runs just fine and allows users to log in and out, but it will hang if the user presses the Cancel button. The interface is still working (i.e., the shutdown menu is still available and I can switch to a different tty), but the username or password field (depending on when the button is hit) is disabled. Restarting lightdm will reset the screen, but the problem still exists. The issue is only with the Cancel button; the login, session, and language buttons/menus, as well as the accessibility and shutdown menus, appear to work normally.

    I've modified some of the config files for lightdm-gtk-greeter, specifically /etc/lightdm/lightdm-gtk-greeter.conf to change the background image, and /etc/lightdm/lightdm.conf to disable the user list. I did not check whether the error existed before the changes took place. The changes have been restored to the default settings, but the problem persists. Here is the output of /var/log/lightdm/lightdm.log when the screen is hung:

        [+0.00s] DEBUG: Logging to /var/log/lightdm/lightdm.log
        [+0.00s] DEBUG: Starting Light Display Manager 1.2.1, UID=0 PID=2072
        [+0.00s] DEBUG: Loaded configuration from /etc/lightdm/lightdm.conf
        [+0.00s] DEBUG: Using D-Bus name org.freedesktop.DisplayManager
        [+0.00s] DEBUG: Registered seat module xlocal
        [+0.00s] DEBUG: Registered seat module xremote
        [+0.00s] DEBUG: Adding default seat
        [+0.00s] DEBUG: Starting seat
        [+0.00s] DEBUG: Starting new display for greeter
        [+0.00s] DEBUG: Starting local X display
        [+0.02s] DEBUG: Using VT 7
        [+0.02s] DEBUG: Activating VT 7
        [+0.03s] DEBUG: Logging to /var/log/lightdm/x-0.log
        [+0.04s] DEBUG: Writing X server authority to /var/run/lightdm/root/:0
        [+0.04s] DEBUG: Launching X Server
        [+0.05s] DEBUG: Launching process 2078: /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
        [+0.05s] DEBUG: Waiting for ready signal from X server :0
        [+0.05s] DEBUG: Acquired bus name org.freedesktop.DisplayManager
        [+0.05s] DEBUG: Registering seat with bus path /org/freedesktop/DisplayManager/Seat0
        [+0.28s] DEBUG: Got signal 10 from process 2078
        [+0.28s] DEBUG: Got signal from X server :0
        [+0.28s] DEBUG: Connecting to XServer :0
        [+0.29s] DEBUG: Starting greeter
        [+0.29s] DEBUG: Started session 2082 with service 'lightdm', username 'lightdm'
        [+0.36s] DEBUG: Session 2082 authentication complete with return value 0: Success
        [+0.36s] DEBUG: Greeter authorized
        [+0.36s] DEBUG: Logging to /var/log/lightdm/x-0-greeter.log
        [+0.36s] DEBUG: Session 2082 running command /usr/lib/lightdm/lightdm-greeter-session /usr/sbin/lightdm-gtk-greeter
        [+0.58s] DEBUG: Greeter connected version=1.2.1
        [+0.58s] DEBUG: Greeter connected, display is ready
        [+0.58s] DEBUG: New display ready, switching to it
        [+0.58s] DEBUG: Activating VT 7
        [+1.04s] DEBUG: Greeter start authentication for andrew
        [+1.04s] DEBUG: Started session 2137 with service 'lightdm', username 'andrew'
        [+1.09s] DEBUG: Session 2137 got 1 message(s) from PAM
        [+1.09s] DEBUG: Prompt greeter with 1 message(s)
        [+17.24s] DEBUG: Cancel authentication
        [+17.24s] DEBUG: Session 2137: Sending SIGTERM

    Read the article

  • Should I install software on a "SAN"

    - by am2605
    Hi, I need to set up ColdFusion 9 on an Ubuntu server that has a SAN disk mounted. Is it appropriate to install the CF server software on this disk? I don't really understand the ins and outs of what a SAN is, so I am not sure whether the intention is for me to install only web content on it, or whether the server software itself should go there too. Any advice would be extremely welcome. Many thanks, Andrew.

    Read the article

  • 403 Forbidden Error when trying to view localhost on Apache

    - by misbehavens
    I think my Apache must be all screwed up. I don't know if it ever worked. I just upgraded to Snow Leopard, and the first step on this tutorial is to start Apache and check that it's working by opening http://localhost. It starts fine, but when I go to localhost I get a 403 Forbidden error. I don't know where to start figuring out how to fix it, so I wonder if a fresh install of Apache would do the trick. What do you think?
    Update: I found some error logs in /private/var/log/apache2/. Found this in one of the logs. Not sure what it means:
      [Tue Nov 10 17:53:08 2009] [notice] caught SIGTERM, shutting down
      [Tue Nov 10 21:49:17 2009] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
      Warning: DocumentRoot [/usr/docs/dummy-host.example.com] does not exist
      Warning: DocumentRoot [/usr/docs/dummy-host2.example.com] does not exist
      httpd: Could not reliably determine the server's fully qualified domain name, using Andrews-Mac-Pro.local for ServerName
      mod_bonjour: Skipping user 'andrew' - cannot read index file '/Users/andrew/Sites/index.html'.
      [Tue Nov 10 21:49:19 2009] [notice] Digest: generating secret for digest authentication ...
      [Tue Nov 10 21:49:19 2009] [notice] Digest: done
      [Tue Nov 10 21:49:19 2009] [notice] Apache/2.2.11 (Unix) mod_ssl/2.2.11 OpenSSL/0.9.8k DAV/2 PHP/5.3.0 configured -- resuming normal operations
    Update: I also found something in the dummy-host.example.com-error_log file. I didn't set these dummy-host things, by the way. Is this the default configuration?
      [Tue Nov 10 21:59:57 2009] [error] [client ::1] client denied by server configuration: /usr/docs
    Update: Woohoo! I found the file that had the virtual host definitions. It was in /etc/apache2/extra/httpd-vhosts.conf. It had those two dummy virtual host settings in there. I added a localhost virtual host. Not sure if this is necessary, but since it wasn't working before, I decided to do it anyway. After removing the old virtual hosts, adding my new localhost virtual host, and restarting Apache, it seems to work. So I guess whenever I want to add a virtual host, I only need to add it to this file? Or is there a hosts file somewhere, like there is on Linux?
    Update: Yes, there is an /etc/hosts file that needs to be changed too. Add the virtual host name to that file.
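    The fix described in the updates boils down to a name-based virtual host plus a matching hosts entry. A minimal sketch, assuming Snow Leopard's stock Apache 2.2 layout and a hypothetical DocumentRoot under ~/Sites:

      # /etc/apache2/extra/httpd-vhosts.conf
      NameVirtualHost *:80

      <VirtualHost *:80>
          ServerName localhost
          DocumentRoot "/Users/andrew/Sites"
          <Directory "/Users/andrew/Sites">
              # Apache 2.2 syntax; without access granted here you get the same
              # 403 "client denied by server configuration" seen in the log above
              Order allow,deny
              Allow from all
          </Directory>
      </VirtualHost>

      # /etc/hosts -- map the name to the loopback address (localhost is
      # normally already present; custom vhost names must be added by hand)
      127.0.0.1   localhost

      # check syntax, then restart Apache to apply
      apachectl configtest
      sudo apachectl restart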

    Read the article

  • Where does $PATH get set in OS X 10.6 Snow Leopard?

    - by misbehavens
    I type echo $PATH on the command line and get:
      /opt/local/bin:/opt/local/sbin:/Users/andrew/bin:/usr/local/bin:/usr/local/mysql/bin:/usr/local/pear/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/opt/local/bin:/usr/local/git/bin
    I'm wondering where this is getting set, since my .bash_login file is empty. I'm particularly concerned that, after installing MacPorts, it installed a bunch of junk in /opt. I don't think that directory even exists in a normal Mac OS X install. Update: Thanks to jtimberman for correcting my echo $PATH statement.
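    For what it's worth, on Snow Leopard the base PATH is normally assembled by path_helper (invoked from /etc/profile) rather than by per-user dotfiles, and MacPorts adds its /opt/local entries via a line it appends to ~/.profile during installation. A quick way to trace where each piece comes from, assuming a stock 10.6 install:

      # the system-wide pieces path_helper stitches together
      cat /etc/paths              # one base directory per line
      ls /etc/paths.d             # extra fragments dropped in by installers
      /usr/libexec/path_helper -s # prints the PATH export it would emit

      # then check the usual per-user suspects for additions
      grep -n "PATH" ~/.profile ~/.bash_profile ~/.bashrc 2>/dev/null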

    Read the article

  • run growlnotify over ssh

    - by CCG121
    I am trying to run growlnotify over ssh, like so: ssh [email protected] "growlnotify -m test". When it runs, I get bash: growlnotify: command not found. However, running it straight from the Mac works fine. Am I missing something simple, or is there a really complex reason this won't work? P.S. ssh keys are enabled in both directions. Edit: I logged into the Mac from a remote machine over ssh and tried to run it, and it ran fine, so it seems to only affect the one-line login-and-run approach. Per DrC's suggestion, I also tried cat-ing the .profile into .bashrc.
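    This symptom usually comes down to the non-interactive shell ssh uses for one-line commands: it doesn't read ~/.profile, so any PATH additions made there never take effect. Two workarounds, sketched under the assumption that growlnotify lives in /usr/local/bin (confirm with `which growlnotify` in an interactive login first):

      # option 1: call the binary by its full path
      ssh user@mac.example "/usr/local/bin/growlnotify -m test"

      # option 2: load the login environment explicitly before running it
      ssh user@mac.example 'source ~/.profile; growlnotify -m test'

    (user@mac.example is a placeholder; the address in the question is redacted.)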

    Read the article

  • eSTEP Newsletter December 2012

    - by uwes
    Dear Partners,
    We would like to inform you that the December issue of our Newsletter is now available. The issue contains information on the following topics:
    Notes from Corporate: It's Earth Day - Every Day; Oracle SPARC Newsletter; Pre-Built Developer VMs (for Oracle VM VirtualBox); Oracle Database Appliance Now Certified by SAP; Database High Availability; Cultivating Business-Led Innovation
    Technical Corner: Geek Fest! Talking About the Design of the T4 and T5 SPARC Chips; Blog: Is This Your Idea of Disaster Recovery?; Oracle Practitioner Guide: A Pragmatic Approach to Cloud Adoption; Darren Moffat Explains the new ZFS Encryption Features in Solaris 11.1; Command Summary: Basic Operations with the Image Packaging System; SPARC T4 Server Delivers Outstanding Performance on Oracle Business Intelligence Enterprise Edition 11g; SPARC T4-4 Servers Set First World Record on PeopleSoft HCM 9.1 Benchmark; Sun ZFS Appliance Monitor Refresh: Core Factor Table; Remanufactured Systems Program for Sun Systems from Oracle; Reminder: Oracle Premier Support for Systems; Reminder: Oracle Platinum Services
    Learning & Events: eSTEP Events Schedule; Recently Delivered Techcasts; Webinar: Maximum Availability with Oracle GoldenGate
    References: LUKOIL Overseas Holding Optimizes Oil Field Development Projects with Integrated Project Management; United Networks Increases Accounting Flexibility and Boosts System Performance with ERP Applications Upgrade; Ziggo Rapidly Creates Applications That Accelerate Communications-Service Orders
    How to...: The Role of Oracle Solaris Zones and Oracle Linux Containers in a Virtualization Strategy; How to Update to Oracle Solaris 11.1; Using svcbundle to Create Manifests and Profiles in Oracle Solaris 11.1; How to Migrate Your Data to Oracle Solaris 11 Using Shadow Migration; How to Script Oracle Solaris 11.1 Zones for Easy Cloning; How to Script Oracle Solaris 11 Zones Creation for a Network-in-a-Box Configuration; How to Know Whether T4 Crypto Accelerators Are in Use; Fault Handling and Prevention - Part 1; Transforming and Consolidating Web Data with Oracle Database; Looking Under the Hood at Networking in Oracle VM Server for x86; Best Way to Migrate Data from Legacy File System to ZFS in Oracle Solaris 11
    Special Year End Article: The Top 10 Strategic CIO Issues For 2013
    You find the Newsletter on our portal under eSTEP News ---> Latest Newsletter. You will need to provide your email address and the PIN below to get access. The link to the portal is shown below.
    URL: http://launch.oracle.com/
    PIN: eSTEP_2011
    Previously published Newsletters can be found under the Archived Newsletters section, and more useful information under the Events, Download and Links tabs. Feel free to explore, and any feedback is appreciated to help us improve the service and information we deliver.
    Thanks and best regards,
    Partner HW Enablement EMEA

    Read the article

  • Oracle for PCI-DSS Security Webcast

    - by Alex Blyth
    Thanks to everyone who attended the Oracle for PCI-DSS security webcast today. It was good to see how the products we talked about last week can be used to address the PCI standard requirements. A big thanks to Chris Pickett for presenting a great session and running us through a very cool demo showing how the data is protected throughout its life. The replay of the session can be downloaded here. Slides can be downloaded here. Oracle for PCI-DSS Security Compliance: view more presentations from Oracle Australia. Next week we resume our regular schedule with Andrew Clarke taking us through Oracle Application Express (APEX), one of the best kept secrets in the Oracle Database. Enroll for this session here (and now :) ). Till next week, cheers, Alex

    Read the article

  • Oracle: "The Cloud takes the best of the Mainframes" and fixes their flaws, provided it relies on open standards

    Oracle: "The Cloud takes the best of the Mainframes" and fixes their flaws, provided it relies on open standards. About seven years ago, Oracle began a strategic shift, aiming to simplify IT deployments and architectures. Today, the vendor of many hats (BI, BPM, hardware, databases, Java, etc.) is making a second shift: to the Cloud, and again under the banner of simplification. "The best Cloud will be completely transparent for its users," predicts Andrew Sutherland, the cordial (and Scottish) Senior Vice-President Fusion Middleware Europe, passing through Paris this morning. In other words, t...

    Read the article

  • Learn how Oracle storage efficiencies can help your budget

    - by jenny.gelhausen
    Mark Your Calendar! Live Webcast: Next Generation Storage Management Solutions. Wednesday, March 24th, 2010 at 9:00am PT (or your local time). Please plan to join us for this webcast, where Forrester senior analyst Andrew Reichman will discuss the pillars of storage efficiency, how to measure and improve it, and how this can help your business immediately alleviate budget pressures. Joining Mr. Reichman are Phil Stephenson, Senior Principal Product Manager at Oracle, and Matthew Baier, Oracle Product Director, who will explain the next generation storage capabilities available in Oracle Database 11g and Oracle Exadata. Register for this March 24th live webcast today!

    Read the article

  • Smarter, Faster, Cheaper: The Insurance Industry’s Dream

    - by Jenna Danko
    On June 3rd, I saw the Gaylord Resort Centre in Washington D.C. become the hub of C-level executives and managers of insurance carriers for the IASA 2013 Conference. Insurance Accounting/Regulation and Technology sessions took the focus, but there were plenty of tertiary sessions for career development, which complemented the overall strong networking side of the conference. As an exhibitor, Oracle, along with several hundred other product providers, welcomed the opportunity to display and demonstrate our solutions, and we were encouraged by the hustle and bustle of the exhibition floor. The IASA organizers had pre-arranged fast track tours whereby interested conference delegates could sign up for a series of like-themed presentations from vendors, giving them a level of 'speed dating' introductions to possible solutions and services. Oracle participated in a number of these, which were very well subscribed. Clearly, the conference had a strong business focus; however, attendees saw technology as a key enabler to get their processes done smarter, faster and cheaper. As we navigated through the exhibition, it became clear from the inquiries that came to us that insurance carriers are gravitating to a number of focus areas:
    Navigating the maze of upcoming regulatory reporting changes. For US carriers with European holdings, Solvency II carries a myriad of rules and reporting requirements. Alignment across the globe of the Own Risk and Solvency Assessment (ORSA) processes brings to the fore the National Association of Insurance Commissioners' (NAIC) recent guidance manual publication.
    Doing more with less, and expecting more from technology for fewer dollars. The overall cost of IT, in particular hardware, has dropped in real terms (though the appetite for more has risen: more CPU, more RAM, more storage), but software has seen less change. Clearly, customers expect either to pay less or to get a lot more from their software solutions for the same buck.
    Doing things smarter. A recognition that, with the pace of technological change, standing still means going backwards. Technology, and in particular the interaction of technology with human business processes, has undergone incredible change over the past 5 years. Consumer usage (iPhones, etc.) has been at the forefront, but ever more effective technology exploitation is now beginning to take place at the enterprise level. Data, and in particular the knowledge gleaned from data, is refining and improving business processes. Organizations are now consuming more data than ever before, and it is set to grow exponentially for some time to come. Amassing large volumes of data is one thing, but effectively analyzing that data is another. It is the results of such analysis that lead to improvements, both in terms of insurance product offerings and the processes that support them.
    Regulatory compliance: damned if you do and damned if you don't! Clearly, a lot is changing around the globe from a regulatory perspective, and while there is greater convergence across jurisdictions bringing uniformity, there is also a lot of work to be done in the next 5 years. Just as with big data, hidden behind effective regulatory compliance there often lie golden nuggets that can give competitive advantages.
    From Oracle's perspective, our Rating Engine, Billing, Document Management and Insurance Analytics solutions on display served to strike up good conversations and, as is always the case at conferences, it was a great opportunity to meet and speak with existing Oracle customers that we might not otherwise have caught up with for a while. Fortunately, I was able to catch a few sessions at the close of the exhibition. The speaker quality was high and the audience asked challenging, but pertinent, questions. During Dr. Jackie Freiberg's keynote "Bye Bye Business as Usual," the author discussed 8 strategies to help leaders create a culture where teams consistently deliver innovative ideas by disrupting the status quo. The very first strategy: get wired for innovation. Freiberg admitted that folks in the insurance and financial services industry understand and know innovation is important, but oftentimes they are slow adopters. Today, technology and innovation go hand in hand. Speaking with delegates during and after the conference, I heard a high degree of satisfaction in their positive comments about the speaker sessions and the exhibitors. I suspect many will be back in 2014, with Indianapolis as the conference location. Did you attend the IASA Conference in Washington D.C.? If so, I would love to hear your comments. Andrew Collins is the Director, Solvency II of Oracle Financial Services. He can be reached at andrew.collins AT oracle.com.

    Read the article

  • It's Official, I'm a Geek

    - by andyleonard
    I'm honored to join Glen Gordon (Blog - @glengordon) and G. Andrew Duthie (Blog - @devhammer) today at 3:00 PM EDT for an MSDN Webcast entitled GeekSpeak: Inside SQL Server Integration Services (SSIS). This is a LiveMeeting, and you can join in the fun as an attendee here. It's a live show, so bring your questions! :{> Andy

    Read the article

  • Speaking at SQLRelay. Will you be there?

    - by jamiet
    SQL Relay (#sqlrelay) is fast approaching, and I wanted to take this opportunity to tell you a little about it.
    SQL Relay is a 5-day tour around the UK taking in five SQL Server user groups, each one comprising a full day of SQL Server related learning. The dates and venues are:
    21st May, Edinburgh
    22nd May, Manchester
    23rd May, Birmingham
    24th May, Bristol
    30th May, London
    Click on the appropriate link to see the full agenda and to book your spot.
    SQL Relay features some of this country's most prominent SQL Server speakers, including Chris Webb, Tony Rogerson, Andrew Fryer, Martin Bell, Allan Mitchell, Steve Shaw, Gordon Meyer, Satya Jayanty, Chris Testa O'Neill, Duncan Sutcliffe, Rob Carrol, me, and SQL Server UK Product Manager Morris Novello, so I really encourage you to go - you have my word it'll be an informative and, more importantly, enjoyable day out from your regular 9-to-5.
    I am presenting my session "A Lap Around the SSIS Catalog" at Edinburgh and Manchester, so if you're going, I hope to see you there.
    @Jamiet

    Read the article
