Search Results

  • Troubleshoot Perl module installation on Mac OS X

    - by Daniel Standage
    I'm trying to install the Perl module Set::IntervalTree on Mac OS X. I installed it today on an Ubuntu box with no problem: I simply started cpan, entered install Set::IntervalTree, and it all worked out. However, the installation failed on Mac OS X: it spits out a huge list of compiler errors (below). How would I troubleshoot this? I don't even know where to begin.
    cpan[1]> install Set::IntervalTree CPAN: Storable loaded ok (v2.18) Going to read /Users/standage/.cpan/Metadata Database was generated on Fri, 14 Jan 2011 02:58:42 GMT CPAN: YAML loaded ok (v0.72) Going to read /Users/standage/.cpan/build/ ............................................................................DONE Found 1 old build, restored the state of 1 Running install for module 'Set::IntervalTree' Running make for B/BE/BENBOOTH/Set-IntervalTree-0.01.tar.gz CPAN: Digest::SHA loaded ok (v5.45) CPAN: Compress::Zlib loaded ok (v2.008) Checksum for /Users/standage/.cpan/sources/authors/id/B/BE/BENBOOTH/Set-IntervalTree-0.01.tar.gz ok Scanning cache /Users/standage/.cpan/build for sizes ............................................................................DONE x Set-IntervalTree-0.01/ x Set-IntervalTree-0.01/src/ x Set-IntervalTree-0.01/src/Makefile x Set-IntervalTree-0.01/src/interval_tree.h x Set-IntervalTree-0.01/src/test_main.cc x Set-IntervalTree-0.01/lib/ x Set-IntervalTree-0.01/lib/Set/ x Set-IntervalTree-0.01/lib/Set/IntervalTree.pm x Set-IntervalTree-0.01/Changes x Set-IntervalTree-0.01/MANIFEST x Set-IntervalTree-0.01/t/ x Set-IntervalTree-0.01/t/Set-IntervalTree.t x Set-IntervalTree-0.01/typemap x Set-IntervalTree-0.01/perlobject.map x Set-IntervalTree-0.01/IntervalTree.xs x Set-IntervalTree-0.01/Makefile.PL x Set-IntervalTree-0.01/README x Set-IntervalTree-0.01/META.yml CPAN: File::Temp loaded ok (v0.18) CPAN.pm: Going to build B/BE/BENBOOTH/Set-IntervalTree-0.01.tar.gz Checking if your kit is complete... 
Looks good Writing Makefile for Set::IntervalTree cp lib/Set/IntervalTree.pm blib/lib/Set/IntervalTree.pm AutoSplitting blib/lib/Set/IntervalTree.pm (blib/lib/auto/Set/IntervalTree) /usr/bin/perl /System/Library/Perl/5.10.0/ExtUtils/xsubpp -C++ -typemap /System/Library/Perl/5.10.0/ExtUtils/typemap -typemap perlobject.map -typemap typemap IntervalTree.xs > IntervalTree.xsc && mv IntervalTree.xsc IntervalTree.c g++ -c -Isrc -arch x86_64 -arch i386 -arch ppc -g -pipe -fno-common -DPERL_DARWIN -fno-strict-aliasing -I/usr/local/include -g -O0 -DVERSION=\"0.01\" -DXS_VERSION=\"0.01\" "-I/System/Library/Perl/5.10.0/darwin-thread-multi-2level/CORE" -Isrc IntervalTree.c In file included from /usr/include/c++/4.2.1/bits/basic_ios.h:44, from /usr/include/c++/4.2.1/ios:50, from /usr/include/c++/4.2.1/ostream:45, from /usr/include/c++/4.2.1/iostream:45, from IntervalTree.xs:16: /usr/include/c++/4.2.1/bits/locale_facets.h:4420:40: error: macro "do_open" requires 7 arguments, but only 2 given /usr/include/c++/4.2.1/bits/locale_facets.h:4467:34: error: macro "do_close" requires 2 arguments, but only 1 given /usr/include/c++/4.2.1/bits/locale_facets.h:4486:55: error: macro "do_open" requires 7 arguments, but only 2 given /usr/include/c++/4.2.1/bits/locale_facets.h:4513:23: error: macro "do_close" requires 2 arguments, but only 1 given In file included from /usr/include/c++/4.2.1/bits/locale_facets.h:4599, from /usr/include/c++/4.2.1/bits/basic_ios.h:44, from /usr/include/c++/4.2.1/ios:50, from /usr/include/c++/4.2.1/ostream:45, from /usr/include/c++/4.2.1/iostream:45, from IntervalTree.xs:16: /usr/include/c++/4.2.1/i686-apple-darwin10/x86_64/bits/messages_members.h:58:38: error: macro "do_open" requires 7 arguments, but only 2 given /usr/include/c++/4.2.1/i686-apple-darwin10/x86_64/bits/messages_members.h:67:71: error: macro "do_open" requires 7 arguments, but only 2 given /usr/include/c++/4.2.1/i686-apple-darwin10/x86_64/bits/messages_members.h:78:39: error: macro "do_close" requires 2 arguments, but only 1 given In file included from /usr/include/c++/4.2.1/bits/basic_ios.h:44, from /usr/include/c++/4.2.1/ios:50, from /usr/include/c++/4.2.1/ostream:45, from /usr/include/c++/4.2.1/iostream:45, from IntervalTree.xs:16: /usr/include/c++/4.2.1/bits/locale_facets.h:4486: error: ‘do_open’ declared as a ‘virtual’ field /usr/include/c++/4.2.1/bits/locale_facets.h:4486: error: expected ‘;’ before ‘const’ /usr/include/c++/4.2.1/bits/locale_facets.h:4513: error: variable or field ‘do_close’ declared void /usr/include/c++/4.2.1/bits/locale_facets.h:4513: error: expected ‘;’ before ‘const’ In file included from /usr/include/c++/4.2.1/bits/locale_facets.h:4599, from /usr/include/c++/4.2.1/bits/basic_ios.h:44, from /usr/include/c++/4.2.1/ios:50, from /usr/include/c++/4.2.1/ostream:45, from /usr/include/c++/4.2.1/iostream:45, from IntervalTree.xs:16: /usr/include/c++/4.2.1/i686-apple-darwin10/x86_64/bits/messages_members.h:67: error: expected initializer before ‘const’ /usr/include/c++/4.2.1/i686-apple-darwin10/x86_64/bits/messages_members.h:78: error: expected initializer before ‘const’ In file included from IntervalTree.xs:19: src/interval_tree.h:95: error: type/value mismatch at argument 1 in template parameter list for ‘template<class _Tp, class _Alloc> class std::vector’ src/interval_tree.h:95: error: expected a type, got ‘IntervalTree<T,N>::it_recursion_node’ src/interval_tree.h:95: error: template argument 2 is invalid src/interval_tree.h: In constructor ‘IntervalTree<T, N>::IntervalTree()’: 
src/interval_tree.h:130: error: expected type-specifier src/interval_tree.h:130: error: expected `;' src/interval_tree.h:135: error: expected type-specifier src/interval_tree.h:135: error: expected `;' src/interval_tree.h:141: error: request for member ‘push_back’ in ‘((IntervalTree<T, N>*)this)->IntervalTree<T, N>::recursionNodeStack’, which is of non-class type ‘int’ src/interval_tree.h: In member function ‘void IntervalTree<T, N>::LeftRotate(IntervalTree<T, N>::Node*)’: src/interval_tree.h:178: error: ‘y’ was not declared in this scope src/interval_tree.h: In member function ‘void IntervalTree<T, N>::RightRotate(IntervalTree<T, N>::Node*)’: src/interval_tree.h:240: error: ‘x’ was not declared in this scope src/interval_tree.h: In member function ‘void IntervalTree<T, N>::TreeInsertHelp(IntervalTree<T, N>::Node*)’: src/interval_tree.h:298: error: ‘x’ was not declared in this scope src/interval_tree.h:299: error: ‘y’ was not declared in this scope src/interval_tree.h: In member function ‘typename IntervalTree<T, N>::Node* IntervalTree<T, N>::insert(const T&, N, N)’: src/interval_tree.h:375: error: ‘y’ was not declared in this scope src/interval_tree.h:376: error: ‘x’ was not declared in this scope src/interval_tree.h:377: error: ‘newNode’ was not declared in this scope src/interval_tree.h:379: error: expected type-specifier src/interval_tree.h:379: error: expected `;' src/interval_tree.h: In member function ‘typename IntervalTree<T, N>::Node* IntervalTree<T, N>::GetSuccessorOf(IntervalTree<T, N>::Node*) const’: src/interval_tree.h:450: error: ‘y’ was not declared in this scope src/interval_tree.h: In member function ‘typename IntervalTree<T, N>::Node* IntervalTree<T, N>::GetPredecessorOf(IntervalTree<T, N>::Node*) const’: src/interval_tree.h:483: error: ‘y’ was not declared in this scope src/interval_tree.h: In destructor ‘IntervalTree<T, N>::~IntervalTree()’: src/interval_tree.h:546: error: ‘x’ was not declared in this scope src/interval_tree.h:547: error: type/value mismatch at argument 1 in template parameter list for ‘template<class _Tp, class _Alloc> class std::vector’ src/interval_tree.h:547: error: expected a type, got ‘(IntervalTree<T,N>::Node * <expression error>)’ src/interval_tree.h:547: error: template argument 2 is invalid src/interval_tree.h:547: error: invalid type in declaration before ‘;’ token src/interval_tree.h:551: error: request for member ‘push_back’ in ‘stuffToFree’, which is of non-class type ‘int’ src/interval_tree.h:554: error: request for member ‘push_back’ in ‘stuffToFree’, which is of non-class type ‘int’ src/interval_tree.h:557: error: request for member ‘empty’ in ‘stuffToFree’, which is of non-class type ‘int’ src/interval_tree.h:558: error: request for member ‘back’ in ‘stuffToFree’, which is of non-class type ‘int’ src/interval_tree.h:559: error: request for member ‘pop_back’ in ‘stuffToFree’, which is of non-class type ‘int’ src/interval_tree.h:561: error: request for member ‘push_back’ in ‘stuffToFree’, which is of non-class type ‘int’ src/interval_tree.h:564: error: request for member ‘push_back’ in ‘stuffToFree’, which is of non-class type ‘int’ src/interval_tree.h: In member function ‘void IntervalTree<T, N>::DeleteFixUp(IntervalTree<T, N>::Node*)’: src/interval_tree.h:613: error: ‘w’ was not declared in this scope src/interval_tree.h:614: error: ‘rootLeft’ was not declared in this scope src/interval_tree.h: In member function ‘T IntervalTree<T, N>::remove(IntervalTree<T, N>::Node*)’: src/interval_tree.h:697: error: ‘y’ was not declared in this 
scope src/interval_tree.h:698: error: ‘x’ was not declared in this scope src/interval_tree.h: In member function ‘std::vector<T, std::allocator<_CharT> > IntervalTree<T, N>::fetch(N, N)’: src/interval_tree.h:819: error: ‘x’ was not declared in this scope src/interval_tree.h:833: error: invalid types ‘int[size_t]’ for array subscript src/interval_tree.h:836: error: request for member ‘push_back’ in ‘((IntervalTree<T, N>*)this)->IntervalTree<T, N>::recursionNodeStack’, which is of non-class type ‘int’ src/interval_tree.h:837: error: request for member ‘back’ in ‘((IntervalTree<T, N>*)this)->IntervalTree<T, N>::recursionNodeStack’, which is of non-class type ‘int’ src/interval_tree.h:838: error: request for member ‘back’ in ‘((IntervalTree<T, N>*)this)->IntervalTree<T, N>::recursionNodeStack’, which is of non-class type ‘int’ src/interval_tree.h:839: error: request for member ‘back’ in ‘((IntervalTree<T, N>*)this)->IntervalTree<T, N>::recursionNodeStack’, which is of non-class type ‘int’ src/interval_tree.h:840: error: request for member ‘size’ in ‘((IntervalTree<T, N>*)this)->IntervalTree<T, N>::recursionNodeStack’, which is of non-class type ‘int’ src/interval_tree.h:846: error: request for member ‘size’ in ‘((IntervalTree<T, N>*)this)->IntervalTree<T, N>::recursionNodeStack’, which is of non-class type ‘int’ src/interval_tree.h:847: error: expected `;' before ‘back’ src/interval_tree.h:848: error: request for member ‘pop_back’ in ‘((IntervalTree<T, N>*)this)->IntervalTree<T, N>::recursionNodeStack’, which is of non-class type ‘int’ src/interval_tree.h:850: error: ‘back’ was not declared in this scope src/interval_tree.h:853: error: invalid types ‘int[size_t]’ for array subscript IntervalTree.c: In function ‘void boot_Set__IntervalTree(PerlInterpreter*, CV*)’: IntervalTree.c:365: warning: deprecated conversion from string constant to ‘char*’ src/interval_tree.h: In constructor ‘IntervalTree<T, N>::IntervalTree() [with T = std::tr1::shared_ptr<sv>, N = long int]’: IntervalTree.c:67: instantiated from here src/interval_tree.h:130: error: cannot convert ‘int*’ to ‘IntervalTree<std::tr1::shared_ptr<sv>, long int>::Node*’ in assignment src/interval_tree.h:135: error: cannot convert ‘int*’ to ‘IntervalTree<std::tr1::shared_ptr<sv>, long int>::Node*’ in assignment ...blah blah blah... ...blah blah blah... ...blah blah blah... ...blah blah blah... ...blah blah blah... ...blah blah blah... 
src/interval_tree.h:848: error: request for member ‘pop_back’ in ‘((IntervalTree<T, N>*)this)->IntervalTree<T, N>::recursionNodeStack’, which is of non-class type ‘int’ src/interval_tree.h:850: error: ‘back’ was not declared in this scope src/interval_tree.h:853: error: invalid types ‘int[size_t]’ for array subscript IntervalTree.c: In function ‘void boot_Set__IntervalTree(PerlInterpreter*, CV*)’: IntervalTree.c:365: warning: deprecated conversion from string constant to ‘char*’ src/interval_tree.h: In constructor ‘IntervalTree<T, N>::IntervalTree() [with T = std::tr1::shared_ptr<sv>, N = long int]’: IntervalTree.c:67: instantiated from here src/interval_tree.h:130: error: cannot convert ‘int*’ to ‘IntervalTree<std::tr1::shared_ptr<sv>, long int>::Node*’ in assignment src/interval_tree.h:135: error: cannot convert ‘int*’ to ‘IntervalTree<std::tr1::shared_ptr<sv>, long int>::Node*’ in assignment src/interval_tree.h: In member function ‘typename IntervalTree<T, N>::Node* IntervalTree<T, N>::insert(const T&, N, N) [with T = std::tr1::shared_ptr<sv>, N = long int]’: IntervalTree.xs:57: instantiated from here src/interval_tree.h:375: error: dependent-name ‘IntervalTree<T,N>::Node’ is parsed as a non-type, but instantiation yields a type src/interval_tree.h:375: note: say ‘typename IntervalTree<T,N>::Node’ if a type is meant src/interval_tree.h:376: error: dependent-name ‘IntervalTree<T,N>::Node’ is parsed as a non-type, but instantiation yields a type src/interval_tree.h:376: note: say ‘typename IntervalTree<T,N>::Node’ if a type is meant src/interval_tree.h:377: error: dependent-name ‘IntervalTree<T,N>::Node’ is parsed as a non-type, but instantiation yields a type src/interval_tree.h:377: note: say ‘typename IntervalTree<T,N>::Node’ if a type is meant src/interval_tree.h: In member function ‘std::vector<T, std::allocator<_CharT> > IntervalTree<T, N>::fetch(N, N) [with T = std::tr1::shared_ptr<sv>, N = long int]’: IntervalTree.xs:65: instantiated from here src/interval_tree.h:819: error: dependent-name ‘IntervalTree<T,N>::Node’ is parsed as a non-type, but instantiation yields a type src/interval_tree.h:819: note: say ‘typename IntervalTree<T,N>::Node’ if a type is meant IntervalTree.xs:65: instantiated from here src/interval_tree.h:847: error: dependent-name ‘IntervalTree<T,N>::it_recursion_node’ is parsed as a non-type, but instantiation yields a type src/interval_tree.h:847: note: say ‘typename IntervalTree<T,N>::it_recursion_node’ if a type is meant src/interval_tree.h: In destructor ‘IntervalTree<T, N>::~IntervalTree() [with T = std::tr1::shared_ptr<sv>, N = long int]’: IntervalTree.c:205: instantiated from here src/interval_tree.h:546: error: dependent-name ‘IntervalTree<T,N>::Node’ is parsed as a non-type, but instantiation yields a type src/interval_tree.h:546: note: say ‘typename IntervalTree<T,N>::Node’ if a type is meant src/interval_tree.h: In member function ‘void IntervalTree<T, N>::TreeInsertHelp(IntervalTree<T, N>::Node*) [with T = std::tr1::shared_ptr<sv>, N = long int]’: src/interval_tree.h:380: instantiated from ‘typename IntervalTree<T, N>::Node* IntervalTree<T, N>::insert(const T&, N, N) [with T = std::tr1::shared_ptr<sv>, N = long int]’ IntervalTree.xs:57: instantiated from here src/interval_tree.h:298: error: dependent-name ‘IntervalTree<T,N>::Node’ is parsed as a non-type, but instantiation yields a type src/interval_tree.h:298: note: say ‘typename IntervalTree<T,N>::Node’ if a type is meant src/interval_tree.h:299: error: dependent-name ‘IntervalTree<T,N>::Node’ is 
parsed as a non-type, but instantiation yields a type src/interval_tree.h:299: note: say ‘typename IntervalTree<T,N>::Node’ if a type is meant src/interval_tree.h: In member function ‘void IntervalTree<T, N>::LeftRotate(IntervalTree<T, N>::Node*) [with T = std::tr1::shared_ptr<sv>, N = long int]’: src/interval_tree.h:395: instantiated from ‘typename IntervalTree<T, N>::Node* IntervalTree<T, N>::insert(const T&, N, N) [with T = std::tr1::shared_ptr<sv>, N = long int]’ IntervalTree.xs:57: instantiated from here src/interval_tree.h:178: error: dependent-name ‘IntervalTree<T,N>::Node’ is parsed as a non-type, but instantiation yields a type src/interval_tree.h:178: note: say ‘typename IntervalTree<T,N>::Node’ if a type is meant src/interval_tree.h: In member function ‘void IntervalTree<T, N>::RightRotate(IntervalTree<T, N>::Node*) [with T = std::tr1::shared_ptr<sv>, N = long int]’: src/interval_tree.h:399: instantiated from ‘typename IntervalTree<T, N>::Node* IntervalTree<T, N>::insert(const T&, N, N) [with T = std::tr1::shared_ptr<sv>, N = long int]’ IntervalTree.xs:57: instantiated from here src/interval_tree.h:240: error: dependent-name ‘IntervalTree<T,N>::Node’ is parsed as a non-type, but instantiation yields a type src/interval_tree.h:240: note: say ‘typename IntervalTree<T,N>::Node’ if a type is meant lipo: can't open input file: /var/tmp//ccLthuaw.out (No such file or directory) make: *** [IntervalTree.o] Error 1 BENBOOTH/Set-IntervalTree-0.01.tar.gz make -- NOT OK Running make test Can't test without successful make Running make install Make had returned bad status, install seems impossible Failed during this command: BENBOOTH/Set-IntervalTree-0.01.tar.gz : make NO
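    One plausible root cause, judging from the first errors above: perl.h defines do_open and do_close as macros (do_open takes seven arguments), and those macros collide with the C++ <locale>/<iostream> headers that IntervalTree.xs includes at line 16; the lipo failure at the end looks like secondary fallout. A workaround sketch (an assumption to verify, not an official fix) is to #undef the two macros in IntervalTree.xs right after the Perl headers and rebuild from the extracted CPAN build directory:

      # Sketch only: paths come from the log above; inspect the file before editing.
      cd ~/.cpan/build/Set-IntervalTree-0.01*
      # Insert '#undef do_open' and '#undef do_close' just before the C++ include
      perl -0pi -e 's/(#include <iostream>)/#undef do_open\n#undef do_close\n$1/' IntervalTree.xs
      perl Makefile.PL && make && make test

    If that compiles cleanly, running make install from the same directory finishes what cpan started.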

    Read the article

  • Oracle Internet Directory 11gR1 11.1.1.6 Certified with E-Business Suite

    - by Elke Phelps (Oracle Development)
    Oracle E-Business Suite comes with native user authentication and management capabilities out-of-the-box. If you need more-advanced features, it's also possible to integrate it with Oracle Internet Directory and Oracle Single Sign-On or Oracle Access Manager, which allows you to link the E-Business Suite with third-party tools like Microsoft Active Directory, Windows Kerberos, and CA Netegrity SiteMinder. For details about third-party integration architectures, see either of these articles for EBS 11i and 12:
    In-Depth: Using Third-Party Identity Managers with E-Business Suite Release 12
    In-Depth: Using Third-Party Identity Managers with the E-Business Suite Release 11i
    Oracle Internet Directory 11.1.1.6 is now certified with Oracle E-Business Suite Release 11i, 12.0 and 12.1. OID 11.1.1.6 is part of Oracle Fusion Middleware 11g Release 1 Version 11.1.1.6.0, also known as FMW 11g Patchset 5. Certified E-Business Suite releases are:
    EBS Release 11i 11.5.10.2 + ATG PF.H RUP 7 and higher
    EBS Release 12.0.6 and higher
    EBS Release 12.1.1 and higher
    Supported Configurations
    Oracle Internet Directory 11.1.1.6.0 can be integrated with two single sign-on solutions for EBS environments:
    Oracle Internet Directory and Directory Integration Platform from Fusion Middleware 11gR1 Patchset 5 (11.1.1.6.0) with Oracle Access Manager 10g (10.1.4.3) with an existing Oracle E-Business Suite system (Release 11i or 12.1.x).
    Oracle Internet Directory and Directory Integration Platform from Fusion Middleware 11gR1 Patchset 5 (11.1.1.6.0) with Oracle Access Manager 11gR1 (11.1.1.5) with an existing Oracle E-Business Suite system (Release 12.0.6 or higher, or 12.1.x).
    Oracle Internet Directory (OID) and Directory Integration Platform (DIP) from Oracle Fusion Middleware 11gR1 Patchset 5 (11.1.1.6.0) with Oracle Single Sign-On Server and Oracle Delegated Administration Services Release 10g (10.1.4.3.0) with an existing Oracle E-Business Suite system (Release 11i, 12.0.6 or 12.1.x).
    Oracle Access Manager strongly recommended
    Oracle has two single sign-on solutions: Oracle Single Sign-On Server (OSSO) and Oracle Access Manager (OAM). Oracle strongly recommends that all new single sign-on implementations use Oracle Access Manager. Oracle Access Manager is the preferred solution going forward, and forms the basis of Oracle Fusion Middleware 11g. OSSO is no longer being actively developed and will not be ported to Oracle WebLogic Server.
    Platform certifications
    Oracle Internet Directory is certified to run on any operating system for which Oracle WebLogic Server 11g is certified. Refer to the Oracle Fusion Middleware 11g System Requirements for more details. For information on operating systems supported by Oracle Internet Directory and its components, refer to the Oracle Identity and Access Management 11gR1 certification matrix. Integration with Oracle Internet Directory involves components spanning several different suites of Oracle products. There are no restrictions on the platform on which any particular component may be installed, so long as the platform is supported for that component.
    References
    Overview of Single Sign-On Integration Options for Oracle E-Business Suite (Note 1388152.1)
    Using the Latest Oracle Internet Directory 11gR1 Patchset with Oracle Single Sign-on and Oracle E-Business Suite (Note 876539.1)
    Integrating Oracle E-Business Suite with Oracle Access Manager 11g using Oracle E-Business Suite AccessGate (Note 1309013.1)
    Integrating Oracle E-Business Suite with Oracle Access Manager 10g using Oracle E-Business Suite AccessGate (Note 975182.1)
    Migrating Oracle Single Sign-On 10gR3 to Oracle Access Manager 11g with Oracle E-Business Suite (Note 1304550.1)
    Oracle Fusion Middleware Download, Installation & Configuration Readme
    Oracle Fusion Middleware Installation Guide for Oracle Identity Management 11g Release 1 (11.1.1) (Part Number E12002-09)
    Oracle Fusion Middleware Upgrade Guide for Oracle Identity Management 11g Release 1 (11.1.1) (Part Number E10129-09)
    Oracle Fusion Middleware Upgrade Planning Guide 11g Release 1 (11.1.1) (Part Number E10125-06)
    Oracle Fusion Middleware Patching Guide 11g Release 1 (11.1.1) (Part Number E16793-12)
    Related Articles
    Understanding Options for Integrating Oracle Access Manager with E-Business Suite
    In-Depth: Using Third-Party Identity Managers with E-Business Suite Release 12
    In-Depth: Using Third-Party Identity Managers with the E-Business Suite Release 11i
    Oracle Access Manager 10gR3 Certified with E-Business Suite
    Portal 11.1.1.4 Certified with E-Business Suite
    Discoverer 11.1.1.4 Certified with E-Business Suite

    Read the article

  • cannot make ubuntu 64-bit v12.04 install work

    - by honestann
    I decided it was time to update my ubuntu (single boot) computer from 64-bit v10.04 to 64-bit v12.04. Unfortunately, for some reason (or reasons) I just can't make it work. Note that I am attempting a fresh install of 64-bit v12.04 onto a new 3TB hard disk, not an upgrade of the 1TB hard disk that contains my working 64-bit v10.04 installation. To perform the attempted install of v12.04 I unplug the SATA cable from the 1TB drive and plug it into the 3TB drive (to avoid risking damage to my working v10.04 installation). I downloaded the ubuntu 64-bit v12.04 install DVD ISO file (~1.6 GB) from the ubuntu releases webpage and burned it onto a DVD. I have downloaded the DVD ISO file 3 times and burned 3 of these installation DVDs (twice with v10.04 and once with my winxp64 system), but none of them work. I run the "check disk" on the DVDs at the beginning of the installation process to assure the DVD is valid. When installation completes and the system boots the 3TB drive, it reports "unknown filesystem". After installation on the 250GB drives, the system boots up fine. During every install I plug the same SATA cable (sda) into only one disk drive (the 3TB or one of the 250GB drives) and leave the other disk drives unconnected (for simplicity). It is my understanding that 64-bit ubuntu (and 64-bit linux in general) has no problem with 3TB disk drives. In the BIOS I have tried having EFI set to "enabled" and "auto" with no apparent difference (no success). I never bothered setting the BIOS to "non-EFI". I have tried partitioning the drive in a few ways to see if that makes a difference, but so far it has not mattered. Typically I manually create partitions something like this:
    8GB /boot ext4
    8GB swap
    3TB / ext4
    But I've also tried the following, just in case it matters:
    8GB boot efi
    8GB swap
    8GB /boot ext4
    3TB / ext4
    Note: In the partition dialog I specify bootup on the same drive I am partitioning and installing ubuntu v12.04 onto. It is a VERY DANGEROUS FACT that the default for this always comes up with the wrong drive (some other drive, generally the external drive). Unless I'm stupid or misunderstanding something, this is very wrong and very dangerous default behavior. Note: If I connect the SATA cable to the 1TB drive that has been my ubuntu 64-bit v10.04 system drive for the past 2 years, it boots up and runs fine. I guess there must be a log file somewhere, and maybe it gives some hints as to what the problem is. I should be able to boot off the 1TB drive with the 3TB drive connected as a secondary (non-boot) drive and get the log file, assuming there is one and someone tells me the name (and where to find it if the name is very generic). After installation on the 3TB drive completes and the system reboots, the following prints out on a black screen:
    Loading Operating System ...
    Boot from CD/DVD :
    Boot from CD/DVD : error: unknown filesystem
    grub rescue>
    Note: I have two DVD burners in the system, hence the duplicate line above. Note: I install and boot 64-bit ubuntu v12.04 on both of my 250GB drives in this same system, but still cannot make the 3TB drive boot. Sigh. Any ideas?
    ==========
    motherboard == gigabyte 990FXA-UD7
    CPU == AMD FX-8150 8-core bulldozer @ 3.6 GHz
    RAM == 8GB of DDR3 in 2 sticks (matched pair)
    HDD == seagate 3TB SATA3 @ 7200 rpm (new install 64-bit v12.04 FAILS)
    HDD == seagate 1TB SATA3 @ 7200 rpm (64-bit v10.04 WORKS for two years)
    HDD == seagate 250GB SATA2 @ 7200 rpm (new install 64-bit v12.04 WORKS)
    HDD == seagate 250GB SATA2 @ 7200 rpm (new install 64-bit v12.04 WORKS)
    GPU == nvidia GTX-285
    ??? == no overclocking or other funky business
    USB == external seagate 2TB HDD for making backups
    DVD == one bluray burner (SATA)
    DVD == one DVD burner (SATA)
    64-bit ubuntu v10.04 has booted and run fine on the seagate 1TB drive for 2 years.
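    One hedged reading of the symptoms above: a disk larger than 2TiB must use a GPT partition table, and when GRUB boots a GPT disk from BIOS (non-UEFI) firmware it needs a small unformatted partition flagged bios_grub to embed its core image; without one, the reboot can land exactly at "error: unknown filesystem ... grub rescue>". A sketch of such a layout, assuming the 3TB disk is /dev/sda (destructive, so double-check the device name first):

      # GPT label plus a 2MiB bios_grub partition ahead of /boot, swap and /
      sudo parted /dev/sda --script \
        mklabel gpt \
        mkpart grub 1MiB 3MiB \
        set 1 bios_grub on \
        mkpart boot ext4 3MiB 8GiB \
        mkpart swap linux-swap 8GiB 16GiB \
        mkpart root ext4 16GiB 100%

    In the installer's manual partitioner, reuse partition 2 as /boot, 3 as swap, and 4 as /, and install the boot loader to /dev/sda itself.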

    Read the article

  • Calling XAI Inbound Services from Oracle BI Publisher

    - by ACShorten
    Note: This technique requires Oracle BI Publisher 10.1.3.4.1, which supports Service Complex Types. Web Services require credentials for authentication. Note: The defaults for the product installation are used in this article. If your site uses alternative values, then substitute those alternatives where applicable. Note: Examples shown in this article are for illustrative purposes only.
    When building a report in Oracle BI Publisher it may be necessary to call an XAI Inbound Service to get information via the object, rather than directly calling the database tables, for various reasons:
    The CLOB fields used in the object are accessible for a report. Note: CLOB fields cannot be used as criteria in the current release.
    Objects can take advantage of algorithms to format or calculate additional data that is not stored in the database directly. For example, information format strings can be automatically generated by the object, which gives consistent information between a report and the online screens.
    To use this facility the following process must be performed:
    Ensure that the product group, cisusers by default, is enabled for the SPLServiceBean in the console. This allows BI Publisher access to call Web Services directly. To ensure this, follow the instructions below:
    Log on to the Oracle WebLogic server console using an appropriate administrator account. By default the user system or weblogic is provided for this purpose.
    Navigate to the Security Realms section and select your configured realm. This is set to myrealm by default.
    In the Roles and Policies section, expand the SPLService section of the Deployments option to reveal the SPLServiceBean roles.
    If there is no role associated with the SPLServiceBean, create a new EJB role and specify the cisusers role, by default.
    Add a Role Condition to the role just created, with a Predicate List of Group, and specify cisusers as the Group Argument Name.
    Save all your changes.
    The XAI Inbound Services to be used by BI Publisher must be defined prior to using the interface. Refer to the XAI Best Practices (Doc Id: 942074.1) from My Oracle Support or the online help for more information about this process.
    Inside BI Publisher create your report, according to the BI Publisher documentation. When specifying the dataset, under the Data Model Report option, specify the following to use an XAI Inbound Service as a data source:
    Type - Web Service
    Complex Type - true
    Username - Any valid user name within the product. This user MUST have security access to the objects referenced in the XAI Inbound Service.
    Password - Authentication password for Username.
    Timeout - Timeout, in seconds, set for the Web Service call. For example, 60 seconds.
    WSDL URL - Use the WSDL URL on the XAI Inbound Service definition as your WSDL URL. It will be in the following format by default: http://<host>:<port>/<server>/XAIApp/xaiserver/<service>?WSDL, where <host> is the host name of the Web Application Server, <port> is the port allocated to the Web Application Server for product access, <server> is the server context for the server, and <service> is the XAI Inbound Service name. Note: Customers using secure transmission should substitute https instead of http and use the HTTPS port allocated to the product at installation time.
    Web Service - Select the name of the service that shows in the drop-down menu. If no service name shows up, it means that Publisher could not establish a connection with the server or the WSDL name provided in the above URL in order to get the service name. See the BI Publisher server log for more information.
    Method - Select the name of the Method that shows in the drop-down menu. A method name should show in the Method drop-down menu once the Web Service name is selected.
    Additionally, filters from the Web Service, generated from the WSDL as required or optional parameters, can be used in the Parameter List.
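    As a quick sanity check of the WSDL URL format described above, the WSDL can be fetched outside of BI Publisher first. The host, port, server context, and service name below are hypothetical placeholders, not values taken from this article:

      # Template: http://<host>:<port>/<server>/XAIApp/xaiserver/<service>?WSDL
      curl "http://ouafhost:6500/ouaf/XAIApp/xaiserver/CM-GetAccount?WSDL"
      # Sites using secure transmission substitute https and the HTTPS port:
      curl "https://ouafhost:6501/ouaf/XAIApp/xaiserver/CM-GetAccount?WSDL"

    If the WSDL comes back here but the Web Service drop-down stays empty, the problem is on the BI Publisher side rather than in the URL.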

    Read the article

  • Solaris 11.1: Encrypted Immutable Zones on (ZFS) Shared Storage

    - by darrenm
    Solaris 11 brought both ZFS encryption and the Immutable Zones feature, and I've talked about the combination in the past. Solaris 11.1 adds a fully supported method of storing zones in their own ZFS pool using shared storage, so let's update things a little and put all three parts together. When using an iSCSI (or other supported shared storage) target for a Zone, we can either let the Zones framework set up the ZFS pool or we can do it manually beforehand and tell the Zones framework to use the one we made earlier. To enable encryption we have to take the second path, so that we can set up the pool with encryption before we start to install the zones on it. We start by configuring the zone and specifying a rootzpool resource:
    # zonecfg -z eizoss
    Use 'create' to begin configuring a new zone.
    zonecfg:eizoss> create
    create: Using system default template 'SYSdefault'
    zonecfg:eizoss> set zonepath=/zones/eizoss
    zonecfg:eizoss> set file-mac-profile=fixed-configuration
    zonecfg:eizoss> add rootzpool
    zonecfg:eizoss:rootzpool> add storage \
      iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001
    zonecfg:eizoss:rootzpool> end
    zonecfg:eizoss> verify
    zonecfg:eizoss> commit
    zonecfg:eizoss>
    Now let's create the pool and specify encryption:
    # suriadm map \
      iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001
    PROPERTY    VALUE
    mapped-dev  /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0
    # echo "zfscrypto" > /zones/p
    # zpool create -O encryption=on -O keysource=passphrase,file:///zones/p eizoss \
      /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0
    # zpool export eizoss
    Note that the keysource above is just for this example; realistically you should probably use Oracle Key Manager or some other, better key storage, but that isn't the purpose of this example. Note, however, that it does need to be one of file://, https://, or pkcs11:, and not prompt, for the key location. Also note that we exported the newly created pool. The name we used here doesn't actually matter because it will get set properly on import anyway. So let's go ahead and do our install:
    zoneadm -z eizoss install -x force-zpool-import
    Configured zone storage resource(s) from:
      iscsi://zs7120-tvp540-c.uk.oracle.com/luname.naa.600144f09acaacd20000508e64a70001
    Imported zone zpool: eizoss_rpool
    Progress being logged to /var/log/zones/zoneadm.20121029T115231Z.eizoss.install
    Image: Preparing at /zones/eizoss/root.
    AI Manifest: /tmp/manifest.xml.ujaq54
    SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
    Zonename: eizoss
    Installation: Starting ...
    Creating IPS image
    Startup linked: 1/1 done
    Installing packages from:
      solaris
        origin: http://pkg.us.oracle.com/solaris/release/
    Please review the licenses for the following packages post-install:
      consolidation/osnet/osnet-incorporation (automatically accepted, not displayed)
    Package licenses may be viewed using the command: pkg info --license <pkg_fmri>
    DOWNLOAD    PKGS     FILES        XFER (MB)    SPEED
    Completed   187/187  33575/33575  227.0/227.0  384k/s
    PHASE                            ITEMS
    Installing new actions           47449/47449
    Updating package state database  Done
    Updating image state             Done
    Creating fast lookup database    Done
    Installation: Succeeded
    Note: Man pages can be obtained by installing pkg:/system/manual
    done.
    Done: Installation completed in 929.606 seconds.
    Next Steps: Boot the zone, then log into the zone console (zlogin -C) to complete the configuration process.
    Log saved in non-global zone as /zones/eizoss/root/var/log/zones/zoneadm.20121029T115231Z.eizoss.install
    That was really all we had to do; when the install is done, boot it up as normal. The zone administrator has no direct access to the ZFS wrapping keys used for the encrypted pool the zone is stored on. Due to how inheritance works in ZFS, he can still create new encrypted datasets that use those wrapping keys (without them ever being inside a process in the zone), or he can create encrypted datasets inside the zone that use keys of his own choosing. The output below shows the two cases: rpool is inheriting the key material from the global zone (note we can see the value of the keysource property, but we don't use it inside the zone, nor does that path need to be, or is it, accessible inside the zone), whereas rpool/export/home/bob has set keysource locally.
    # zfs get encryption,keysource rpool rpool/export/home/bob
    NAME                   PROPERTY    VALUE                       SOURCE
    rpool                  encryption  on                          inherited from $globalzone
    rpool                  keysource   passphrase,file:///zones/p  inherited from $globalzone
    rpool/export/home/bob  encryption  on                          local
    rpool/export/home/bob  keysource   passphrase,prompt           local
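    For completeness, a minimal sketch of the second case above: the zone administrator creating a dataset keyed with a passphrase of his own choosing from inside the zone (the dataset name mirrors the output above; the passphrase is prompted for at creation time):

      # Inside the zone: a locally keyed encrypted dataset
      zfs create -o encryption=on -o keysource=passphrase,prompt rpool/export/home/bob
      # Confirm the key is local rather than inherited from the global zone
      zfs get encryption,keysource rpool/export/home/bob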

    Read the article

  • Processing Email in Outlook

    - by Daniel Moth
    A. Why
    Goal 1 = Help others: Have at most a 24-hour response turnaround to internal (from colleague) emails, typically achieving same day response.
    Goal 2 = Help projects: Not to implicitly pass/miss an opportunity to have impact on electronic discussions around any project on the radar.
    Not achieving goals 1 & 2 = Colleagues stop relying on you, drop you off conversations, don't see you as a contributing resource or someone that cares, and you are perceived as someone with no peripheral vision. Note this is perfect if all you are doing is cruising at your job, trying to fly under the radar, with no ambitions of having impact beyond your absolute minimum 'day job'.
    B. DON'T: Leave unread email lurking around
    Don't: Receive or process all incoming emails in a single folder ('inbox' or 'unread mail'). This is actually possible if you receive a small number of emails (e.g. new to the job, not working at a company like Microsoft). Even so, with (your future) success at any level (company, community) comes large incoming email, so learn to deal with it. With large volumes, it is best to let the system help you by doing some categorization and filtering on your behalf (instead of trying to do that in your head as you process the single folder). See the later section on how to achieve this.
    Don't: Leave emails as 'unread' (or worse: read them, then mark them as unread). Often done by individuals who think they possess super powers ("I can mentally cache and distinguish between the emails I chose not to read, the ones that are actually new, and the ones I decided to revisit in the future; the fact that they all show up the same (bold = unread) does not confuse me"). Interactions with these super-powered individuals typically end up with them saying stuff like "I must have missed that email you are talking about (from 2 weeks ago)" or "I am a bit behind, so I haven't read your email, can you remind me". TIP: The only place where you are "allowed" unread email is in your Deleted Items folder.
    Don't: Interpret a read email as an email that has been processed. Doing that means you will always end up with fake unread email (that you have actually read, but haven't dealt with completely, so you then marked it as unread) lurking between actual unread email. Another side effect is reading the email and making a 'mental' note to action it, then leaving the email as read, so the only thing left to remind you to carry out the action is… you. You are not super human, you will forget. This is a key distinction. Reading (or even scanning) a new email means you now know what needs to be done with it in order for it to be truly considered processed. Truly processing an email is to, for example, write an email of your own (e.g. to reply or forward), or take a non-email related action (e.g. create a calendar entry, do something on some website), or read it carefully to gain some knowledge (e.g. it had a spec as an attachment), or keep it around as reference, etc. 'Reading' means that you know what to do, not that you have done it. An email that is read is an email that is triaged, not an email that is resolved. Sometimes the thing that needs to be done based on receiving the email you can (and want to) do immediately after reading the email. That is fine: you read the email and you processed it (typically when it takes no longer than X minutes, where X is your personal tolerance – mine is roughly 2 minutes). 
    Other times, you decide that you don't want to spend X minutes at that moment, so after reading the email you need a quick system for "marking" the email as to be processed later (and you still leave it as 'read' in Outlook). See the later section for how.
    C. DO: Use Outlook rules and have multiple folders where incoming email is automatically moved to
    Outlook email rules are very powerful and easy to configure. Use them to automatically file email into folders. Here are mine (note that if a rule catches an email message then no further rules get processed):
    "personal" – Email is either personal or business related. Almost all personal email goes to my gmail account. The personal emails that end up on my work email account go to a dedicated folder – that is achieved via a rule that looks at the email's 'From' field. For those that slip through, I use the new Outlook 2010 quick step "Conversation To Folder" feature to let the slippage only occur once per conversation, and then update my rules.
    "External" and "ViaBlog" – The remaining external emails either come from my blog (rule on the subject line) or are unsolicited (rule on the domain name not being microsoft) and they are filed accordingly.
    "invites" – I may do a separate blog post on calendar management, but suffice to say it should be kept up to date. All invite requests end up in this folder, so that even if mail gets out of control, the calendar can stay under control (only 1 folder to check), i.e. so I can let the organizer know why I won't be attending their meeting (or that I will be). Note: This folder is the only one that shows the total number of items in it, instead of the total unread.
    "Inbox" – The only email that ends up here is email sent TO me and me only. Note that this is also the only email that shows up above the systray icon in the notification toast – all other emails cannot interrupt.
    "ToMe++" – Email where I am on the TO line, but there are other recipients as well (on the TO or CC line).
    "CC" – Email where I am on the CC line. I need to read these, but nobody is expecting a response or action from me so they are not as urgent (and if they are and follow up with me, they'll receive a link to this).
    "@ XYZ" – Emails to aliases that are about projects that I directly work on (and I wasn't on the TO or CC line, of course). Test: these projects are in my commitments that I get measured on at the end of the year.
    "Z Mass" and subfolders under it per distribution list (DL) – Emails to aliases that are about topics that I am interested in, but not that I formally own/contribute to. Test: if I unsubscribed from these aliases, nobody could rightfully complain.
    "Admin" folder, which resides under "Z Mass" – Emails to aliases that I was added to, typically by an admin, e.g. broad emails to the floor/group/org/building/division/company that I am a member of.
    "BCC" folder, which resides under "Z Mass" – Emails where I was not on the TO or the CC line explicitly and the alias it was sent to is not one I explicitly subscribed to (or I have been added to the BCC line, which I briefly touched on in another post).
    When there are only a few quick minutes to catch up on email, read as much as possible from these folders, in this order: Invites, Inbox, ToMe++. Only when these folders are all read (remember that doesn't mean that each email in them has been fully dealt with) can we move on to the @XYZ and then the CC folders. Only when those are read can we go on to the remaining folders. 
    Note that the typical flow in the "Z Mass" subfolders is to scan subject lines and use the new Ctrl+Delete Outlook 2010 feature to ignore conversations.
    D. DO: Use Outlook Search folders in combination with categories
    As you process each folder, when you open a new email (i.e. click on it and read it in the preview pane) the email becomes read and stays read, and you have to decide whether:
    It can take 2 minutes to deal with for good, right now, or
    It will take longer than 2 minutes, so it needs to be postponed with a clear next step, which is one of:
    ToReply – there may be intermediate action steps, but ultimately someone else needs to receive email about this.
    Action – no email is required, but I need to do something.
    ReadLater – no email is required from the quick scan, but this is too long to fully read now, so it needs to be read later.
    WaitingFor – the email is informing of an intermediate status and 'promising' a future email update. Need to track.
    SomedayMaybe – interesting but not important, non-urgent, non-time-bound information. I may want to spend part of one of my weekends reading it.
    For all these 'next steps' use Outlook categories (right click on the email and assign a category, or use a shortcut key). Note that I also use the category 'WaitingFor' for email that I send where I am expecting a response and need to track it. Create a new search folder for each category (I dragged the search folders into my favorites at the top left of Outlook, above my inboxes). So after the activity of reading/triaging email in the normal folders (where the email arrived) is done, the result is a bunch of emails appearing in the search folders (configure them to show the total items, not the total unread items). To actually process email (that takes more than 2 minutes to deal with) process the search folders, starting with ToReply and Action.
    E. DO: Get into a Routine
    Now you have a system in place, get into a routine of using it. Here is how I personally use mine, but this part I keep tweaking:
    Spend short bursts of time (between meetings, during boring but mandatory meetings and, in general, 2-4 times a day) aiming to have no unread emails (and in the process deal with some emails that take less than 2 minutes).
    Spend around 30 minutes at the end of each day processing the most urgent items in the search folders.
    Spend as long as it takes each Friday (or even the weekend) ensuring there is no unnecessary email baggage carried forward to the following week.
    F. Other resources
    Official Outlook help on: Create custom actions rules, Manage e-mail messages with rules, creating a search folder.
    Video on ignoring conversations (Ctrl+Del).
    Official blog post on Quick Steps and in particular the Move Conversation to folder.
    If you've read "Getting Things Done" it is very obvious that my approach to email management is driven by GTD. A very similar approach was described previously by ScottHa (also influenced by GTD), worth reading here. He also described how he sets up two Outlook rules ('invites' and 'external') which I also use – worth reading that too.
    Comments about this post welcome at the original blog.

    Read the article

  • ODEE Green Field (Windows) Part 5 - Deployment and Validation

    - by AndyL-Oracle
    And here we are, almost finished with our installation of Oracle Documaker Enterprise Edition ("ODEE") in a Windows green field environment. Let's recap what we've done so far: In part 1, I went over the basic process that I intended to show with installing an ODEE on a green field server. I walked you through the basic installation of Oracle 11g database In part 2, I covered the installation of WebLogic application server. In part 3, I showed you how to install SOA Suite for WebLogic. In part 4, we did the first part of the installation of ODEE itself. What remains after all of that, is the deployment of the ODEE components onto the database and application server - so let's get to it! DATABASE First, we'll deploy the schemas to the database. The schemas are created during the ODEE installation according to the responses provided during the install process. To deploy the schemas, you'll need to login to the database server in your green field environment. Open a command line and CD into ODEE_HOME\documaker\database\oracle11g.Run SQLPLUS as SYSDBA and execute dmkr_admin.sql:  sqlplus / as sysdba @dmkr_admin.sql Execute dmkr_asline.sql, dmkr_admin_correspondence_example.sql.  If you require additional languages, run the appropriate SQL scripts (e.g. dmkr_asline_es.sql for Spanish). APPLICATION SERVER Next, we'll deploy the WebLogic domain and it's components - Documaker web services, Documaker Interactive, Documaker dashboard, and more. To deploy the components, you'll need to login to the application server in your green field environment. 1. Open Windows Explorer and navigate to ODEE_HOME\documaker\j2ee\weblogic\oracle11g\scripts.2. Using a text editor such as Notepad++, modify weblogic_installation_properties and set location of MIDDLEWARE_HOME and ODEE HOME. If you have used the defaults you’ll probably need to change the E: to C: and that’s it. Save the changes.3. Continuing in the same directory, use your text editor to modify set_middleware_env.cmd and set the drive and path to MIDDLEWARE_HOME. If you have used the defaults you’ll probably need to just change E: to C: and that’s it. Save the changes.4. In the same directory, execute wls_create_domain.cmd by double-clicking it. This should run to completion. If it does not, review any errors and correct them, and rerun the script.5. In the same directory, execute wls_add_correspondence.cmd by double-clicking it - again this should run to completion. 6. Next, we'll start the AdminServer - this is the main WebLogic domain server. To start it, use Windows Explorer and navigate to MIDDLEWARE_HOME\user_projects\domains\idocumaker_domain. Double-click startWebLogic.cmd and the server startup will begin. Once you see output that indicates that the server status changed to RUNNING you may proceed.  a. Note: if you saw database connection errors, you probably didn’t make sure your database name and connection type match. You can change this manually in the WebLogic Console. Open a browser and navigate to http://localhost:7001/console (replace localhost with the name of your application server host if you aren't opening the browser on the server), and login with the the weblogic credential you provided in the ODEE installation process. b. Once you're logged in, open Services?Data Sources. Select dmkr_admin and click Connection Pool.  c. The end of the URL should match the connection type you chose. 
    If you chose ServiceName, the URL should be jdbc:oracle:thin:@//<hostname>:1521/<serviceName>, and if you chose SID, the URL should be jdbc:oracle:thin:@//<hostname>:1521/<SIDname>.
      d. An example serviceName is a fully qualified DNS-style name, e.g. "idmaker.us.oracle.com" (it does not need to actually resolve in DNS). An example SID is just a name, e.g. IDMAKER.
      e. Save the change and repeat for the data source dmkr_asline.
      f. You will also need to make the same changes in the ODEE_HOME/documaker/docfactory/config/context/.bindings file - open the file in a text editor, locate the URL lines, make the appropriate change, and save the file.
    7. Back in the ODEE_HOME\documaker\j2ee\weblogic\oracle11g\scripts directory, execute create_users_groups.cmd.
    8. In the same directory, execute create_users_groups_correspondence_example.cmd.
    9. Open a browser and navigate to http://localhost:7001/jpsquery. Replace localhost with the name of your application server host if you aren't running the browser on the application server. If you changed the default port for the AdminServer from 7001, use the port you changed it to. You should see output like this: [screenshot]
    10. Start the WebLogic managed servers by opening a command prompt and navigating to MIDDLEWARE_HOME/user_projects/domains/idocumaker_domain/bin/. When you start the servers listed below, you will be prompted to enter the WebLogic credentials. You can prevent this by providing the credentials in the startManagedWebLogic.cmd file for the WLS_USER and WLS_PASS values. Note that the credentials will be stored in cleartext. To start each server, type in the command shown.
      a. Start the JMS server: ./startManagedWebLogic.cmd jms_server
      b. Start Dashboard/Documaker Administrator: ./startManagedWebLogic.cmd dmkr_server
      c. Start Documaker Interactive for Correspondence: ./startManagedWebLogic.cmd idm_server

    SOA Composites

    If you're planning on testing out the approval process components of BPEL that can be used with Documaker Interactive, then use the following steps to deploy the SOA composites. If you're not going to use BPEL, you can skip to the next section.

    1. Stop the servers listed in the previous section (Step 10) in the reverse order that they were started.
    2. Run the Domain configuration command: navigate to and execute MIDDLEWARE_HOME/wlserver_10.3/common/bin/config.cmd.
    3. Select Extend and click Next.
    4. Select the iDocumaker Domain and click Next.
    5. Select Oracle SOA Suite – 11.1.1.0 (this may automatically select other components, which is OK). Click Next.
    6. View the Configure JDBC Resources screen. You should not make any changes. Click Next.
    7. Check both connections and click Test Connections. After a successful test, click Next. If the tests fail, something is broken; go back to Configure JDBC Resources and check your service name/SID.
    8. Check all schemas. Set a password (it will be the same for all schemas). Enter the database information (service name, host name, port). Click Next.
    9. Connections should test successfully. If not, go back and fix any errors. Click Next.
    10. Click Next to pass through Optional Configuration.
    11. Click Extend.
    12. Click Done.
    13. Open a terminal window and navigate to/execute ODEE_HOME/documaker/j2ee/weblogic/oracle11g/bpel/antbuild.cmd.
    14. Start the WebLogic servers – AdminServer, jms_server, dmkr_server, idm_server. If you forgot how to do this, see the previous section, Step 10.
    Note: if you previously changed the startManagedWebLogic.cmd script for WLS_USER and WLS_PASS you will need to make those changes again.

    15. Start the WebLogic server soa_server1: MIDDLEWARE_HOME/user_projects/domains/idocumaker_domain/bin/startManagedWebLogic.cmd soa_server1
    16. Open a browser to http://localhost:7001/console and log in.
    17. Navigate to Services > Data Sources and select DMKR_ASLINE.
    18. Click the Targets tab. Check soa_server1, then click Save. Repeat for the DMKR_ADMIN data source.
    19. Open a command prompt, navigate to ODEE_HOME/j2ee/weblogic/oracle11g/scripts, then execute deploy_soa.cmd.

    That's it! (As if that wasn't enough?)

    DOCUMAKER

    Deploy the sample MRL resources by navigating to/executing ODEE_HOME/documaker/mstrres/dmres/deploysamplemrl.bat. You should see approximately 500 resources deployed into the database.

    Start the Factory services: Start > Run > services.msc. Locate the service named "ODDF xxxx", right-click it, and select Start. Note that each assembly line has a separate Factory setup, including its own Factory service and Docupresentment service. The services are named for the assembly line and the machine on which they are installed (because you could have multiple machines servicing a single assembly line, this allows for easy scripting to control all the services if you choose to do so).

    Repeat for the Docupresentment service. Note that each assembly line has a separate Docupresentment.

    Using Windows Explorer, navigate to ODEE_HOME/documaker/mstrres/dmres/input, select one of the XML files, and copy it into ODEE_HOME/documaker/hotdirectory. Note: if you chose a different hot directory during installation, copy the file there instead. Momentarily you should see the XML file disappear!

    Open a browser and navigate to http://localhost:10001/DocumakerDashboard (previous versions 12.0-12.2 use http://localhost:10001/dashboard) and verify that the job processed successfully. Note that some transactions may fail if you do not have a properly configured email server, and this is OK. You can set up a simple SMTP server (just search the internet for "SMTP developer" and you'll get several to choose from).

    So... that's it? Where are we at this point? You now have a completely functional ODEE installation, from soup to nuts as they say. You can further expand your installation by doing some of the following activities:

    - clustering WebLogic services
    - configuring WebLogic for redundancy
    - configuring Oracle 11g for RAC
    - adding additional Factory servers for redundancy/processing capacity
    - setting up a real MRL (instead of the sample resources)
    - testing Documaker Web Services for job submission
    - and more!

    I certainly hope you've enjoyed this and find it useful. If you find yourself running into trouble, visit the Oracle Community for Documaker - there is plenty of activity there and you can ask questions. For more concentrated assistance, you can engage an Oracle consultant who is a subject matter expert to assist you. Feel free to email me [andy (dot) little (at) oracle (dot) com] and I can connect you with the appropriate resource to get started. Best of luck! -Andy

    Read the article

  • mnesia primary key

    - by maryjanne
    Hi, I have two tables, note and tag, and I want to use the id from note as a primary key in the tag table, but I don't know where I go wrong. My note id is generated from another table, counter, with the function dirty_update_counter. My function for the id_note from tag looks like this:

        Fun = fun() -> mnesia:write(#tag{id_note = 0}) end,
        mnesia:transaction(Fun).

        generate_Oid(TableName) when is_atom(TableName) ->
            F = fun() ->
                    [Oid] = mnesia:read(tag, TableName, write),
                    NewId = Oid#tag.id_note + 1,
                    New = Oid#tag{id_note = NewId},
                    mnesia:write(New),
                    NewId
                end,
            mnesia:transaction(F).

        insert_n(N) when is_record(N, note) ->
            F = fun() ->
                    {atomic, Id} = generate_Oid(note),
                    New = N#note{id = Id},
                    mnesia:write(New),
                    New
                end,
            mnesia:transaction(F).

        find_n(Id) when is_integer(Id) ->
            {atomic, [N]} = mnesia:transaction(fun() -> mnesia:read({note, Id}) end),
            N.

    But this function doesn't increment my id_note field in the tag table, even though in my note table the id field is incremented from the counter table. Thanks in advance for any help.
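    Two things look off here; the record layouts below are assumptions based on the code above. First, mnesia:read(tag, note, write) looks up the key note, but the only #tag record ever written has id_note = 0 as its key (the first field of a record is its mnesia key), so the read finds nothing and the transaction aborts. Second, since a counter table already exists, mnesia:dirty_update_counter can increment and return the new id in one atomic step, making generate_Oid unnecessary. A minimal sketch:

        %% Sketch - assumes a counter table of {counter, Key, Value} records
        %% and the #note{}/#tag{} records from the question.
        next_id(Name) ->
            %% increments the counter for Name and returns the new value
            mnesia:dirty_update_counter(counter, Name, 1).

        insert_n(N) when is_record(N, note) ->
            Id = next_id(note),
            F = fun() ->
                    New = N#note{id = Id},
                    mnesia:write(New),
                    %% keep the tag table's reference in step with the note id
                    mnesia:write(#tag{id_note = Id}),
                    New
                end,
            mnesia:transaction(F).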

    Read the article

  • asp.net CustomValidator never fires OnServerValidate

    - by Bryce Fischer
    I have the following ASP page:

        <asp:Content ID="Content2" ContentPlaceHolderID="ShellContent" runat="server">
          <form runat="server" id="AddNewNoteForm" method="post">
            <fieldset id="NoteContainer">
              <legend>Add New Note</legend>
              <asp:ValidationSummary ID="ValidationSummary1" runat="server" />
              <div class="ctrlHolder">
                <asp:Label ID="LabelNoteDate" runat="server" Text="Note Date"
                    AssociatedControlID="NoteDateTextBox"></asp:Label>
                <asp:TextBox ID="NoteDateTextBox" runat="server" class="textInput"
                    CausesValidation="True"></asp:TextBox>
                <asp:CustomValidator ID="CustomValidator1" runat="server"
                    ErrorMessage="CustomValidator" ControlToValidate="NoteDateTextBox"
                    OnServerValidate="CustomValidator1_ServerValidate"
                    Display="Dynamic">*</asp:CustomValidator>
              </div>
              <div class="ctrlHolder">
                <asp:Label ID="LabelNoteText" AssociatedControlID="NoteTextTextBox"
                    runat="server" Text="Note"></asp:Label>
                <asp:TextBox ID="NoteTextTextBox" runat="server" Height="102px"
                    TextMode="MultiLine" class="textInput"></asp:TextBox>
                <asp:RequiredFieldValidator ID="RequiredFieldValidator2" runat="server"
                    ErrorMessage="Note Text is Required"
                    ControlToValidate="NoteTextTextBox">*</asp:RequiredFieldValidator>
              </div>
              <div class="buttonHolder">
                <asp:Button ID="OkButton" runat="server" Text="Add New Note"
                    CssClass="primaryAction" onclick="OkButton_Click"/>
                <asp:HyperLink ID="HyperLink1" runat="server">Cancel</asp:HyperLink>
              </div>
            </fieldset>
          </form>
        </asp:Content>

    and the following code-behind for the CustomValidator1_ServerValidate() method:

        protected void CustomValidator1_ServerValidate(object source, ServerValidateEventArgs args)
        {
            if (string.IsNullOrEmpty(args.Value.Trim()))
            {
                args.IsValid = false;
                CustomValidator1.ErrorMessage = "Note Date is Required!";
                return;
            }

            DateTime testDate;
            if (DateTime.TryParse(args.Value, out testDate))
            {
                args.IsValid = true;
                CustomValidator1.ErrorMessage = "Invalid Date!";
            }
        }

    It never seems to fail validation no matter what I put in the text box... I should mention this is ASP.NET 2.0.
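    Two likely culprits, sketched under the assumption that nothing else intercepts the postback. First, a CustomValidator is skipped entirely when its control's value is empty unless ValidateEmptyText="true" is set (available in ASP.NET 2.0), so the empty-string branch above never runs. Second, when DateTime.TryParse fails, nothing sets args.IsValid = false, and server-side validation only matters if the click handler checks Page.IsValid:

        <asp:CustomValidator ID="CustomValidator1" runat="server"
            ControlToValidate="NoteDateTextBox"
            OnServerValidate="CustomValidator1_ServerValidate"
            ValidateEmptyText="true"
            Display="Dynamic" ErrorMessage="CustomValidator">*</asp:CustomValidator>

        protected void OkButton_Click(object sender, EventArgs e)
        {
            if (!Page.IsValid) return;  // without this check the save runs regardless
            // ... save the note ...
        }

        protected void CustomValidator1_ServerValidate(object source, ServerValidateEventArgs args)
        {
            DateTime testDate;
            // valid only when the value is non-empty and parses as a date
            args.IsValid = DateTime.TryParse(args.Value, out testDate);
        }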

    Read the article

  • Midi Message need help

    - by Rinaldi
    Hello sir, I need your help: how do I interpret dwParam1 from the midiInProc delegate as a MIDI status message like note-off, note-on, or control change? Whatever I try, dwParam1 is 254, which is not equal to note-off or anything else.
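    For what it's worth, 254 (0xFE) is the Active Sensing status byte, which many keyboards send every few hundred milliseconds; it is neither note-on nor note-off and can usually be ignored. A minimal decoding sketch (the callback shape follows the standard winmm midiInProc; everything else is illustrative):

        // Sketch: decode the packed MIDI short message delivered to midiInProc.
        void MidiInProc(IntPtr hMidiIn, int wMsg, IntPtr dwInstance, int dwParam1, int dwParam2)
        {
            const int MM_MIM_DATA = 0x3C3;
            if (wMsg != MM_MIM_DATA) return;      // only short messages carry status bytes

            int status = dwParam1 & 0xFF;         // status byte, e.g. 0x90..0x9F = note-on
            int data1  = (dwParam1 >> 8) & 0xFF;  // e.g. note number
            int data2  = (dwParam1 >> 16) & 0xFF; // e.g. velocity

            if (status == 0xFE) return;           // 254 = Active Sensing - ignore it

            int command = status & 0xF0;          // strip the channel nibble
            if (command == 0x90 && data2 > 0) { /* note-on */ }
            else if (command == 0x80 || (command == 0x90 && data2 == 0)) { /* note-off */ }
            else if (command == 0xB0) { /* control change */ }
        }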

    Read the article

  • ERROR: Failed to build gem native extension on Mavericks

    - by Kyle Decot
    I'm attempting to run bundle in my Rails project on OSX 10.9. It fails when getting to the pg gem with this error: Gem::Installer::ExtensionBuildError: ERROR: Failed to build gem native extension. /Users/kyledecot/.rvm/rubies/ruby-2.0.0-p247/bin/ruby extconf.rb checking for pg_config... no No pg_config... trying anyway. If building fails, please try again with --with-pg-config=/path/to/pg_config checking for libpq-fe.h... yes checking for libpq/libpq-fs.h... yes checking for pg_config_manual.h... yes checking for PQconnectdb() in -lpq... yes checking for PQconnectionUsedPassword()... yes checking for PQisthreadsafe()... yes checking for PQprepare()... yes checking for PQexecParams()... yes checking for PQescapeString()... yes checking for PQescapeStringConn()... yes checking for PQescapeLiteral()... yes checking for PQescapeIdentifier()... yes checking for PQgetCancel()... yes checking for lo_create()... yes checking for pg_encoding_to_char()... yes checking for pg_char_to_encoding()... yes checking for PQsetClientEncoding()... yes checking for PQlibVersion()... yes checking for PQping()... yes checking for PQsetSingleRowMode()... yes checking for rb_encdb_alias()... yes checking for rb_enc_alias()... no checking for rb_thread_call_without_gvl()... yes checking for rb_thread_call_with_gvl()... yes checking for rb_thread_fd_select()... yes checking for rb_w32_wrap_io_handle()... no checking for PGRES_COPY_BOTH in libpq-fe.h... no checking for PGRES_SINGLE_TUPLE in libpq-fe.h... no checking for PG_DIAG_TABLE_NAME in libpq-fe.h... no checking for struct pgNotify.extra in libpq-fe.h... yes checking for unistd.h... yes checking for ruby/st.h... yes creating extconf.h creating Makefile make "DESTDIR=" compiling gvl_wrappers.c clang: warning: argument unused during compilation: '-fno-fast-math' compiling pg.c clang: warning: argument unused during compilation: '-fno-fast-math' pg.c:272:9: warning: implicit declaration of function 'PQlibVersion' is invalid in C99 [-Wimplicit-function-declaration] return INT2NUM(PQlibVersion()); ^ In file included from pg.c:48: In file included from ./pg.h:17: In file included from /Users/kyledecot/.rvm/rubies/ruby-2.0.0-p247/include/ruby-2.0.0/ruby.h:33: /Users/kyledecot/.rvm/rubies/ruby-2.0.0-p247/include/ruby-2.0.0/ruby/ruby.h:1167:21: note: instantiated from: # define INT2NUM(v) INT2FIX((int)(v)) ^ pg.c:272:9: note: instantiated from: return INT2NUM(PQlibVersion()); ^ pg.c:272:17: note: instantiated from: return INT2NUM(PQlibVersion()); ^ pg.c:375:48: error: use of undeclared identifier 'PQPING_OK' rb_define_const(rb_mPGconstants, "PQPING_OK", INT2FIX(PQPING_OK)); ^ pg.c:375:56: note: instantiated from: rb_define_const(rb_mPGconstants, "PQPING_OK", INT2FIX(PQPING_OK)); ^ pg.c:377:52: error: use of undeclared identifier 'PQPING_REJECT' rb_define_const(rb_mPGconstants, "PQPING_REJECT", INT2FIX(PQPING_REJECT)); ^ pg.c:377:60: note: instantiated from: rb_define_const(rb_mPGconstants, "PQPING_REJECT", INT2FIX(PQPING_REJECT)); ^ pg.c:379:57: error: use of undeclared identifier 'PQPING_NO_RESPONSE' rb_define_const(rb_mPGconstants, "PQPING_NO_RESPONSE", INT2FIX(PQPING_NO_RESPONSE)); ^ pg.c:379:65: note: instantiated from: rb_define_const(rb_mPGconstants, "PQPING_NO_RESPONSE", INT2FIX(PQPING_NO_RESPONSE)); ^ pg.c:381:56: error: use of undeclared identifier 'PQPING_NO_ATTEMPT' rb_define_const(rb_mPGconstants, "PQPING_NO_ATTEMPT", INT2FIX(PQPING_NO_ATTEMPT)); ^ pg.c:381:64: note: instantiated from: rb_define_const(rb_mPGconstants, "PQPING_NO_ATTEMPT", 
INT2FIX(PQPING_NO_ATTEMPT)); ^ 1 warning and 4 errors generated. make: *** [pg.o] Error 1 Gem files will remain installed in /Users/kyledecot/.rvm/gems/ruby-2.0.0-p247@skateboxes/gems/pg-0.17.0 for inspection. Results logged to /Users/kyledecot/.rvm/gems/ruby-2.0.0-p247@skateboxes/gems/pg-0.17.0/ext/gem_make.out An error occurred while installing pg (0.17.0), and Bundler cannot continue. Make sure that `gem install pg -v '0.17.0'` succeeds before bundling.
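    The compile errors (PQping, PQPING_OK and friends undeclared, plus the PQlibVersion warning) suggest the gem is compiling against headers from an old libpq - those symbols appeared in PostgreSQL 9.1. A sketch of the usual fix, assuming Homebrew is available:

        # install a current PostgreSQL so pg_config points at 9.1+ headers
        brew install postgresql

        # build the gem against that install
        gem install pg -v '0.17.0' -- --with-pg-config=$(which pg_config)

        # or record the flag once for Bundler
        bundle config build.pg --with-pg-config=$(which pg_config)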

    Read the article

  • Web API, JavaScript, Chrome & Cross-Origin Resource Sharing

    - by Brian Lanham
    The team spent much of the week working through issues related to Chrome running on Windows 8 consuming cross-origin resources using Web API. We thought it was resolved on day 2, but it resurfaced the next day. We definitely resolved it today, though. I believe I do not fully understand the situation, but I am going to explain what I know in an effort to help you avoid and/or resolve a similar issue.

    References

    We referenced many sources during our trial-and-error troubleshooting. These are the links we reference, in order of applicability to the solution:

    - Zoiner Tejada: JavaScript and other material from http://www.devproconnections.com/content1/topic/microsoft-azure-cors-141869/catpath/windows-azure-platform2/page/3
    - WebDAV: where I learned about "Accept" - http://www-jo.se/f.pfleger/cors-and-iis
    - IT Hit: tells about NOT using '*' - http://www.webdavsystem.com/ajax/programming/cross_origin_requests
    - Carlos Figueira: sample back-end code (newer) - http://code.msdn.microsoft.com/windowsdesktop/Implementing-CORS-support-a677ab5d (older version: http://code.msdn.microsoft.com/CORS-support-in-ASPNET-Web-01e9980a)

    Background

    As a measure of protection, Web designers (W3C) and implementers (Google, Microsoft, Mozilla) made it so that a request, especially a JSON request (but really any URL), sent from one domain to another will only work if the requestee "knows" about the requester and allows requests from it. So, for example, if you write an ASP.NET MVC Web API service and try to consume it from multiple apps, the browsers used may (will?) indicate that you are not allowed, by showing an "Access-Control-Allow-Origin" error indicating the requester is not allowed to make requests.

    Internet Explorer (big surprise) is the odd-hair-colored step-child in this mix. It seems that, running locally at least, IE allows this for development purposes. Chrome and Firefox do not. In fact, Chrome is quite restrictive. Notice the images below: IE shows data (a tabular view with one row for each day of a week) while Chrome does not (trust me, neither does Firefox). Further, the Chrome developer console shows an XmlHttpRequest (XHR) error.

    [Screen captures from IE (left) and Chrome (right). Note that Chrome does not display data and the console shows an XHR error.]

    Why does this happen? The Web browser submits these requests and processes the responses, and each browser is different. Okay, so, IE is probably the only one that's truly different. However, Chrome has a specific process of performing a "pre-flight" check to make sure the service can respond to an "Access-Control-Allow-Origin" or Cross-Origin Resource Sharing (CORS) request. So basically, the sequence is, if I understand correctly:

    1) Page loads ->
    2) JavaScript request processed by browser ->
    3) Browser prepares to submit request ->
    4) [Chrome] Browser submits pre-flight request ->
    5) Server responds with HTTP 200 ->
    6) Browser submits request ->
    7) Server responds with data ->
    8) Page shows data

    This situation occurs for both GET and POST methods. Typically, GET methods are called with query string parameters, so there is no data posted. Instead, the requesting domain needs to be permitted to request data, but generally nothing more is required. POSTs, on the other hand, send form data; therefore, more configuration is required (you'll see the configuration below). AJAX requests are not friendly with this (POSTs) either, because they don't post in a form.

    How to fix it.
    The team went through many iterations of self-hair removal and we think we finally have a working solution. The trial-and-error approach eventually worked, and we referenced many sources for the information (indicated above). There are basically three (3) tasks needed to make this work. Assumptions: you are using Visual Studio, Web API, JavaScript, Cross-Origin Resource Sharing, and several browsers.

    1. Configure the client

    Joel Cochran centralized our "cors-oriented" JavaScript (from here). There are two calls, one for POST and one for GET.

    The POST CORS JavaScript function (credit to Zoiner Tejada):

        function (url, data, callback) {
            console.log(data);
            $.support.cors = true;
            var jqxhr = $.post(url, data, callback, "json")
                .error(function (jqXhHR, status, errorThrown) {
                    if ($.browser.msie && window.XDomainRequest) {
                        var xdr = new XDomainRequest();
                        xdr.open("post", url);
                        xdr.onload = function () {
                            if (callback) {
                                callback(JSON.parse(this.responseText), 'success');
                            }
                        };
                        xdr.send(data);
                    } else {
                        console.log(">" + jqXhHR.status);
                        alert("corsAjax.post error: " + status + ", " + errorThrown);
                    }
                });
        };

    The GET CORS JavaScript function (credit to Zoiner Tejada):

        function (url, callback) {
            $.support.cors = true;
            var jqxhr = $.get(url, null, callback, "json")
                .error(function (jqXhHR, status, errorThrown) {
                    if ($.browser.msie && window.XDomainRequest) {
                        var xdr = new XDomainRequest();
                        xdr.open("get", url);
                        xdr.onload = function () {
                            if (callback) {
                                callback(JSON.parse(this.responseText), 'success');
                            }
                        };
                        xdr.send();
                    } else {
                        alert("CORS is not supported in this browser or from this origin.");
                    }
                });
        };

    Now you need to call these functions to get and post your data (instead of, say, using $.ajax). Here is a GET example:

        corsAjax.get(url, function (data) {
            if (data !== null && data.length !== undefined) {
                // do something with data
            }
        });

    And here is a POST example:

        corsAjax.post(url, item);

    Simple... except... you're not done yet.

    2. Change Web API Controllers to Allow CORS

    There are actually two steps here. Do you remember above when we mentioned the "pre-flight" check? Chrome actually asks the server if it is allowed to ask it for cross-origin resource sharing access. So you need to let the server know it's okay. This is a two-part activity: a) add the appropriate response header Access-Control-Allow-Origin, and b) permit the API functions to respond to various methods including GET, POST, and OPTIONS. OPTIONS is the method that Chrome and other browsers use to ask the server if it can ask about permissions. Here is an example of a Web API controller thus decorated.

    NOTE: You'll see a lot of references to using '*' in the header value. For security reasons, Chrome does NOT recognize this as valid.
[HttpHeader("Access-Control-Allow-Origin", "http://localhost:51234")] [HttpHeader("Access-Control-Allow-Credentials", "true")] [HttpHeader("Access-Control-Allow-Methods", "ACCEPT, PROPFIND, PROPPATCH, COPY, MOVE, DELETE, MKCOL, LOCK, UNLOCK, PUT, GETLIB, VERSION-CONTROL, CHECKIN, CHECKOUT, UNCHECKOUT, REPORT, UPDATE, CANCELUPLOAD, HEAD, OPTIONS, GET, POST")] [HttpHeader("Access-Control-Allow-Headers", "Accept, Overwrite, Destination, Content-Type, Depth, User-Agent, X-File-Size, X-Requested-With, If-Modified-Since, X-File-Name, Cache-Control")] [HttpHeader("Access-Control-Max-Age", "3600")] public abstract class BaseApiController : ApiController {     [HttpGet]     [HttpOptions]     public IEnumerable<foo> GetFooItems(int id)     {         return foo.AsEnumerable();     }     [HttpPost]     [HttpOptions]     public void UpdateFooItem(FooItem fooItem)     {         // NOTE: The fooItem object may or may not         // (probably NOT) be set with actual data.         // If not, you need to extract the data from         // the posted form manually.         if (fooItem.Id == 0) // However you check for default...         {             // We use NewtonSoft.Json.             string jsonString = context.Request.Form.GetValues(0)[0].ToString();             Newtonsoft.Json.JsonSerializer js = new Newtonsoft.Json.JsonSerializer();             fooItem = js.Deserialize<FooItem>(new Newtonsoft.Json.JsonTextReader(new System.IO.StringReader(jsonString)));         }         // Update the set fooItem object.     } } Please note a few specific additions here: * The header attributes at the class level are required.  Note all of those methods and headers need to be specified but we find it works this way so we aren’t touching it. * Web API will actually deserialize the posted data into the object parameter of the called method on occasion but so far we don’t know why it does and doesn’t. * [HttpOptions] is, again, required for the pre-flight check. * The “Access-Control-Allow-Origin” response header should NOT NOT NOT contain an ‘*’. 3. Headers and Methods and Such We had most of this code in place but found that Chrome and Firefox still did not render the data.  Interestingly enough, Fiddler showed that the GET calls succeeded and the JSON data is returned properly.  We learned that among the headers set at the class level, we needed to add “ACCEPT”.  Note that I accidentally added it to methods and to headers.  Adding it to methods worked but I don’t know why.  We added it to headers also for good measure. [HttpHeader("Access-Control-Allow-Methods", "ACCEPT, PROPFIND, PROPPA... [HttpHeader("Access-Control-Allow-Headers", "Accept, Overwrite, Destin... Next Steps That should do it.  If it doesn’t let us know.  What to do next?  * Don’t hardcode the allowed domains.  Note that port numbers and other domain name specifics will cause problems and must be specified.  If this changes do you really want to deploy updated software?  Consider Miguel Figueira’s approach in the following link to writing a custom HttpHeaderAttribute class that allows you to specify the domain names and then you can do it dynamically.  There are, of course, other ways to do it dynamically but this is a clean approach. http://code.msdn.microsoft.com/windowsdesktop/Implementing-CORS-support-a677ab5d

    Read the article

  • Best Of 2010

    - by Mike Dietrich
    Hi there, in Australia, Japan, Singapore and many other countries it's already 2011 - but in Germany and the US there's still some time until midnight :-) To round up the year you'll find a few off-topic pictures from 2010. You might click on the pictures to get a better resolution. Enjoy ...

    [Photo captions:]
    - Moscow - Red Square
    - Tokyo Train - Cell Phone Mania
    - Great Chinese Wall near Beijing
    - Hong Kong by Night
    - Yering Station Winery, Yarra - Victoria, Australia
    - Dublin, Ireland - during the ash cloud
    - - no comment -
    - Liberty
    - It's sometimes foggy in SF
    - Singapore Opera
    - Stockholm - Gamla Stan
    - Unbelievable white beach at Camps Bay, Clifton, Capetown
    - Words fail me ...

    Mike

    Read the article

  • mutidimensional array from javascript/jquery to ruby/sinatra

    - by user199368
    Hi, how do I pass a 2-dimensional array from JavaScript to Ruby, please? I have this on the client side:

        function send_data() {
            var testdata = {
                "1": { "name": "client_1", "note": "bigboy" },
                "2": { "name": "client_2", "note": "smallboy" }
            };
            console.log(testdata);
            $.ajax({
                type: 'POST',
                url: 'test',
                dataType: 'json',
                data: testdata
            });
        }

    and this on the server side:

        post '/test' do
            p params
        end

    but I can't get it right. The best I could get on the server side is something like:

        {"1"=>"[object Object]", "2"=>"[object Object]"}

    I tried adding JSON.stringify on the client side and JSON.parse on the server side, but the first resulted in:

        {"{\"1\":{\"name\":\"client_1\",\"note\":\"bigboy\"},\"2\":{\"name\":\"client_2\",\"note\":\"smallboy\"}}"=>nil}

    while the latter threw a TypeError - can't convert Hash into String. Could anyone help, or maybe post a short snippet of correct code, please? Thank you
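    A sketch of what usually works here (endpoint and names taken from the question): send a real JSON body with an explicit content type, and read the raw request body in Sinatra, because params is only populated for form-encoded posts:

        // client side - serialize once and label the body as JSON
        $.ajax({
            type: 'POST',
            url: 'test',
            contentType: 'application/json',
            data: JSON.stringify(testdata)
        });

        # server side - parse the raw body; params stays empty for JSON posts
        require 'json'
        post '/test' do
            payload = JSON.parse(request.body.read)
            p payload["1"]["name"]   # => "client_1"
        end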

    Read the article

  • Django1.1 model field value preprocessing before returning

    - by Satoru.Logic
    Hi, all. I have a model class like this:

        class Note(models.Model):
            author = models.ForeignKey(User, related_name='notes')
            content = NoteContentField(max_length=256)

    NoteContentField is a custom subclass of CharField that overrides the to_python method for the purpose of doing some twitter-text-conversion processing:

        class NoteContentField(models.CharField):
            __metaclass__ = models.SubfieldBase

            def to_python(self, value):
                value = super(NoteContentField, self).to_python(value)
                from ..utils import linkify
                return mark_safe(linkify(value))

    However, this doesn't work. When I save a Note object like this:

        note = Note(author=request.user, content=form.cleaned_data['content'])
        note.save()

    the converted value is saved into the database, which is not what I want to see. What I'm trying to do is save the raw content into the database, and only make the conversion when the content attribute is later accessed. Would you please tell me what's wrong with this?

    Thanks to Pierre and Daniel, I have figured out what's wrong. I thought the text-conversion code should be in either to_python or get_db_prep_value, and that's wrong. I should override both of them, making to_python do the conversion and get_db_prep_value return the unconverted value:

        from ..utils import linkify

        class NoteContentField(models.CharField):
            __metaclass__ = models.SubfieldBase

            def to_python(self, value):
                self._raw_value = super(NoteContentField, self).to_python(value)
                return mark_safe(linkify(self._raw_value))

            def get_db_prep_value(self, value):
                return self._raw_value

    I wonder if there is a better way to implement this?
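    One hedged alternative, since stashing _raw_value on the field is fragile (a single field instance is shared by every Note instance, so concurrent saves can cross-contaminate): keep the column a plain CharField so the raw text round-trips to the database untouched, and expose the converted form as a property on the model:

        # sketch - assumes the same linkify helper used by NoteContentField
        from django.utils.safestring import mark_safe

        class Note(models.Model):
            author = models.ForeignKey(User, related_name='notes')
            content = models.CharField(max_length=256)   # raw text in the DB

            @property
            def content_html(self):
                from ..utils import linkify
                return mark_safe(linkify(self.content))  # converted on access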

    Read the article

  • How can a test script inform R CMD check that it should emit a custom message?

    - by mariotomo
    I'm writing an R package (delftfews) here at the office. We are using svUnit for unit testing. Our process for describing new functionality: we define new unit tests, initially marked as DEACTIVATED; one block of tests at a time, we activate them and implement the function described by the tests. Almost all the time we have a small number of DEACTIVATED tests, relating to functions that might be dropped or will be implemented.

    My problem/question is: can I alter doSvUnit.R so that R CMD check pkg emits a NOTE (i.e. a custom message "NOTE" instead of "OK") in case there are DEACTIVATED tests? As of now, we see only that the active tests don't give errors:

        ...
        * checking for unstated dependencies in tests ... OK
        * checking tests ...
          Running 'doSvUnit.R'
         OK
        * checking PDF version of manual ... OK

    which is all right if all tests succeed, but less all right if there are skipped tests, and definitely wrong if there are failing tests. In those cases, I'd actually like to see a NOTE or a WARNING like the following:

        ...
        * checking for unstated dependencies in tests ... OK
        * checking tests ...
          Running 'doSvUnit.R'
         NOTE 6 test(s) were skipped.
         WARNING 1 test(s) are failing.
        * checking PDF version of manual ... OK

    As of now, we have to open doSvUnit.Rout to check the real test results. I contacted two of the maintainers at r-forge and CRAN, and they pointed me to the sources of R, in particular the testing.R script. If I understand it correctly, answering this question would require patching the tools package: scripts in the tests directory are called using a system call, output (stdout and stderr) goes to one single file, and there are only two possible outcomes: ok or not ok. So I opened a change request on R, proposing something like bit-coding the return status: bit 0 for ERROR (as it is now), bit 1 for WARNING, bit 2 for NOTE. With my modification, it would be easy to produce this output:

        ...
        * checking for unstated dependencies in tests ... OK
        * checking tests ...
          Running 'doSvUnit.R'
         NOTE - please check doSvUnit.Rout.
         WARNING - please check doSvUnit.Rout.
        * checking PDF version of manual ... OK

    Brian Ripley replied "There are however several packages with properly written unit tests that do signal as required. Please do take this discussion elsewhere: R-bugs is not the place to ask questions." and closed the change request. Does anybody have hints?
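    Since R CMD check treats any error raised by a test script as a check failure, one pragmatic route (and probably what Ripley's reply alludes to) is to have doSvUnit.R itself inspect the svUnit log, print a NOTE-style line for skipped tests, and stop() on failures. A rough sketch, assuming svUnit's Log()/stat() API with its 'kind' column:

        ## tail of doSvUnit.R - assumes the tests have already been run into Log()
        library(svUnit)
        res <- stat(Log())                     # assumption: one row per test, 'kind' column
        nskip <- sum(res$kind == "DEACTIVATED")
        nfail <- sum(res$kind %in% c("**FAILS**", "**ERROR**"))
        if (nskip > 0)
            message("NOTE: ", nskip, " test(s) were skipped - see doSvUnit.Rout.")
        if (nfail > 0)
            stop(nfail, " test(s) failing")    # error -> R CMD check reports the test run as failed

    R CMD check won't print a literal NOTE for the message() call, but the text lands in doSvUnit.Rout, and the stop() turns real failures into a visible check failure.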

    Read the article

  • jQuery dialog breaking after closing - I'm using dialog destroy

    - by pedalpete
    I've got a few demo videos I've been making as tutorials, and I'm using a link to open a dialog box and put the demo video in that box. I use the same div to show other notes on the page when a user selects to view a complete note. The code I use to show the notes is:

        jQuery('span.Notes').live('click', function () {
            var note = jQuery(this).data('note');
            jQuery('div#showNote').text(note);
            jQuery('div#showNote').append('');
            jQuery('div#showNote').dialog({
                modal: true,
                close: function () {
                    jQuery('div#showNote').dialog('destroy').empty();
                }
            });
        });

    The code I use for the demo videos is VERY similar:

        jQuery('a.demoVid').click(function () {
            var videoUrl = jQuery(this).attr('href');
            jQuery('div#showNote').dialog({
                modal: true,
                height: 400,
                width: 480,
                close: function () {
                    jQuery('div#showNote').dialog('destroy').empty();
                }
            });
            swfobject.embedSWF(videoUrl, 'showNote', '480', '390', '8.0.0');
            return false;
        });

    I can click on as many notes as I want, and the dialog opens up and shows the note. However, when I click the demo video link, the dialog opens, but closing the dialog kills any other 'showNote' dialogs on the page, so I can't open any more notes or demo videos.
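    One likely explanation, sketched as a guess from the code above: swfobject.embedSWF replaces the element whose id you pass, so after the first video the div#showNote element no longer exists in the DOM, and later dialog() calls have nothing to attach to. Embedding into a disposable child keeps #showNote alive ('videoTarget' is a made-up id):

        jQuery('a.demoVid').click(function () {
            var videoUrl = jQuery(this).attr('href');
            jQuery('div#showNote')
                .empty()
                .append('<div id="videoTarget"></div>')  // sacrificial element for the SWF
                .dialog({
                    modal: true,
                    height: 400,
                    width: 480,
                    close: function () {
                        jQuery('div#showNote').dialog('destroy').empty();
                    }
                });
            swfobject.embedSWF(videoUrl, 'videoTarget', '480', '390', '8.0.0');
            return false;
        });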

    Read the article

  • What is correct HTTP status code when redirecting to a login page?

    - by PHP_Jedi
    When a user is not logged in and tries to access a page that requires login, what is the correct HTTP status code for a redirect to the login page? I don't feel that any of the 3xx fit that description.

    10.3.1 300 Multiple Choices

    The requested resource corresponds to any one of a set of representations, each with its own specific location, and agent-driven negotiation information (section 12) is being provided so that the user (or user agent) can select a preferred representation and redirect its request to that location. Unless it was a HEAD request, the response SHOULD include an entity containing a list of resource characteristics and location(s) from which the user or user agent can choose the one most appropriate. The entity format is specified by the media type given in the Content-Type header field. Depending upon the format and the capabilities of the user agent, selection of the most appropriate choice MAY be performed automatically. However, this specification does not define any standard for such automatic selection. If the server has a preferred choice of representation, it SHOULD include the specific URI for that representation in the Location field; user agents MAY use the Location field value for automatic redirection. This response is cacheable unless indicated otherwise.

    10.3.2 301 Moved Permanently

    The requested resource has been assigned a new permanent URI and any future references to this resource SHOULD use one of the returned URIs. Clients with link editing capabilities ought to automatically re-link references to the Request-URI to one or more of the new references returned by the server, where possible. This response is cacheable unless indicated otherwise. The new permanent URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s). If the 301 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued.

    Note: When automatically redirecting a POST request after receiving a 301 status code, some existing HTTP/1.0 user agents will erroneously change it into a GET request.

    10.3.3 302 Found

    The requested resource resides temporarily under a different URI. Since the redirection might be altered on occasion, the client SHOULD continue to use the Request-URI for future requests. This response is only cacheable if indicated by a Cache-Control or Expires header field. The temporary URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s). If the 302 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued.

    Note: RFC 1945 and RFC 2068 specify that the client is not allowed to change the method on the redirected request. However, most existing user agent implementations treat 302 as if it were a 303 response, performing a GET on the Location field-value regardless of the original request method.
    The status codes 303 and 307 have been added for servers that wish to make unambiguously clear which kind of reaction is expected of the client.

    10.3.4 303 See Other

    The response to the request can be found under a different URI and SHOULD be retrieved using a GET method on that resource. This method exists primarily to allow the output of a POST-activated script to redirect the user agent to a selected resource. The new URI is not a substitute reference for the originally requested resource. The 303 response MUST NOT be cached, but the response to the second (redirected) request might be cacheable. The different URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s).

    Note: Many pre-HTTP/1.1 user agents do not understand the 303 status. When interoperability with such clients is a concern, the 302 status code may be used instead, since most user agents react to a 302 response as described here for 303.

    10.3.5 304 Not Modified

    If the client has performed a conditional GET request and access is allowed, but the document has not been modified, the server SHOULD respond with this status code. The 304 response MUST NOT contain a message-body, and thus is always terminated by the first empty line after the header fields. The response MUST include the following header fields:

    - Date, unless its omission is required by section 14.18.1. If a clockless origin server obeys these rules, and proxies and clients add their own Date to any response received without one (as already specified by [RFC 2068], section 14.19), caches will operate correctly.
    - ETag and/or Content-Location, if the header would have been sent in a 200 response to the same request.
    - Expires, Cache-Control, and/or Vary, if the field-value might differ from that sent in any previous response for the same variant.

    If the conditional GET used a strong cache validator (see section 13.3.3), the response SHOULD NOT include other entity-headers. Otherwise (i.e., the conditional GET used a weak validator), the response MUST NOT include other entity-headers; this prevents inconsistencies between cached entity-bodies and updated headers. If a 304 response indicates an entity not currently cached, then the cache MUST disregard the response and repeat the request without the conditional. If a cache uses a received 304 response to update a cache entry, the cache MUST update the entry to reflect any new field values given in the response.

    10.3.6 305 Use Proxy

    The requested resource MUST be accessed through the proxy given by the Location field. The Location field gives the URI of the proxy. The recipient is expected to repeat this single request via the proxy. 305 responses MUST only be generated by origin servers.

    Note: RFC 2068 was not clear that 305 was intended to redirect a single request, and to be generated by origin servers only. Not observing these limitations has significant security consequences.

    10.3.7 306 (Unused)

    The 306 status code was used in a previous version of the specification, is no longer used, and the code is reserved.

    10.3.8 307 Temporary Redirect

    The requested resource resides temporarily under a different URI. Since the redirection MAY be altered on occasion, the client SHOULD continue to use the Request-URI for future requests. This response is only cacheable if indicated by a Cache-Control or Expires header field.
    The temporary URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s), since many pre-HTTP/1.1 user agents do not understand the 307 status. Therefore, the note SHOULD contain the information necessary for a user to repeat the original request on the new URI. If the 307 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued.

    I'm using 302 for now, until I find THE correct answer.

    Read the article

  • How can I do such a typical unittest?

    - by Malcom.Z
    This is a simple structure in my project:

        MyAPP---
            note---
                __init__.py
                views.py
                urls.py
                test.py
                models.py
            auth---
                ...
            template---
                auth---
                    login.html
                    register.html
                note---
                    noteshow.html
            media---
                css--- ...
                js--- ...
            settings.py
            urls.py
            __init__.py
            manage.py

    I want to write a unit test that checks whether the noteshow page is working properly or not. The code:

        from django.test import TestCase

        class Note(TestCase):
            def test_noteshow(self):
                response = self.client.get('/note/')
                self.assertEqual(response.status_code, 200)
                self.assertTemplateUsed(response, '/note/noteshow.html')

    The problem is that my project includes an auth mod, which forces users who are not logged in to be redirected to the login.html page when they visit noteshow.html. So when I run my unit test, it raises a failure because response.status_code is always 302 instead of 200. Although this result lets me check that the auth mod is running well, it is not what I want the test to do. OK, the question is: how can I write another unit test that checks whether my noteshow template is used or not? Thanks for all.

    django version: 1.1.1
    python version: 2.6.4
    Using Eclipse on Mac OS
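    A sketch of the usual pattern (the username, password, and login URL are illustrative; also note that template names passed to assertTemplateUsed are relative to the template directories, so no leading slash):

        from django.contrib.auth.models import User
        from django.test import TestCase

        class NoteShowTest(TestCase):
            def test_anonymous_is_redirected(self):
                response = self.client.get('/note/')
                self.assertEqual(response.status_code, 302)

            def test_noteshow_template_for_logged_in_user(self):
                User.objects.create_user('alice', 'alice@example.com', 'secret')
                self.assertTrue(self.client.login(username='alice', password='secret'))
                response = self.client.get('/note/')
                self.assertEqual(response.status_code, 200)
                self.assertTemplateUsed(response, 'note/noteshow.html')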

    Read the article

  • E.T. Phone "Home" - Hey I've discovered a leak..!

    - by Martin Deh
    Being a member of the WebCenter ATEAM, we are often asked to performance tune a WebCenter custom portal application or a WebCenter Spaces deployment. Most of the time, the process is pretty much the same. For example, we often use tools like HttpWatch and Firebug to monitor the application, and then perform load tests using JMeter or Selenium. In addition, there is the fine tuning of the various performance-related parameters that are outlined in the documentation and in blogs written by my fellow ATEAMers (click on the "performance" tag in this ATEAM blog). When a load test shows a significant reduction in system resources (memory), one of the causes that plays a role in the "leakage" is the implementation of the navigation menu UI. OOTB in both JDeveloper and WebCenter Spaces, there are sample (page) templates that include a "default" navigation menu. In WebCenter Spaces, this is through the SpacesNavigationModel taskflow region, and in a custom portal (i.e. pageTemplate_globe.jspx) the menu UI is constructed using standard ADF components. These sample menu UIs basically enable the underlying navigation model to visualize itself to some extent. However, due to certain limitations of these sample menu implementations (i.e. deeper sub-levels of navigation items, look-and-feel, etc.), many customers have developed their own custom navigation menus using a combination of HTML, CSS and jQuery. While this is supported somewhat by the framework, it is important to know some of the best practices for ensuring that the navigation menu does not leak. In addition, in this blog I will point out a leak (BUG) that is in the sample templates.

    OK, E.T., the suspense is killing me: what is this leak? Note: for those who don't know, info on E.T. can be found here.

    In both of the included templates, the example given for handling navigation back to the "Home" page will essentially produce a nice little memory leak every time the link is clicked. Let's take a look at a simple example, which uses the default template in Spaces. The outlined section below is the "link", which enables a user to navigate quickly back to the Group Space Home page. When you hover (the mouse) over the link, the browser displays the target URL. From looking initially at the proposed URL, this is the intended destination. Note: "home" in this case is the navigation model reference (id) that enables the display of the "pretty URL".

    Next, notice the current URL, which is displayed in the browser. Remember that PortalSiteHome = home. The other highlighted item, adf.ctrl-state, is very important to the framework. This item is basically a persistent query parameter which the (ADF) framework uses to manage the current session and page instance. Without this parameter present, among other things, browser back-button navigation will fail. In this example, the value for this parameter is currently 95K25i7dd_4. Next, through the navigation menu item, I will click on the Page2 link.

    Inspecting the URL again, I can see that the navigation was indeed successful and the adf.ctrl-state is also in the URL. For those wondering why the URL displays Page3.jspx instead of Page2.jspx: the (file) naming convention for pages created at runtime in Spaces starts at Page1 and then increments as you create additional pages. The name of the actual link (i.e. Page2) is the page "title" attribute.
    So the moral of the story is: unlike design-time created pages, for runtime-created pages the file name will 99% of the time not match the name that appears in the link. Next is to click on the quick link for navigating back to the Home page. Quick investigation yields that the navigation was indeed successful. In the browser's URL there is a home (pretty URL) reference, and there is also a reference to the adf.ctrl-state parameter. So what's the issue? Can you remember what the value was for the adf.ctrl-state? The current value is 3D95k25i7dd_149. However, the previous value was 95k25i7dd_4. Here is what happened. Remember when hovering over the link produced the following target URL: http://localhost:8888/webcenter/spaces/NavigationTest/home

    This is great for the browser, as this URL will navigate to the intended target. However, what is missing is the adf.ctrl-state parameter. Since this parameter was not present upon navigation "within" the framework, the ADF framework produced another adf.ctrl-state (object). The previous adf.ctrl-state is basically orphaned while continuing to be alive in memory. Note: the auto-creation of the adf.ctrl-state does happen initially when you invoke the Spaces application for the first time. The following is the line of code which produced the issue:

        <af:goLink destination="#{boilerBean.globalLogoURIInSpace} ...

    Here the boilerBean is responsible for returning the "string" URL, which in this case is /spaces/NavigationTest/home. Unfortunately, again, what is missing is adf.ctrl-state. Note: there is more than one instance of these goLinks in the sample templates.

    So, E.T., how can I correct this?

    There are 2 simple fixes. For the goLink's destination, use the navigation model to return the actual "node" value, then use the goLinkPrettyUrl method to add the current adf.ctrl-state:

        <af:goLink destination="#{navigationContext.defaultNavigationModel.node['home'].goLinkPrettyUrl}" ... />

    Note: the node value is the [navigation model id].

    Using a goLink does solve the main issue. However, since the link basically does a redirect, some browsers like IE will produce a somewhat significant "flash". In a Spaces application, this may be an annoyance to the users. Another way to solve the leakage problem, and also remove the flash between navigations, is to use an af:commandLink. For example, here is the code example for this scenario:

        <af:commandLink id="pt_cl2asf" actionListener="#{navigationContext.processAction}" action="pprnav">
            <f:attribute name="node" value="#{navigationContext.defaultNavigationModel.node['home']}"/>
        </af:commandLink>

    Here, the navigation node where home is located is delivered by way of the attribute to the commandLink. The actual navigation is performed by processAction, which needs the "node" value.
    E.T., OK, you solved the OOTB sample BUG; what about my custom navigation code?

    I have seen many implementations of creating a navigation menu through custom code. In addition, there are some blog sites that also give detailed examples. The majority of these implementations are very similar. The code usually involves using standard HTML tags (i.e. DIVs, UL, LI, etc.) and either CSS or JavaScript (jQuery) to produce the flyout/drop-down effect. The navigation links in these cases are standard <a href...> tags. Although this type of approach is not fully accepted by the ADF community, it does work. The important thing to note here is that the <a> tag value must use the goLinkPrettyUrl method of constructing the target URL. For example:

        <a href="${contextRoot}${menu.goLinkPrettyUrl}">

    The main reason why this type of approach is popular is that links created this way (as with af:goLinks) make the pages crawlable by search engines. CommandLinks are currently not search friendly. However, in the case of a Spaces instance this may be acceptable; in that use case, af:commandLinks would replace the <a> (or goLink) tags. The example code given for the af:commandLink above is still valid.

    One last important item. If you choose to use af:commandLinks, special attention must be given to the scenario in which JavaScript has been used to produce the flyout effect in the custom menu UI. In many cases that I have seen, the commandLink can only be invoked once, since there is a conflict between the custom JavaScript and the ADF framework's own scripting to control the view. The recommendation here would be to use a pure CSS approach to achieve the dropdown effects.

    One very important thing to note: due to another BUG, the WebCenter environment must be patched to BP3 (patch p14076906). Otherwise the leak is still present using the goLinkPrettyUrl method. Thanks, E.T.! Now I can phone home and not worry about my application running out of resources due to my custom navigation!

    Read the article

  • Swing: what to do when a JTree update takes too long and freezes other GUI elements?

    - by java.is.for.desktop
    Hello, everyone! I know that GUI code in Java Swing must be run via SwingUtilities.invokeAndWait or SwingUtilities.invokeLater. This way threading works fine. Sadly, in my situation, the GUI update is the thing that takes much longer than the background thread(s). More specifically: I update a JTree with only about 400 entries and a nesting depth of at most 4, so it should be nothing scary, right? But it sometimes takes one second! I need to ensure that the user is able to type in a JTextPane without delays. Well, guess what, the slow JTree updates do cause delays during input; the JTextPane refreshes only once the tree gets updated. I am using NetBeans and know empirically that a Java app can update lots of information without freezing the rest of the UI. How can it be done?

    NOTE 1: All those DefaultMutableTreeNodes are prepared outside the invokeAndWait.
    NOTE 2: When I replace invokeAndWait with invokeLater the tree doesn't get updated.
    NOTE 3: Found out that recursive tree expansion takes by far the most time.
    NOTE 4: I'm using a custom tree cell renderer; will try without and report.
    NOTE 4a: My tree cell renderer uses a map to cache and reuse created JTextComponents, depending on the tree node (as a key).
    CLUE 1: Wow! Without setting the custom cell renderer it's 10 times faster. I think I'll need a few good tutorials on writing custom tree cell renderers.
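    Given CLUE 1, a plausible fix is the standard renderer pattern: a tree cell renderer is used as a rubber stamp, so it should mutate and return one reusable component instead of building (and caching) JTextComponents per node. A minimal sketch:

        import java.awt.Component;
        import javax.swing.JTree;
        import javax.swing.tree.DefaultMutableTreeNode;
        import javax.swing.tree.DefaultTreeCellRenderer;

        // Reuses a single label-like component for every cell; no per-node allocation.
        class StampRenderer extends DefaultTreeCellRenderer {
            @Override
            public Component getTreeCellRendererComponent(JTree tree, Object value,
                    boolean sel, boolean expanded, boolean leaf, int row, boolean hasFocus) {
                super.getTreeCellRendererComponent(tree, value, sel, expanded, leaf, row, hasFocus);
                Object userObject = ((DefaultMutableTreeNode) value).getUserObject();
                setText(String.valueOf(userObject)); // cheap text derivation only
                return this;
            }
        }

    Node preparation can stay on a background thread (e.g. in a SwingWorker's doInBackground), with only the final setModel and expansion calls pushed to the EDT.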

    Read the article

  • Import/Export HTML5 localStorage data

    - by hamen
    Hi all, I'm working on a simple TODO list app based on the HTML5 localStorage feature: http://hamen.github.com/webnotes/ I'm wondering if it's possible to import/export data in some way. How could I provide an "Export note/Import note" feature so users are able to save their notes to their hard drive and import them into another browser profile? Thanks
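    A sketch of one way to do it, assuming the notes are ordinary localStorage entries:

        // Export: snapshot every key/value pair as a JSON string the user can save.
        function exportNotes() {
            var dump = {};
            for (var i = 0; i < localStorage.length; i++) {
                var key = localStorage.key(i);
                dump[key] = localStorage.getItem(key);
            }
            return JSON.stringify(dump);
        }

        // Import: feed the saved JSON back in (e.g. from a file input or textarea).
        function importNotes(json) {
            var dump = JSON.parse(json);
            for (var key in dump) {
                if (dump.hasOwnProperty(key)) {
                    localStorage.setItem(key, dump[key]);
                }
            }
        }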

    Read the article

  • Getting hibernate to log clob parameters

    - by SCdF
    (see here for the problem I'm trying to solve) How do you get Hibernate to log the clob values it's going to insert? It is logging other value types, such as Integer etc. I have the following in my log4j config:

        log4j.logger.net.sf.hibernate.SQL=DEBUG
        log4j.logger.org.hibernate.SQL=DEBUG
        log4j.logger.net.sf.hibernate.type=DEBUG
        log4j.logger.org.hibernate.type=DEBUG

    which produces output such as:

        (org.hibernate.SQL) insert into NoteSubstitutions (note, listIndex, substitution) values (?, ?, ?)
        (org.hibernate.type.LongType) binding '170650' to parameter: 1
        (org.hibernate.type.IntegerType) binding '0' to parameter: 2
        (org.hibernate.SQL) insert into NoteSubstitutions (note, listIndex, substitution) values (?, ?, ?)
        (org.hibernate.type.LongType) binding '170650' to parameter: 1
        (org.hibernate.type.IntegerType) binding '1' to parameter: 2

    However, you'll note that it never displays parameter: 3, which is our clob. What I would really want is something like:

        (org.hibernate.SQL) insert into NoteSubstitutions (note, listIndex, substitution) values (?, ?, ?)
        (org.hibernate.type.LongType) binding '170650' to parameter: 1
        (org.hibernate.type.IntegerType) binding '0' to parameter: 2
        (org.hibernate.type.ClobType) binding 'something' to parameter: 3
        (org.hibernate.SQL) insert into NoteSubstitutions (note, listIndex, substitution) values (?, ?, ?)
        (org.hibernate.type.LongType) binding '170650' to parameter: 1
        (org.hibernate.type.IntegerType) binding '1' to parameter: 2
        (org.hibernate.type.ClobType) binding 'something else' to parameter: 3

    How do I get it to show this in the log?
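    Two hedged avenues, depending on the Hibernate version in play. On Hibernate 3.6+, the parameter-binding logger moved and logs at TRACE rather than DEBUG, so CLOB bindings show up with:

        # assumption: Hibernate 3.6+ where binding is logged by
        # org.hibernate.type.descriptor.sql.BasicBinder
        log4j.logger.org.hibernate.type.descriptor.sql=TRACE

    For older versions (including the net.sf.hibernate 2.x loggers above), a JDBC-level proxy such as P6Spy or log4jdbc sits between Hibernate and the driver and logs the final statements with every parameter value, clobs included.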

    Read the article
