Search Results

Search found 10177 results on 408 pages for 'thumbs db'.


  • top tweets WebLogic Partner Community – March 2012

    - by JuergenKress
    Send us your tweets @wlscommunity #WebLogicCommunity and follow us on twitter http://twitter.com/wlscommunity PeterPaul ? RT @JDeveloper: EJB 3 Deployment guide for WebLogic Server Version: 10.3.4.0 dlvr.it/1J5VcV Andrejus Baranovskis ?Open ADF PopUp on Page Load fb.me/1Rx9LP3oW Sten Vesterli ? RT @OracleBlogs: Using the Oracle E-Business Suite SDK for Java on ADF Applications ow.ly/1hVKbB <- Neat! No more WS calls Java Buddy ?JavaFX 2.0: Example of MediaPlay java-buddy.blogspot.com/2012/03/javafx… Georges Saab Build improvements coming to #openJDK for #jdk8 mail.openjdk.java.net/pipermail/buil… NetBeans Team Share your #Java experience! JavaOne 2012 India call for papers: ow.ly/9xYg0 GlassFish ? GlassFish 3.1.2 Screencasts & Videos – bit.ly/zmQjn2 chriscmuir ?G+: New blog post: ADF Runtimes vs WLS versions as of JDeveloper 11.1.1.6.0 – bit.ly/y8tkgJ Michael Heinrichs New article: Creating a Sprite Animation with JavaFX blog.netopyr.com/2012/03/09/cre… Oracle WebLogic ? #WebLogic Devcast Webinar Series for March: Enterprise Java Scale Out, JPA, Distributed Grid Data Cache bit.ly/zeUXEV #Coherence Andrejus Baranovskis ?Extending Application Module for ADF BC Proxy User DB Connection fb.me/Bj1hLUqm OTNArchBeat ? Oracle Fusion Middleware on JDK 7 | Mark Nelson bit.ly/w7IroZ OTNArchBeat ? Java Champion Jonas Bonér Explains the Akka Platform bit.ly/x2GbXm Adam Bien ? (Java) FX Experience Tools–Feels Like Native Mac App: FX Experience Tools application comes with a native Mac O… bit.ly/waHF3H GlassFish ? GlassFish new recruit and Eclipse integration progress – bit.ly/y5eEkk JDeveloper & ADF Prototyping ADF Libraries dlvr.it/1Hhnw0 Eric Elzinga ?Oracle Fusion Middleware on JDK 7, bit.ly/xkphFQ ADF EMG ? Working with ADF in Arabic, Hebrew or other right-to-left-written language? Oracle UX asks for your help. groups.google.com/forum/?fromgro… Java ? A simple #JavaFX Login Form with a TRON like effect ow.ly/9n9AG JDeveloper & ADF ? Logging in Oracle ADF Applications dlvr.it/1HZhcX OTNArchBeat ? Oracle Cloud Conference: dates and locations worldwide bit.ly/ywXydR UK Oracle User Group ? Simon Haslam, ACE Director present on #WebLogic for DBAs at #oug_ire2012 j.mp/zG6vz3 @oraclewebcenter @oracleace #dublin Steven Davelaar ? Working with ADF and not a member of ADF EMG? You miss lots of valuable info, join now! sites.google.com/site/oracleemg… Simon Haslam @MaciejGruszka: Oracle plans to provide Forms & Reports plug-in for OVAB next year to help deployment. #ukoug MW SIG GlassFish ? Introducing JSR 357: Social Media API – bit.ly/yC8vez JAX London ? Are you coming to Java EE workshops by @AdamBien at JAX Days? Save £100 by registering today. #jaxdays #javaee jaxdays.com WebLogic Community ?Welcome to our Munich WebLogic 12c Bootcamp in Munich! If you also want to attend a training register for the Community oracle.com/partners/goto/… chriscmuir ? My first webcast for Oracle! (be kind) Basing ADF Business Component View Objects on More that one Entity Object bit.ly/ArKija OTNArchBeat ? Oracle Weblogic Server 12c is available on Oracle Solaris 11 (SPARC and x86) bit.ly/xE3TLg JDeveloper & ADF ? Basing ADF Business Component View Objects on More that one Entity Object – YouTube dlvr.it/1H93Qr OTNArchBeat ? Application-Driven Virtualization with Oracle Virtual Assembly Builder | Ronen Kofman bit.ly/wF1C1N Oracle WebLogic ? 
Steve Button’s blog: WebLogic Server Singleton Services ow.ly/1hOu4U Barbara Ann May ?@oracledevtools: New update: #NetBeans IDE 7.1.1, with support for #GlassFish 3.1.2 bit.ly/mOLcQd #java #developer OTNArchBeat ? Using Coherence with JDeveloper: bit.ly/AkoEQb WebLogic Community ? WebLogic Partner Community Newsletter February 2012 wp.me/p1LMIb-f3 GlassFish ? GlassFish 3.1.2 – new Podcast episode : bit.ly/wc6oBE Frank Nimphius ?Cool! Open JDeveloper 11.1.1.5, go help–>check for updates. First thing shown is that 11.1.1.6 is available. Never miss a new release Adam Bien ?5 Minutes (Video) With Java EE …Or With NetBeans + GlassFish: This screencast covers a 5-minute development of a… bit.ly/xkOJMf WebLogic Community ? Free Oracle WebLogic Certification Application Grid Implementation Specialist wp.me/p1LMIb-eT OTNArchBeat ?Oracle Coherence: First Steps Using Clusters and Basic API Usage | Ricardo Ferreira bit.ly/yYQ3Wz GlassFish ? JMS 2.0 Early Draft is here – bit.ly/ygT1VN OTNArchBeat ? Exalogic Networking Part 2 | The Old Toxophilist bit.ly/xuYMIi OTNArchBeat ?New Release: GlassFish Server 3.1.2. Read All About It! | Paul Davies bit.ly/AtlGxo Oracle WebLogic ?OTN Virtual Developer Day: #WebLogic 12c & #Coherence ost-conference on-demand page live with bonus #Virtualbox lab – bit.ly/xUy6BJ Oracle WebLogic ? Steve Button’s blog: WebLogic Server 11g (10.3.6) Documentation ow.ly/1hJgUB Lucas Jellema ? Just published an article on the AMIS blog: technology.amis.nl/2012/03/adf-11… ADF 11g – programmatically sorting rich table columns. Java Certification ? New Course! Learn how to create mobile applications using Java ME: bit.ly/xZj1Jh Simon Haslam ? @MaciejGruszka WebLogic 12c can run against 11g domain config without changes …and can rollback to 11. #ukoug MW SIG Justin Kestelyn ? Learn Advanced ADF, free and online bit.ly/wEKSRc WebLogic Partner Community For regular information become a member in the WebLogic Partner Community please visit: http://www.oracle.com/partners/goto/wls-emea ( OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: twitter,WebLogic,WebLogic Community,OPN,Oracle,Jürgen Kress,WebLogic 12c

    Read the article

  • Solaris 11 pkg fix is my new friend

    - by user12611829
    While putting together some examples of the Solaris 11 Automated Installer (AI), I managed to really mess up my system, to the point where AI was completely unusable. This was my fault as a combination of unfortunate incidents left some remnants that were causing problems, so I tried to clean things up. Unsuccessfully. Perhaps that was a bad idea (OK, it was a terrible idea), but this is Solaris 11 and there are a few more tricks in the sysadmin toolbox. Here's what I did. # rm -rf /install/* # rm -rf /var/ai # installadm create-service -n solaris11-x86 --imagepath /install/solaris11-x86 \ -s [email protected] Warning: Service svc:/network/dns/multicast:default is not online. Installation services will not be advertised via multicast DNS. Creating service from: [email protected] DOWNLOAD PKGS FILES XFER (MB) SPEED Completed 1/1 130/130 264.4/264.4 0B/s PHASE ITEMS Installing new actions 284/284 Updating package state database Done Updating image state Done Creating fast lookup database Done Reading search index Done Updating search index 1/1 Creating i386 service: solaris11-x86 Image path: /install/solaris11-x86 So far so good. Then comes an oops..... setup-service[168]: cd: /var/ai//service/.conf-templ: [No such file or directory] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ This is where you generally say a few things to yourself, and then promise to quit deleting configuration files and directories when you don't know what you are doing. Then you recall that the new Solaris 11 packaging system has some ability to correct common mistakes (like the one I just made). Let's give it a try. # pkg fix installadm Verifying: pkg://solaris/install/installadm ERROR dir: var/ai Group: 'root (0)' should be 'sys (3)' dir: var/ai/ai-webserver Missing: directory does not exist dir: var/ai/ai-webserver/compatibility-configuration Missing: directory does not exist dir: var/ai/ai-webserver/conf.d Missing: directory does not exist dir: var/ai/image-server Group: 'root (0)' should be 'sys (3)' dir: var/ai/image-server/cgi-bin Missing: directory does not exist dir: var/ai/image-server/images Group: 'root (0)' should be 'sys (3)' dir: var/ai/image-server/logs Missing: directory does not exist dir: var/ai/profile Missing: directory does not exist dir: var/ai/service Group: 'root (0)' should be 'sys (3)' dir: var/ai/service/.conf-templ Missing: directory does not exist dir: var/ai/service/.conf-templ/AI_data Missing: directory does not exist dir: var/ai/service/.conf-templ/AI_files Missing: directory does not exist file: var/ai/ai-webserver/ai-httpd-templ.conf Missing: regular file does not exist file: var/ai/service/.conf-templ/AI.db Missing: regular file does not exist file: var/ai/image-server/cgi-bin/cgi_get_manifest.py Missing: regular file does not exist Created ZFS snapshot: 2012-12-11-21:09:53 Repairing: pkg://solaris/install/installadm Creating Plan (Evaluating mediators): | DOWNLOAD PKGS FILES XFER (MB) SPEED Completed 1/1 3/3 0.0/0.0 0B/s PHASE ITEMS Updating modified actions 16/16 Updating image state Done Creating fast lookup database Done In just a few moments, IPS found the missing files and incorrect ownerships/permissions. Instead of reinstalling the system, or falling back to an earlier Live Upgrade boot environment, I was able to create my AI services and now all is well. # installadm create-service -n solaris11-x86 --imagepath /install/solaris11-x86 \ -s [email protected] Warning: Service svc:/network/dns/multicast:default is not online. Installation services will not be advertised via multicast DNS. 
Creating service from: [email protected] DOWNLOAD PKGS FILES XFER (MB) SPEED Completed 1/1 130/130 264.4/264.4 0B/s PHASE ITEMS Installing new actions 284/284 Updating package state database Done Updating image state Done Creating fast lookup database Done Reading search index Done Updating search index 1/1 Creating i386 service: solaris11-x86 Image path: /install/solaris11-x86 Refreshing install services Warning: mDNS registry of service solaris11-x86 could not be verified. Creating default-i386 alias Setting the default PXE bootfile(s) in the local DHCP configuration to: bios clients (arch 00:00): default-i386/boot/grub/pxegrub Refreshing install services Warning: mDNS registry of service default-i386 could not be verified. # installadm create-service -n solaris11u1-x86 --imagepath /install/solaris11u1-x86 \ -s [email protected] Warning: Service svc:/network/dns/multicast:default is not online. Installation services will not be advertised via multicast DNS. Creating service from: [email protected] DOWNLOAD PKGS FILES XFER (MB) SPEED Completed 1/1 514/514 292.3/292.3 0B/s PHASE ITEMS Installing new actions 661/661 Updating package state database Done Updating image state Done Creating fast lookup database Done Reading search index Done Updating search index 1/1 Creating i386 service: solaris11u1-x86 Image path: /install/solaris11u1-x86 Refreshing install services Warning: mDNS registry of service solaris11u1-x86 could not be verified. # installadm list Service Name Alias Of Status Arch Image Path ------------ -------- ------ ---- ---------- default-i386 solaris11-x86 on i386 /install/solaris11-x86 solaris11-x86 - on i386 /install/solaris11-x86 solaris11u1-x86 - on i386 /install/solaris11u1-x86 This is way way better than pkgchk -f in Solaris 10. I'm really beginning to like this new IPS packaging system.
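
    A minimal sketch of the verify-then-repair workflow described above, assuming the same installadm package; pkg verify only reports problems, while pkg fix actually repairs them (snapshotting first, as the output above shows):

        # Report missing files and wrong ownership/permissions without changing anything
        pkg verify installadm

        # Repair the package contents (IPS takes a ZFS snapshot before it touches anything)
        pkg fix installadm

        # Confirm the install services are healthy again
        installadm list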

    Read the article

  • Editing sqlcmdvariable nodes in SSDT Publish Profile files using msbuild

    - by jamiet
    Publish profile files are a new feature of SSDT database projects that enable you to package up all environment-specific properties into a single file for use at publish time; I have written about them before at Publish Profile Files in SQL Server Data Tools (SSDT) and if it wasn’t obvious from that blog post, I’m a big fan! As I have used Publish Profile files more and more I have realised that there may be times when you need to edit those Publish Profile files during your build process; you may think of such an operation as a kind of pre-processor step. In my case I have a sqlcmd variable called DeployTag; it holds a value representing the current build number that later gets inserted into a table using a Post-Deployment script (that’s a technique that I wrote about in Implementing SQL Server solutions using Visual Studio 2010 Database Projects – a compendium of project experiences – search for “Putting a build number into the DB”). Here are the contents of my Publish Profile file (simplified for demo purposes): Notice that DeployTag defaults to “UNKNOWN”. On my current project we are using msbuild scripts to control what gets built, and what I want to do is take the build number from our build engine and edit the Publish Profile files accordingly. Here is the pertinent portion of the msbuild script I came up with to do that: <ItemGroup> <Namespaces Include="myns"> <Prefix>myns</Prefix> <Uri>http://schemas.microsoft.com/developer/msbuild/2003</Uri> </Namespaces> </ItemGroup> <Target Name="UpdateBuildNumber"> <ItemGroup> <SSDTPublishFiles Include="$(DESTINATION)\**\$(CONFIGURATION)\**\*.publish.xml" /> </ItemGroup> <MSBuild.ExtensionPack.Xml.XmlFile Condition="%(SSDTPublishFiles.Identity) != ''" TaskAction="UpdateElement" File="%(SSDTPublishFiles.Identity)" Namespaces="@(Namespaces)" XPath="//myns:SqlCmdVariable[@Include='DeployTag']/myns:Value" InnerText="$(BuildNumber)"/> </Target> The important bits here are the definition of the namespace http://schemas.microsoft.com/developer/msbuild/2003 and the XPath expression //myns:SqlCmdVariable[@Include='DeployTag']/myns:Value. Some extra info: I use a fantastic tool called XMLPad to discover/test XPath expressions; read more at XMLPad – a new tool in my developer utility belt. MSBuild.ExtensionPack.Xml.XmlFile is an msbuild task used to edit XML files and is available from Mike Fourie’s MSBuild Extension Pack. I’m using a property called $(BuildNumber) to hold the value to substitute into the file, and $(DESTINATION)\**\$(CONFIGURATION)\**\*.publish.xml to define an ItemGroup of all of my Publish Profile files. Populating those properties is basic msbuild stuff and is therefore outside the scope of this blog post; however, if you want to learn more check out MSBuild properties & How To: Use Wildcards to Build All Files in a Directory. Hope this is useful! @Jamiet
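
    For context, a hedged example of how the UpdateBuildNumber target above might be invoked from the build engine; the project file name, drop folder, configuration and build number below are placeholders rather than values taken from the post:

        msbuild Build.proj /t:UpdateBuildNumber ^
            /p:BuildNumber=1.0.123.0 ^
            /p:DESTINATION=C:\Drops\MyDatabase ^
            /p:CONFIGURATION=Release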

    Read the article

  • MySQL server stopped working after upgrade

    - by umpirsky
    I upgraded to 12.04 and my MySQL server just stopped working. It throws: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) I tried to reinstall it from software center, but it fails with: Package operation failed The installation or removal of a software package failed. installArchives() failed: Selecting previously unselected package mysql-server. (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 243412 files and directories currently installed.) Unpacking mysql-server (from .../mysql-server_5.5.22-0ubuntu1_all.deb) ... Setting up mysql-server-5.5 (5.5.22-0ubuntu1) ... start: Job failed to start invoke-rc.d: initscript mysql, action "start" failed. dpkg: error processing mysql-server-5.5 (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.5; however: Package mysql-server-5.5 is not configured yet. dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already Errors were encountered while processing: mysql-server-5.5 mysql-server Error in function: Setting up mysql-server-5.5 (5.5.22-0ubuntu1) ... start: Job failed to start invoke-rc.d: initscript mysql, action "start" failed. dpkg: error processing mysql-server-5.5 (--configure): subprocess installed post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.5; however: Package mysql-server-5.5 is not configured yet. dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured I also tried: $ sudo apt-get install -f Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: mysql-server-5.5 mysql-server-core-5.5 Use 'apt-get autoremove' to remove them. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 1 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Setting up mysql-server-5.5 (5.5.22-0ubuntu1) ... start: Job failed to start invoke-rc.d: initscript mysql, action "start" failed. dpkg: error processing mysql-server-5.5 (--configure): subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: mysql-server-5.5 E: Sub-process /usr/bin/dpkg returned an error code (1) Any idea? EDIT: Crash report is being auto generated. EDIT: After trying and trying I got suggestion to do: #apt-get --purge remove mysql-server-5.1 mysql-server-5.5 Reading package lists... Done Building dependency tree Reading state information... 
Done Virtual packages like 'mysql-server-5.1' can't be removed The following packages will be REMOVED: mysql-server-5.5* 0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded. 1 not fully installed or removed. After this operation, 31.3 MB disk space will be freed. Do you want to continue [Y/n]? Y (Reading database ... 243407 files and directories currently installed.) Removing mysql-server-5.5 ... Purging configuration files for mysql-server-5.5 ... Processing triggers for ureadahead ... Processing triggers for man-db ... The most important part is: Virtual packages like 'mysql-server-5.1' can't be removed Any idea?
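
    Not an authoritative fix, but a common recovery path in this situation is to read mysqld's own error log (the dpkg output above only says the job failed to start), purge the real mysql-server packages rather than the virtual 5.1 name, and reinstall. A sketch, assuming /var/lib/mysql and /etc/mysql are backed up first:

        # Find out why mysqld actually refuses to start
        sudo tail -n 50 /var/log/mysql/error.log

        # Purge the real packages (mysql-server-5.1 is only a virtual name on 12.04)
        sudo apt-get --purge remove mysql-server mysql-server-5.5

        # Reinstall and check that the upstart job comes up
        sudo apt-get install mysql-server
        sudo service mysql status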

    Read the article

  • WebLogic 12.1.2 launch webcast on-demand & WebLogic Community feedback

    - by JuergenKress
    You missed the WebLogic & Coherence & JDeveloper 12.1.2 launch Webcast? Watch it on-demand: View On-Demand Version Read the Q&A from this Webcast Special thanks for Frank Munz and Simon Haslams our WebLogic Community experts on the phone!Thanks for the community for the great twitter feedback send us your tweets @wlscommunity #WebLogicCommunity WebLogic Community Join the #WebLogic Partner Community for the latest WebLogic 12.1.2 details and upcoming trainings http://www.WeblogicCommunity.com #OracleCAF Oracle WebLogic ?Unified update, patch, install process is a key component in reducing Ops cost in #WebLogic 12c #OracleCAF WebLogic Community Demo time #WebLogic cluster creation in seconds #OracleCAF by @mike_lehmann & Will Lyons #WebLogicCommunity pic.twitter.com/gyb8YqnKco Oracle WebLogic ?Dynamic server clusters to scale apps - coming up in #WebLogic 12c launch. #OracleCAF http://pub.vitrue.com/lBmE Oracle WebLogic ?Key feature of #WebLogic 12.1.2 release: @Oracle Database 12c integration. #OracleCAF #OracleDB OTNArchBeat ?Many tech posts on #weblogic available on #oracleace Rene van Wijk's blog. #OracleCAF http://pub.vitrue.com/O9Cn Frank Munz ?Correct me if I am wrong, but this could be the first WebLogic 12.1.2 training ever: http://www.ausoug.org.au/insync13/insync13-frank-munz.html … Cloud Foundation ?.#WebLogic 12.1.2 deep dive starts NOW during #OracleCAF launch. #Coherence up next in a few minutes. http://pub.vitrue.com/HPHM Maciej Gruszka ?Watch http://www.youtube.com/watch?v=KiCoO_QGBsU&feature=c4-overview&list=UUrEIV9YO17leE9aJWamKEPw … at #WebLogic channel with @dave_cabelus about Elastic JMS Oracle WebLogic ?Pick up the new book by @frankmunz on WLS 12c http://amzn.to/1ceppgZ #WebLogic #OracleCAF OTNArchBeat ?@OTNArchBeat 31 Jul @frankmunz 's #WebLogic YouTube channel >> watch and learn #OracleCAF http://pub.vitrue.com/B4IM WebLogic Community ?@frankmunz WebLogic expert build elastic clouds with #WebLogic http://www.munzandmore.com/blog #OracleCAF #WebLogicCommunity pic.twitter.com/UK5UKjXUVl OTNArchBeat @frankmunz 's blog, covering #weblog #cloud and more #OracleCAF http://pub.vitrue.com/N8ST OTNArchBeat ?oracladmin: @simon_haslam 's Oracle Fusion Middleware blog #OracleCAF #oracleace http://pub.vitrue.com/cwGx Yuri Grinshteyn ?Coherence uses WLS tooling, including deployment, and can be part of the WLS cluster. Well done there. #OracleCAF Maciej Gruszka ?#Coherence 12.1.2 auto updates data grid on changes inside DB thru #GoldenGate HotCache - another cool feature of #OracleCAF Oracle WebLogic ?From #OracleCAF launch: Tight integration tween WLS, #Coherence and #OracleDB. Dynamic clusters, OSS support & more http://pub.vitrue.com/3NL9 OTNArchBeat ?25 recent no-fluff technical articles on Oracle WebLogic #OracleCAF http://pub.vitrue.com/FEG5 Maciej Gruszka ?@dave_cabelus Elastic JMS is my favourite capability of #WebLogic 12.1.2 WebLogic Community ?Dynamic WebLogic Clustering COOL - what is Wour favorite 12.1.2 feature? #OracleCAF #WebLogicCommunity pic.twitter.com/T8lvDMJ1U0 WebLogic Community ?What is the coolest #WebLogic 12.1.2 feature? Let us know @wlscommunity http://weblogiccommunity.com/2013/07/30/launch-webcast-weblogic-coherence-jdeveloper-adf-12-1-2-00-july-31st-2013/ … #WebLogicCommunity Simon Haslam ?I'm speaking(!) 
on the panel session with @frankmunz & Matt Rosen on the CAF/WebLogic 12.1.2 launch: 6pm UK today https://event.on24.com/eventRegistration/EventLobbyServlet?target=registration.jsp&eventid=651242&partnerref=CAF_Launch_OCOM_07312013&sourcepage=register … Markus Eisele ?#WebLogic 12.1.2 - an Important New Release for Middleware Admins http://bit.ly/1cmtqhX by @simon_haslam OracleEnterpriseMgr ?The JVM diagnostics features of #EM12c are now shown in a demo by @hawkinsg1 at the #OracleCAF launch http://bit.ly/caflaunch Shaun Smith ?Curious about the new #Coherence 12.1.2 GoldenGate HotCache feature? I explain all on youtube: http://www.youtube.com/watch?v=O0TIG3hgbg0&feature=share&list=PLxqhEJ4CA3JtQwuPS8Qmd88lGX-gsIbHV … #OracleCAF Maciej Gruszka ?Try for Yourself -- Download the products Oracle WebLogic 12.1.2: http://www.oracle.com/technetwork/middleware/fusion-middleware/downloads/index.html … Oracle Coherence 12c: http://www.oracle.com/technetwork/middleware/coherence/downloads/index.htm … WebLogic Community ?What is Your favorite feature in #WebLogic 12.1.2 ? cool stuff! #OracleCAF #WebLogicCommunity http://WeblogicCommunity.com pic.twitter.com/xjR05tiaQj We encourage you to learn more about all the products by reviewing the following resources: Try for Yourself -- Download the products Oracle WebLogic 12.1.2 Oracle Coherence 12c Enterprise Manager Developer Tools WebLogic Community blog Learn more Read the Oracle WebLogic Business Whitepaper Read the Oracle Coherence Business Whitepaper Read the Oracle WebLogic and Oracle Database Integration Whitepaper Get Training from Oracle University Check out the Oracle WebLogic YouTube Channel Check out the Oracle Coherence YouTube Channel WebLogic Partner Community Registration The Webcast is available on-demand Watch Webcast Now WebLogic Community For regular information become a member in the WebLogic Partner Community please visit: http://www.oracle.com/partners/goto/wls-emea ( OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: Weblogic 12.1.2,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • BIND9 server not responding to external queries

    - by Twitchy
    I have set up a BIND server on my dedicated box which I want to host a nameserver for my domain on. When I use dig @202.169.196.59 nzserver.co.nz locally on the server I get the following response... ; <<>> DiG 9.8.1-P1 <<>> @202.169.196.59 nzserver.co.nz ; (1 server found) ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43773 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2 ;; QUESTION SECTION: ;nzserver.co.nz. IN A ;; ANSWER SECTION: nzserver.co.nz. 3600 IN A 202.169.196.59 ;; AUTHORITY SECTION: nzserver.co.nz. 3600 IN NS ns2.nzserver.co.nz. nzserver.co.nz. 3600 IN NS ns1.nzserver.co.nz. ;; ADDITIONAL SECTION: ns1.nzserver.co.nz. 3600 IN A 202.169.196.59 ns2.nzserver.co.nz. 3600 IN A 202.169.196.59 ;; Query time: 0 msec ;; SERVER: 202.169.196.59#53(202.169.196.59) ;; WHEN: Sat Oct 27 15:40:45 2012 ;; MSG SIZE rcvd: 116 Which is good, and is the output I want. But when simply using dig nzserver.co.nz I get... ; <<>> DiG 9.8.1-P1 <<>> nzserver.co.nz ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 16970 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;nzserver.co.nz. IN A ;; Query time: 308 msec ;; SERVER: 202.169.192.61#53(202.169.192.61) ;; WHEN: Sat Oct 27 17:09:12 2012 ;; MSG SIZE rcvd: 32 And if I use dig @202.169.196.59 nzserver.co.nz on another linux machine I get... ; <<>> DiG 9.7.3 <<>> @202.169.196.59 nzserver.co.nz ; (1 server found) ;; global options: +cmd ;; connection timed out; no servers could be reached Am I doing something wrong here? Port 53 is definitely open. /etc/bind/named.conf.options options { directory "/var/cache/bind"; forwarders { 202.169.192.61; 202.169.206.10; }; listen-on { 202.169.196.59; }; }; /etc/bind/named.conf.local zone "nzserver.co.nz" { type master; file "/etc/bind/nzserver.co.nz.zone"; }; /etc/bind/nzserver.co.nz.zone ; BIND db file for nzserver.co.nz $ORIGIN nzserver.co.nz. @ IN SOA ns1.nzserver.co.nz. mr.steven.french.gmail.com. ( 2012102606 28800 7200 864000 3600 ) NS ns1.nzserver.co.nz. NS ns2.nzserver.co.nz. MX 10 mail.nzserver.co.nz. @ IN A 202.169.196.59 * IN A 202.169.196.59 ns1 IN A 202.169.196.59 ns2 IN A 202.169.196.59 www IN A 202.169.196.59 mail IN A 202.169.196.59
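
    Not a definitive answer, but a few checks usually narrow this down: make sure the configuration and zone file parse cleanly, that named is bound to the public address on port 53 for both UDP and TCP, and that nothing in front of the server is filtering the port. A sketch (paths and addresses taken from the question):

        # Syntax-check the configuration and the zone file
        sudo named-checkconf
        sudo named-checkzone nzserver.co.nz /etc/bind/nzserver.co.nz.zone

        # Is named listening on 202.169.196.59:53 for both UDP and TCP?
        sudo netstat -lnpu | grep :53
        sudo netstat -lnpt | grep :53

        # Any local firewall rules dropping DNS traffic?
        sudo iptables -L -n -v | grep -w 53

        # From a remote machine, try TCP as well as UDP
        dig +tcp @202.169.196.59 nzserver.co.nz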

    Read the article

  • Extending QuickBooks Reporting with the QuickBooks ADO.NET Data Provider

    - by dataintegration
    The ADO.NET Provider for QuickBooks comes with several reports you may request from QuickBooks by default. However, there are many more that are not readily available. The ADO.NET Provider for QuickBooks makes it easy for you to create new reports and customize existing ones. In this article, we will illustrate how to create your own report and retrieve it from the Server Explorer in Visual Studio. For this example we will show how to create an Item Profitability Report. Creating the report script file Step 1: Download the sample reports available here. Extract them to a folder of your choice. Step 2: Make a copy of the ReportGeneralSummary.rsd file and rename it to ItemProfitability.rsd. Then open the file in any text editor. Step 3: Open the installation directory of the ADO.NET Provider for QuickBooks. Under the \db\ folder, locate the ReportJob.rsb file. Open this file in another text editor. Note: Although we are using ReportJob.rsb for this example, other reports may be contained in other Report*.rsb files. We recommend consulting the included help file and first locating the Report stored procedure and ReportType you are looking for. Otherwise, you may open each Report*.rsb file and look under the "reporttype" input for the report you are attempting to create. Step 4: First, let's rename the title of ItemProfitability.rsd. Near the top of the file you will see a title and description. Change the title to match the name of the file. Change the description to anything you like. For example: <rsb:info title="ItemProfitability" description="Executes my custom report."> Just below the Title, there are a number of columns. The Id represents the row number. The RowType represents the type of data returned by QuickBooks. The ColumnValue* columns represent all of the column data returned by QuickBooks. In some instances, we may need to add additional ColumnValue columns. Step 5: To add additional ColumnValue columns, simply copy the last column, paste it directly below, and continue increasing the numerical value at end of the attribute name. For example: <attr name="ColumnValue9" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/> <attr name="ColumnValue10" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/> <attr name="ColumnValue11" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/> <attr name="ColumnValue12" xs:type="string" readonly="true" required="false" desc="Represents a column of data."/> ... Caution: Do not rename the ColumnValue* definitions themselves. They are generalized so that we can understand each type of report returned by QuickBooks. Renaming them to something other than ColumnValue* will cause your columns to return with null values. Step 6: Now let's update the available inputs for the table. From the ReportJob.rsb file, copy all of the input elements into ItemProfitability under the "Psuedo-Column definitions" comment. You will be replacing the existing input elements in ItemProfitability with inputs from ReportJob. When you are done, it should look like this: <!-- Psuedo-Column definitions --> <input name="reporttype" description="The type of the report." 
value="ITEMESTIMATESVSACTUALS,ITEMPROFITABILITY,JOBESTIMATESVSACTUALSDETAIL,JOBESTIMATESVSACTUALSSUMMARY,JOBPROFITABILITYDETAIL,JOBPROFITABILITYSUMMARY," default="ITEMESTIMATESVSACTUALS" /> <input name="reportperiod" description="Report date range in the format (fromdate:todate), and either value may be omitted for an open ended range (e.g. 2009-12-25:). Supported date format: yyyy-MM-dd." /> <input name="reportdaterangemacro" description="Use a predefined date range." value="ALL,TODAY,THISWEEK,THISWEEKTODATE,THISMONTH,THISMONTHTODATE,THISQUARTER,THISQUARTERTODATE,THISYEAR,THISYEARTODATE,YESTERDAY,LASTWEEK,LASTWEEKTODATE,LASTMONTH,LASTMONTHTODATE,LASTQUARTER,LASTQUARTERTODATE,LASTYEAR,LASTYEARTODATE,NEXTWEEK,NEXTFOURWEEKS,NEXTMONTH,NEXTQUARTER,NEXTYEAR," default="ALL" /> ... Step 7: Now let's update the operationname attribute. This needs to match the same operationname used by ReportJob. After you have copied the correct value from ReportJob.rsb, the operationname in ItemProfitability should look like so: <rsb:set attr="operationname" value="qbReportJob"/> Step 8: There is one more thing we can do to make this a true Item Profitability report. We can remove the reporttype input and hardcode the value. To do this, copy and paste the rsb:set used for operationname. Then rename the attr and value to match the name and value you want to use. For example: <rsb:set attr="operationname" value="qbReportJob"/> <rsb:set attr="reporttype" value="ITEMPROFITABILITY"/> After this you can remove the input for reporttype. Now that you have your own report file, we can move on to displaying the report in the Visual Studio server explorer. Accessing the report through the Data Provider Step 1: Open Visual Studio. In the Server Explorer, configure a new connection with the QuickBooks Data Provider. Step 2: For the Location connection string property, enter the directory where the new report has been saved to. Step 3: The new report should appear as a new view in the Server Explorer. Let's retrieve data from it. Step 4: You can specify any inputs in the WHERE clause. New Report Example Script To help you get started using this new QuickBooks Data Provider report, you will need to download the QuickBooks ADO.NET Data Provider and the fully functional sample script.

    Read the article

  • Simple Preferred time control using Silverlight 3.

    - by mohanbrij
    Here I am going to show you a simple preferred time control, where you can select the day of the week and the time of the day. This can be used in lots of place where you may need to display the users preferred times. Sample screenshot is attached below. This control is developed using Silverlight 3 and VS2008, I am also attaching the source code with this post. This is a very basic example. You can download and customize if further for your requirement if you want. I am trying to explain in few words how this control works and what are the different ways in which you can customize it further. File: PreferredTimeControl.xaml, in this file I have just hardcoded the controls and their positions which you can see in the screenshot above. In this example, to change the start day of the week and time, you will have to go and change the design in XAML file, its not controlled by your properties or implementation classes. You can also customize it to change the start day of the week, Language, Display format, styles, etc, etc. File: PreferredTimeControl.xaml.cs, In this control using the code below, first I am taking all the checkbox from my form and store it in the Global Variable, which I can use across my page. List<CheckBox> checkBoxList; #region Constructor public PreferredTimeControl() { InitializeComponent(); GetCheckboxes();//Keep all the checkbox in List in the Load itself } #endregion #region Helper Methods private List<CheckBox> GetCheckboxes() { //Get all the CheckBoxes in the Form checkBoxList = new List<CheckBox>(); foreach (UIElement element in LayoutRoot.Children) { if (element.GetType().ToString() == "System.Windows.Controls.CheckBox") { checkBoxList.Add(element as CheckBox); } } return checkBoxList; } Then I am exposing the two methods which you can use in the container form to get and set the values in this controls. /// <summary> /// Set the Availability on the Form, with the Provided Timings /// </summary> /// <param name="selectedTimings">Provided timings comes from the DB in the form 11,12,13....37 /// Where 11 refers to Monday Morning, 12 Tuesday Morning, etc /// Here 1, 2, 3 is for Morning, Afternoon and Evening respectively, and for weekdays /// 1,2,3,4,5,6,7 where 1 is for Monday, Tuesday, Wednesday, Thrusday, Friday, Saturday and Sunday respectively /// So if we want Monday Morning, we can can denote it as 11, similarly for Saturday Evening we can write 36, etc /// </param> public void SetAvailibility(string selectedTimings) { foreach (CheckBox chk in checkBoxList) { chk.IsChecked = false; } if (!String.IsNullOrEmpty(selectedTimings)) { string[] selectedString = selectedTimings.Split(','); foreach (string selected in selectedString) { foreach (CheckBox chk in checkBoxList) { if (chk.Tag.ToString() == selected) { chk.IsChecked = true; } } } } } /// <summary> /// Gets the Availibility from the selected checkboxes /// </summary> /// <returns>String in the format of 11,12,13...41,42...31,32...37</returns> public string GetAvailibility() { string selectedText = string.Empty; foreach (CheckBox chk in GetCheckboxes()) { if (chk.IsChecked == true) { selectedText = chk.Tag.ToString() + "," + selectedText; } } return selectedText; }   In my example I am using the matrix format for Day and Time, for example Monday=1, Tuesday=2, Wednesday=3, Thursday = 4, Friday = 5, Saturday = 6, Sunday=7. And Morning = 1, Afternoon =2, Evening = 3. So if I want to represent Morning-Monday I will have to represent it as 11, Afternoon-Tuesday as 22, Morning-Wednesday as 13, etc. 
And in the other way to set the values in the control I am passing the values in the control in the same format as preferredTimeControl.SetAvailibility("11,12,13,16,23,22"); So this will set the checkbox value for Morning-Monday, Morning-Tuesday, Morning-Wednesday, Morning-Saturday, Afternoon of Tuesday and Afternoon of Wednesday. To implement this control, first I have to import this control in xmlns namespace as xmlns:controls="clr-namespace:PreferredTimeControlApp" and finally put in your page wherever you want, <Grid x:Name="LayoutRoot" Style="{StaticResource LayoutRootGridStyle}"> <Border x:Name="ContentBorder" Style="{StaticResource ContentBorderStyle}"> <controls:PreferredTimeControl x:Name="preferredTimeControl"></controls:PreferredTimeControl> </Border> </Grid> And in the code behind you can just include this code: private void InitializeControl() { preferredTimeControl.SetAvailibility("11,12,13,16,23,22"); } And you are ready to go. For more details you can refer to my code attached. I know there can be even simpler and better way to do this. Let me know if any other ideas. Sorry, Guys Still I have used Silverlight 3 and VS2008, as from the system I am uploading this is still not upgraded, but still you can use the same code with Silverlight 4 and VS2010 without any changes. May be just it will ask you to upgrade your project which will take care of rest. Download Source Code.   Thanks ~Brij

    Read the article

  • Token based Authentication for WCF HTTP/REST Services: Authentication

    - by Your DisplayName here!
    This post shows some of the implementation techniques for adding token and claims based security to HTTP/REST services written with WCF. For the theoretical background, see my previous post. Disclaimer The framework I am using/building here is not the only possible approach to tackle the problem. Based on customer feedback and requirements the code has gone through several iterations to a point where we think it is ready to handle most of the situations. Goals and requirements The framework should be able to handle typical scenarios like username/password based authentication, as well as token based authentication The framework should allow adding new supported token types Should work with WCF web programming model either self-host or IIS hosted Service code can rely on an IClaimsPrincipal on Thread.CurrentPrincipal that describes the client using claims-based identity Implementation overview In WCF the main extensibility point for this kind of security work is the ServiceAuthorizationManager. It gets invoked early enough in the pipeline, has access to the HTTP protocol details of the incoming request and can set Thread.CurrentPrincipal. The job of the SAM is simple: Check the Authorization header of the incoming HTTP request Check if a “registered” token (more on that later) is present If yes, validate the token using a security token handler, create the claims principal (including claims transformation) and set Thread.CurrentPrincipal If no, set an anonymous principal on Thread.CurrentPrincipal. By default, anonymous principals are denied access – so the request ends here with a 401 (more on that later). To wire up the custom authorization manager you need a custom service host – which in turn needs a custom service host factory. The full object model looks like this: Token handling A nice piece of existing WIF infrastructure are security token handlers. Their job is to serialize a received security token into a CLR representation, validate the token and turn the token into claims. The way this works with WS-Security based services is that WIF passes the name/namespace of the incoming token to WIF’s security token handler collection. This in turn finds out which token handler can deal with the token and returns the right instances. For HTTP based services we can do something very similar. The scheme on the Authorization header gives the service a hint how to deal with an incoming token. So the only missing link is a way to associate a token handler (or multiple token handlers) with a scheme and we are (almost) done. WIF already includes token handler for a variety of tokens like username/password or SAML 1.1/2.0. The accompanying sample has a implementation for a Simple Web Token (SWT) token handler, and as soon as JSON Web Token are ready, simply adding a corresponding token handler will add support for this token type, too. All supported schemes/token types are organized in a WebSecurityTokenHandlerCollectionManager and passed into the host factory/host/authorization manager. Adding support for basic authentication against a membership provider would e.g. 
look like this (in global.asax): var manager = new WebSecurityTokenHandlerCollectionManager(); manager.AddBasicAuthenticationHandler((username, password) => Membership.ValidateUser(username, password));   Adding support for Simple Web Tokens with a scheme of Bearer (the current OAuth2 scheme) requires passing in a issuer, audience and signature verification key: manager.AddSimpleWebTokenHandler(     "Bearer",     "http://identityserver.thinktecture.com/trust/initial",     "https://roadie/webservicesecurity/rest/",     "WFD7i8XRHsrUPEdwSisdHoHy08W3lM16Bk6SCT8ht6A="); In some situations, SAML token may be used as well. The following configures SAML support for a token coming from ADFS2: var registry = new ConfigurationBasedIssuerNameRegistry(); registry.AddTrustedIssuer( "d1 c5 b1 25 97 d0 36 94 65 1c e2 64 fe 48 06 01 35 f7 bd db", "ADFS"); var adfsConfig = new SecurityTokenHandlerConfiguration(); adfsConfig.AudienceRestriction.AllowedAudienceUris.Add( new Uri("https://roadie/webservicesecurity/rest/")); adfsConfig.IssuerNameRegistry = registry; adfsConfig.CertificateValidator = X509CertificateValidator.None; // token decryption (read from config) adfsConfig.ServiceTokenResolver = IdentityModelConfiguration.ServiceConfiguration.CreateAggregateTokenResolver();             manager.AddSaml11SecurityTokenHandler("SAML", adfsConfig);   Transformation The custom authorization manager will also try to invoke a configured claims authentication manager. This means that the standard WIF claims transformation logic can be used here as well. And even better, can be also shared with e.g. a “surrounding” web application. Error handling A WCF error handler takes care of turning “access denied” faults into 401 status codes and a message inspector adds the registered authentication schemes to the outgoing WWW-Authenticate header when a 401 occurs. The next post will conclude with authorization as well as the source code download.   (Wanna learn more about federation, WIF, claims, tokens etc.? Click here.)
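
    For completeness, this is roughly what client requests look like for the two schemes wired up above; the base address is the audience URI from the sample, while the resource path and token value are made-up placeholders:

        # Basic authentication against the membership provider
        curl -u alice:password https://roadie/webservicesecurity/rest/claims

        # OAuth2-style bearer scheme carrying a Simple Web Token
        curl -H "Authorization: Bearer <swt-token-here>" https://roadie/webservicesecurity/rest/claims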

    Read the article

  • ASP.NET MVC3 checkbox dropdownlist create [migrated]

    - by user95381
    i'm new in asp.net MVC and I/m use view model to poppulate the dropdown list and group of checkboxes. I use SQL Server 2012, where have many to many relationships between Students - Books; Student - Cities. I need collect StudentName, one city and many books for one student. I have next questions: 1. How can I get the values from database to my StudentBookCityViewModel? 2. How can I save the values to my database in [HttpPost] Create method? Here is the code: MODEL public class Student { public int StudentId { get; set; } public string StudentName { get; set; } public ICollection<Book> Books { get; set; } public ICollection<City> Cities { get; set; } } public class Book { public int BookId { get; set; } public string BookName { get; set; } public bool IsSelected { get; set; } public ICollection<Student> Students { get; set; } } public class City { public int CityId { get; set; } public string CityName { get; set; } public bool IsSelected { get; set; } public ICollection<Student> Students { get; set; } } VIEW MODEL public class StudentBookCityViewModel { public string StudentName { get; set; } public IList<Book> Books { get; set; } public StudentBookCityViewModel() { Books = new[] { new Book {BookName = "Title1", IsSelected = false}, new Book {BookName = "Title2", IsSelected = false}, new Book {BookName = "Title3", IsSelected = false} }.ToList(); } public string City { get; set; } public IEnumerable<SelectListItem> CityValues { get { return new[] { new SelectListItem {Value = "Value1", Text = "Text1"}, new SelectListItem {Value = "Value2", Text = "Text2"}, new SelectListItem {Value = "Value3", Text = "Text3"} }; } } } Context public class EFDbContext : DbContext{ public EFDbContext(string connectionString) { Database.Connection.ConnectionString = connectionString; } public DbSet<Book> Books { get; set; } public DbSet<Student> Students { get; set; } public DbSet<City> Cities { get; set; } protected override void OnModelCreating(DbModelBuilder modelBuilder) { modelBuilder.Entity<Book>() .HasMany(x => x.Students).WithMany(x => x.Books) .Map(x => x.MapLeftKey("BookId").MapRightKey("StudentId").ToTable("StudentBooks")); modelBuilder.Entity<City>() .HasMany(x => x.Students).WithMany(x => x.Cities) .Map(x => x.MapLeftKey("CityId").MapRightKey("StudentId").ToTable("StudentCities")); } } Controller public ActionResult Create() { return View(); } [HttpPost] public ActionResult Create() { //I don't understand how I can save values to db context.SaveChanges(); return RedirectToAction("Index"); } View @model UsingEFNew.ViewModels.StudentBookCityViewModel @using (Html.BeginForm()) { Your Name: @Html.TextBoxFor(model = model.StudentName) <div>Genre:</div> <div> @Html.DropDownListFor(model => model.City, Model.CityValues) </div> <div>Books:</div> <div> @for (int i = 0; i < Model.Books.Count; i++) { <div> @Html.HiddenFor(x => x.Books[i].BookId) @Html.CheckBoxFor(x => x.Books[i].IsSelected) @Html.LabelFor(x => x.Books[i].IsSelected, Model.Books[i].BookName) </div> } </div> <div> <input id="btnSubmit" type="submit" value="Submit" /> </div> </div> }

    Read the article

  • Memory Efficient Windows SOA Server

    - by Antony Reynolds
    Installing a Memory Efficient SOA Suite 11.1.1.6 on Windows Server Well 11.1.1.6 is now available for download so I thought I would build a Windows Server environment to run it.  I will minimize the memory footprint of the installation by putting all functionality into the Admin Server of the SOA Suite domain. Required Software 64-bit JDK SOA Suite If you want 64-bit then choose “Generic” rather than “Microsoft Windows 32bit JVM” or “Linux 32bit JVM” This has links to all the required software. If you choose “Generic” then the Repository Creation Utility link does not show, you still need this so change the platform to “Microsoft Windows 32bit JVM” or “Linux 32bit JVM” to get the software. Similarly if you need a database then you need to change the platform to get the link to XE for Windows or Linux. If possible I recommend installing a 64-bit JDK as this allows you to assign more memory to individual JVMs. Windows XE will work, but it is better if you can use a full Oracle database because of the limitations on XE that sometimes cause it to run out of space with large or multiple SOA deployments. Installation Steps The following flow chart outlines the steps required in installing and configuring SOA Suite. The steps in the diagram are explained below. 64-bit? Is a 64-bit installation required?  The Windows & Linux installers will install 32-bit versions of the Sun JDK and JRockit.  A separate JDK must be installed for 64-bit. Install 64-bit JDK The 64-bit JDK can be either Hotspot or JRockit.  You can choose either JDK 1.7 or 1.6. Install WebLogic If you are using 64-bit then install WebLogic using “java –jar wls1036_generic.jar”.  Make sure you include Coherence in the installation, the easiest way to do this is to accept the “Typical” installation. SOA Suite Required? If you are not installing SOA Suite then you can jump straight ahead and create a WebLogic domain. Install SOA Suite Run the SOA Suite installer and point it at the existing Middleware Home created for WebLogic.  Note to run the SOA installer on Windows the user must have admin privileges.  I also found that on Windows Server 2008R2 I had to start the installer from a command prompt with administrative privileges, granting it privileges when it ran caused it to ignore the jreLoc parameter. Database Available? Do you have access to a database into which you can install the SOA schema.  SOA Suite requires access to an Oracle database (it is supported on other databases but I would always use an oracle database). Install Database I use an 11gR2 Oracle database to avoid XE limitations.  Make sure that you set the database character set to be unicode (AL32UTF8).  I also disabled the new security settings because they get in the way for a developer database.  Don’t forget to check that number of processes is at least 150 and number of sessions is not set, or is set to at least 200 (in the DB init parameters). Run RCU The SOA Suite database schemas are created by running the Repository Creation Utility.  Install the “SOA and BPM Infrastructure” component to support SOA Suite.  If you keep the schema prefix as “DEV” then the config wizard is easier to complete. Run Config Wizard The Config wizard creates the domain which hosts the WebLogic server instances.  To get a minimum footprint SOA installation choose the “Oracle Enterprise Manager” and “Oracle SOA Suite for developers” products.  All other required products will be automatically selected. 
The “for developers” installs target the appropriate components at the AdminServer rather than creating a separate managed server to house them.  This reduces the number of JVMs required to run the system and hence the amount of memory required.  This is not suitable for anything other than a developer environment as it mixes the admin and runtime functions together in a single server.  It also takes a long time to load all the required modules, making start up a slow process. If it exists I would recommend running the config wizard found in the “oracle_common/common/bin” directory under the middleware home.  This should have access to all the templates, including SOA. If you also want to run BAM in the same JVM as everything else then you need to “Select Optional Configuration” for “Managed Servers, Clusters and Machines”. To target BAM at the AdminServer delete the “bam_server1” managed server that is created by default.  This will result in BAM being targeted at the AdminServer. Installation Issues I had a few problems when I came to test everything in my mega-JVM. Following applications were not targeted and so I needed to target them at the AdminServer: b2bui composer Healthcare UI FMW Welcome Page Application (11.1.0.0.0) How Memory Efficient is It? On a Windows 2008R2 Server running under VirtualBox I was able to bring up both the 11gR2 database and SOA/BPM/BAM in 3G memory.  I allocated a minimum 512M to the PermGen and a minimum of 1.5G for the heap.  The setting from setSOADomainEnv are shown below: set DEFAULT_MEM_ARGS=-Xms1536m -Xmx2048m set PORT_MEM_ARGS=-Xms1536m -Xmx2048m set DEFAULT_MEM_ARGS=%DEFAULT_MEM_ARGS% -XX:PermSize=512m -XX:MaxPermSize=768m set PORT_MEM_ARGS=%PORT_MEM_ARGS% -XX:PermSize=512m -XX:MaxPermSize=768m I arrived at these numbers by monitoring JVM memory usage in JConsole. Task Manager showed total system memory usage at 2.9G – just below the 3G I allocated to the VM. Performance is not stellar but it runs and I could run JDeveloper alongside it on my 8G laptop, so in that sense it was a result!
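
    If you prefer a command line to JConsole while tuning these numbers, the standard JDK tools give the same picture (assuming the JDK bin directory is on the PATH; the process id below is illustrative):

        REM Find the AdminServer JVM
        jps -lv | findstr weblogic.Server

        REM Sample heap, PermGen and GC activity every 5 seconds
        jstat -gcutil 4812 5s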

    Read the article

  • Man pages not finding entry

    - by Mike
    So, I'm not sure what is going on with my system (ubuntu 12.04), but my man pages do not seem to be working. I try man gcc and get the following response No manual entry for gcc See 'man 7 undocumented' for help when manual pages are not available. However I see the man entry in /usr/share/man/man1/gcc.1.gz Here is what my /etc/manpath.config file looks like # manpath.config # # This file is used by the man-db package to configure the man and cat paths. # It is also used to provide a manpath for those without one by examining # their PATH environment variable. For details see the manpath(5) man page. # # Lines beginning with `#' are comments and are ignored. Any combination of # tabs or spaces may be used as `whitespace' separators. # # There are three mappings allowed in this file: # -------------------------------------------------------- # MANDATORY_MANPATH manpath_element # MANPATH_MAP path_element manpath_element # MANDB_MAP global_manpath [relative_catpath] #--------------------------------------------------------- # every automatically generated MANPATH includes these fields # #MANDATORY_MANPATH /usr/src/pvm3/man # MANDATORY_MANPATH /usr/man MANDATORY_MANPATH /usr/share/man MANDATORY_MANPATH /usr/local/share/man #--------------------------------------------------------- # set up PATH to MANPATH mapping # ie. what man tree holds man pages for what binary directory. # # *PATH* -> *MANPATH* # MANPATH_MAP /bin /usr/share/man MANPATH_MAP /usr/bin /usr/share/man MANPATH_MAP /sbin /usr/share/man MANPATH_MAP /usr/sbin /usr/share/man MANPATH_MAP /usr/local/bin /usr/local/man MANPATH_MAP /usr/local/bin /usr/local/share/man MANPATH_MAP /usr/local/sbin /usr/local/man MANPATH_MAP /usr/local/sbin /usr/local/share/man MANPATH_MAP /usr/X11R6/bin /usr/X11R6/man MANPATH_MAP /usr/bin/X11 /usr/X11R6/man MANPATH_MAP /usr/games /usr/share/man MANPATH_MAP /opt/bin /opt/man MANPATH_MAP /opt/sbin /opt/man #--------------------------------------------------------- # For a manpath element to be treated as a system manpath (as most of those # above should normally be), it must be mentioned below. Each line may have # an optional extra string indicating the catpath associated with the # manpath. If no catpath string is used, the catpath will default to the # given manpath. # # You *must* provide all system manpaths, including manpaths for alternate # operating systems, locale specific manpaths, and combinations of both, if # they exist, otherwise the permissions of the user running man/mandb will # be used to manipulate the manual pages. Also, mandb will not initialise # the database cache for any manpaths not mentioned below unless explicitly # requested to do so. # # In a per-user configuration file, this directive only controls the # location of catpaths and the creation of database caches; it has no effect # on privileges. # # Any manpaths that are subdirectories of other manpaths must be mentioned # *before* the containing manpath. E.g. /usr/man/preformat must be listed # before /usr/man. # # *MANPATH* -> *CATPATH* # MANDB_MAP /usr/man /var/cache/man/fsstnd MANDB_MAP /usr/share/man /var/cache/man MANDB_MAP /usr/local/man /var/cache/man/oldlocal MANDB_MAP /usr/local/share/man /var/cache/man/local MANDB_MAP /usr/X11R6/man /var/cache/man/X11R6 MANDB_MAP /opt/man /var/cache/man/opt # #--------------------------------------------------------- # Program definitions. These are commented out by default as the value # of the definition is already the default. To change: uncomment a # definition and modify it. 
# #DEFINE pager pager -s #DEFINE cat cat #DEFINE tr tr '\255\267\264\327' '\055\157\047\170' #DEFINE grep grep #DEFINE troff groff -mandoc #DEFINE nroff nroff -mandoc #DEFINE eqn eqn #DEFINE neqn neqn #DEFINE tbl tbl #DEFINE col col #DEFINE vgrind vgrind #DEFINE refer refer #DEFINE grap grap #DEFINE pic pic -S # #DEFINE compressor gzip -c7 #--------------------------------------------------------- # Misc definitions: same as program definitions above. # #DEFINE whatis_grep_flags -i #DEFINE apropos_grep_flags -iEw #DEFINE apropos_regex_grep_flags -iE #--------------------------------------------------------- # Section names. Manual sections will be searched in the order listed here; # the default is 1, n, l, 8, 3, 0, 2, 5, 4, 9, 6, 7. Multiple SECTION # directives may be given for clarity, and will be concatenated together in # the expected way. # If a particular extension is not in this list (say, 1mh), it will be # displayed with the rest of the section it belongs to. The effect of this # is that you only need to explicitly list extensions if you want to force a # particular order. Sections with extensions should usually be adjacent to # their main section (e.g. "1 1mh 8 ..."). # SECTION 1 n l 8 3 2 3posix 3pm 3perl 5 4 9 6 7 # #--------------------------------------------------------- # Range of terminal widths permitted when displaying cat pages. If the # terminal falls outside this range, cat pages will not be created (if # missing) or displayed. # #MINCATWIDTH 80 #MAXCATWIDTH 80 # # If CATWIDTH is set to a non-zero number, cat pages will always be # formatted for a terminal of the given width, regardless of the width of # the terminal actually being used. This should generally be within the # range set by MINCATWIDTH and MAXCATWIDTH. # #CATWIDTH 0 # #--------------------------------------------------------- # Flags. # NOCACHE keeps man from creating cat pages. #NOCACHE Thanks for any help (p.s. even 'man man' fails) Edit: When I run ls -l /usr/share/man/man1/gcc* I get the following output lrwxrwxrwx 1 root root 12 May 27 15:41 /usr/share/man/man1/gcc.1.gz -> gcc-4.6.1.gz -rw-r--r-- 1 root root 217776 Apr 15 17:34 /usr/share/man/man1/gcc-4.6.1.gz
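
    Not a guaranteed fix, but since the page is present on disk and the config looks stock, the usual next steps are to rebuild the man-db index and trace which paths man actually searches; a sketch:

        # Rebuild the manual page index caches
        sudo mandb -c

        # Show the manpath that will actually be used
        manpath

        # Trace exactly where man looks for the gcc page
        man -d gcc 2>&1 | less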

    Read the article

  • Using SQL Source Control with Fortress or Vault – Part 1

    - by AjarnMark
    I am fanatical when it comes to managing the source code for my company.  Everything that we build (in source form) gets put into our source control management system.  And I’m not just talking about the UI and middle-tier code written in C# and ASP.NET, but also the back-end database stuff, which at times has been a pain.  We even script out our Scheduled Jobs and keep a copy of those under source control. The UI and middle-tier stuff has long been easy to manage as we mostly use Visual Studio which has integration with source control systems built in.  But the SQL code has been a little harder to deal with.  I have been doing this for many years, well before Microsoft came up with Data Dude, so I had already established a methodology that, while not as smooth as VS, nonetheless let me keep things well controlled, and allowed doing my database development in my tool of choice, Query Analyzer in days gone by, and now SQL Server Management Studio.  It just makes sense to me that if I’m going to do database development, let’s use the database tool set.  (Although, I have to admit I was pretty impressed with the demo of Juneau that Don Box did at the PASS Summit this year.)  So as I was saying, I had developed a methodology that worked well for us (and I’ll probably outline in a future post) but it could use some improvement. When Solutions and Projects were first introduced in SQL Management Studio, I thought we were finally going to get our same experience that we have in Visual Studio.  Well, let’s say I was underwhelmed by Version 1 in SQL 2005, and apparently so were enough other people that by the time SQL 2008 came out, Microsoft decided that Solutions and Projects would be deprecated and completely removed from a future version.  So much for that idea. Then I came across SQL Source Control from Red-Gate.  I have used several tools from Red-Gate in the past, including my favorites SQL Compare, SQL Prompt, and SQL Refactor.  SQL Prompt is worth its weight in gold, and the others are great, too.  Earlier this year, we upgraded from our earlier product bundles to the new Developer Bundle, and in the process added SQL Source Control to our collection.  I thought this might really be the golden ticket I was looking for.  But my hopes were quickly dashed when I discovered that it only integrated with Microsoft Team Foundation Server and Subversion as the source code repositories.  We have been using SourceGear’s Vault and Fortress products for years, and I wholeheartedly endorse them.  So I was out of luck for the time being, although there were a number of people voting for Vault/Fortress support on their feedback forum (as did I) so I had hope that maybe next year I could look at it again. But just a couple of weeks ago, I was pleasantly surprised to receive notice in my email that Red-Gate had an Early Access version of SQL Source Control that worked with Vault and Fortress, so I quickly downloaded it and have been putting it through its paces.  So far, I really like what I see, and I have been quite impressed with Red-Gate’s responsiveness when I have contacted them with any issues or concerns that I have had.  I have had several communications with Gyorgy Pocsi at Red-Gate and he has been immensely helpful and responsive. I must say that development with SQL Source Control is very different from what I have been used to.  
This post is getting long enough, so I’ll save some of the details for a separate write-up, but the short story is that in my regular mode, it’s all about the script files.  Script files are King and you dare not make a change to the database other than by way of a script file, or you are in deep trouble.  With SQL Source Control, you make your changes to your development database however you like.  I still prefer writing most of my changes in T-SQL, but you can also use any of the GUI functionality of SSMS to make your changes, and SQL Source Control “manages” the script for you.  Basically, when you first link your database to source control, the tool generates scripts for every primary object (tables and their indexes are together in one script, not broken out into separate scripts like DB Projects do) and those scripts are checked into your source control.  So, if you needed to, you could still do a GET from your source control repository and build the database from scratch.  But for the day-to-day work, SQL Source Control uses the same technique as SQL Compare to determine what changes have been made to your development database and how to represent those in your repository scripts.  I think that once I retrain myself to just work in the database and quit worrying about having to find and open the right script file, that this will actually make us more efficient. And for deployment purposes, SQL Source Control integrates with the full SQL Compare utility to produce a synchronization script (or do a live sync).  This is similar in concept to Microsoft’s DACPAC, if you’re familiar with that. If you are not currently keeping your database development efforts under source control, definitely examine this tool.  If you already have a methodology that is working for you, then I still think this is worth a review and comparison to your current approach.  You may find it more efficient.  But remember that the version which integrates with Vault/Fortress is still in pre-release mode, so treat it with a little caution.  I have found it to be fairly stable, but there was one bug that I found which had inconvenient side-effects and could have really been frustrating if I had been running this on my normal active development machine.  However, I can verify that that bug has been fixed in a more recent build version (did I mention Red-Gate’s responsiveness?).

    Read the article

  • How to install iodbc in Ubuntu 12.10

    - by Hanynowsky
    I need to install the library iodbc (depends on libodbc2) in my Quantal machine. Yet there is creepy dependency problem. This was replaced somehow by unixodbc which I don't have, installed. Here is what I get when I try to install : sudo apt-get install libiodbc2-dev Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: libiodbc2-dev : Depends: libiodbc2 (= 3.52.7-2build2) but it is not going to be installed Depends: iodbc (= 3.52.7-2build2) but it is not going to be installed E: Unable to correct problems, you have held broken packages. I can tell that iodbc conflicts with odbcinst. Yet, I cannot remove it due to the following: sudo apt-get remove odbcinst Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: akonadi-backend-mysql calligra-l10n-engb kde-l10n-engb kdevelop-l10n kdevelop-php-docs-l10n kdevelop-php-l10n libakonadi-kabc4 libakonadi-notes4 libakonadiprotocolinternals1 libboost-thread1.49.0 libdmtx0a libgpgme++2 libkcalcore4 libkdgantt2 libkholidays4 libkimap4 libkldap4 libkmbox4 libkmime4 libkolabxml0 libkpgp4 libkresources4 libksieve4 libprison0 libqgpgme1 libqrencode3 libxerces-c3.1 Use 'apt-get autoremove' to remove them. The following packages will be REMOVED: akonadi-server k3b k3b-i18n katepart kde-runtime kdelibs-bin kdelibs5-plugins kdepim-runtime kdepim-strigi-plugins kdepimlibs-kio-plugins kdoctools kget kmag kmail kmix kmousetool ksystemlog kubuntu-debug-installer language-pack-kde-ar language-pack-kde-en libakonadi-calendar4 libakonadi-contact4 libakonadi-kcal4 libakonadi-kde4 libakonadi-kmime4 libcalendarsupport4 libincidenceeditorsng4 libk3b6 libkabc4 libkactivities-bin libkactivities6 libkalarmcal2 libkatepartinterfaces4 libkcal4 libkcalutils4 libkcddb4 libkde3support4 libkdepim4 libkdepimdbusinterfaces4 libkdewebkit5 libkemoticons4 libkfile4 libkgapi0 libkhtml5 libkio5 libkleo4 libkmanagesieve4 libkmediaplayer4 libknewstuff3-4 libknotifyconfig4 libkolab0 libkonq-common libkonq5abi1 libkontactinterface4 libkparts4 libkpimidentities4 libkpimtextedit4 libkpimutils4 libkprintutils4 libksieveui4 libktexteditor4 libktnef4 libktorrent4 libkworkspace4abi2 libkxmlrpcclient4 libmailcommon4 libmailimporter4 libmailtransport4 libmessagecomposer4 libmessagecore4 libmessagelist4 libmessageviewer4 libmicroblog4 libnepomuk4 libnepomukcore4abi1 libnepomukquery4a libnepomuksync4 libnepomukutils4 libplasma3 libsoprano4 libtemplateparser4 libvirtodbc0 nepomuk-core odbcinst odbcinst1debian2 plasma-scriptengine-javascript qapt-batch soprano-daemon virtuoso-minimal 0 upgraded, 0 newly installed, 89 to remove and 0 not upgraded. After this operation, 126 MB disk space will be freed. Do you want to continue [Y/n]? Other info: Why do the "iodbc" and "libmyodbc" packages conflict with each other? The root problem is that I need to use a new feature introduced in the latest MySQL Workbench builds (DB Migration) which uses odbc for that matter. Here is what Mysql doc says about it : Linux The Migration Wizard uses iODBC as a driver manager for all of its ODBC connections in Linux. 
This may give you some troubles because most Linux distributions provide ODBC drivers compiled against unixODBC. This is another driver manager not supported by MySQL Workbench so you won’t be able to use those drivers unless you compile them against iODBC. Here’s what you should do. Make sure that you have iODBC installed. If you are using Debian, Ubuntu or another Debian based distro, type this command in your terminal: $ sudo apt-get install iodbc libiodbc2-dev libpq-dev libssl-dev For RPM based distros (RedHat, Fedora, etc.) type this command instead of the previous one: $ sudo yum install iodbc iodbc-dev libpqxx-devel openssl-devel Now we need to install the PostgreSQL ODBC drivers. Download the psqlODBC source tarball from here. Use the latest version available for download. As of this writing the latest version corresponds to the file psqlodbc-09.01.0200.tar.gz. Extract this tarball to a directory in your hard drive and open a terminal and cd into that directory. Type this in the terminal window: $ ./configure --with-iodbc --enable-pthreads $ make $ sudo make install Verify that you have the file psqlodbcw.so in the /usr/local/lib directory. This package seems to pose the problem probably: dpkg -s odbcinst1debian2 Package: odbcinst1debian2 Status: install ok installed Priority: optional Section: libs Installed-Size: 241 Maintainer: Ubuntu Developers <[email protected]> Architecture: amd64 Multi-Arch: same Source: unixodbc Version: 2.2.14p2-5ubuntu4 Replaces: unixodbc (<< 2.1.1-2) Depends: libc6 (>= 2.14), libltdl7 (>= 2.4.2), odbcinst Pre-Depends: multiarch-support Breaks: libiodbc2, libmyodbc (<< 5.1.6-2), odbc-postgresql (<< 1:09.00.0310-1.1), tdsodbc (<< 0.82-8) Conflicts: odbcinst1, odbcinst1debian1 Description: Support library for accessing odbc ini files This package contains the libodbcinst library from unixodbc, a library used by ODBC drivers for reading their configuration settings from /etc/odbc.ini and ~/.odbc.ini. . Also contained in this package are the driver setup plugins, which describe the features supported by individual ODBC drivers. Homepage: http://www.unixodbc.org/ Original-Maintainer: Steve Langasek <[email protected]> I have no broken packages:

    Read the article

  • Nesting Linq-to-Objects query within Linq-to-Entities query – what is happening under the covers?

    - by carewithl
    var numbers = new int[] { 1, 2, 3, 4, 5 }; var contacts = from c in context.Contacts where c.ContactID == numbers.Max() | c.ContactID == numbers.FirstOrDefault() select c; foreach (var item in contacts) Console.WriteLine(item.ContactID); Linq-to-Entities query is first translated into Linq expression tree, which is then converted by Object Services into command tree. And if Linq-to-Entities query nests Linq-to-Objects query, then this nested query also gets translated into an expression tree. a) I assume none of the operators of the nested Linq-to-Objects query actually get executed, but instead data provider for particular DB (or perhaps Object Services) knows how to transform the logic of Linq-to-Objects operators into appropriate SQL statements? b) Data provider knows how to create equivalent SQL statements only for some of the Linq-to-Objects operators? c) Similarly, data provider knows how to create equivalent SQL statements only for some of the non-Linq methods in the Net Framework class library? EDIT: I know only some Sql so I can't be completely sure, but reading Sql query generated for the above code it seems data provider didn't actually execute numbers.Max method, but instead just somehow figured out that numbers.Max should return the maximum value and then proceed to include in generated Sql query a call to TSQL's build-in MAX function. It also put all the values held by numbers array into a Sql query. SELECT CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN '0X0X' ELSE '0X1X' END AS [C1], [Extent1].[ContactID] AS [ContactID], [Extent1].[FirstName] AS [FirstName], [Extent1].[LastName] AS [LastName], [Extent1].[Title] AS [Title], [Extent1].[AddDate] AS [AddDate], [Extent1].[ModifiedDate] AS [ModifiedDate], [Extent1].[RowVersion] AS [RowVersion], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[CustomerTypeID] END AS [C2], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[InitialDate] END AS [C3], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[PrimaryDesintation] END AS [C4], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[SecondaryDestination] END AS [C5], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[PrimaryActivity] END AS [C6], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[SecondaryActivity] END AS [C7], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[Notes] END AS [C8], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[RowVersion] END AS [C9], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[BirthDate] END AS [C10], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[HeightInches] END AS [C11], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[WeightPounds] END AS [C12], CASE WHEN (([Project1].[C1] = 1) AND ([Project1].[C1] IS NOT NULL)) THEN [Project1].[DietaryRestrictions] END AS [C13] FROM [dbo].[Contact] AS [Extent1] LEFT OUTER JOIN (SELECT [Extent2].[ContactID] AS [ContactID], [Extent2].[BirthDate] AS [BirthDate], [Extent2].[HeightInches] AS [HeightInches], [Extent2].[WeightPounds] AS [WeightPounds], [Extent2].[DietaryRestrictions] AS [DietaryRestrictions], [Extent3].[CustomerTypeID] AS [CustomerTypeID], [Extent3].[InitialDate] AS [InitialDate], 
[Extent3].[PrimaryDesintation] AS [PrimaryDesintation], [Extent3].[SecondaryDestination] AS [SecondaryDestination], [Extent3].[PrimaryActivity] AS [PrimaryActivity], [Extent3].[SecondaryActivity] AS [SecondaryActivity], [Extent3].[Notes] AS [Notes], [Extent3].[RowVersion] AS [RowVersion], cast(1 as bit) AS [C1] FROM [dbo].[ContactPersonalInfo] AS [Extent2] INNER JOIN [dbo].[Customers] AS [Extent3] ON [Extent2].[ContactID] = [Extent3].[ContactID]) AS [Project1] ON [Extent1].[ContactID] = [Project1].[ContactID] LEFT OUTER JOIN (SELECT TOP (1) [c].[C1] AS [C1] FROM (SELECT [UnionAll3].[C1] AS [C1] FROM (SELECT [UnionAll2].[C1] AS [C1] FROM (SELECT [UnionAll1].[C1] AS [C1] FROM (SELECT 1 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable1] UNION ALL SELECT 2 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable2]) AS [UnionAll1] UNION ALL SELECT 3 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable3]) AS [UnionAll2] UNION ALL SELECT 4 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable4]) AS [UnionAll3] UNION ALL SELECT 5 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable5]) AS [c]) AS [Limit1] ON 1 = 1 LEFT OUTER JOIN (SELECT TOP (1) [c].[C1] AS [C1] FROM (SELECT [UnionAll7].[C1] AS [C1] FROM (SELECT [UnionAll6].[C1] AS [C1] FROM (SELECT [UnionAll5].[C1] AS [C1] FROM (SELECT 1 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable6] UNION ALL SELECT 2 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable7]) AS [UnionAll5] UNION ALL SELECT 3 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable8]) AS [UnionAll6] UNION ALL SELECT 4 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable9]) AS [UnionAll7] UNION ALL SELECT 5 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable10]) AS [c]) AS [Limit2] ON 1 = 1 CROSS JOIN (SELECT MAX([UnionAll12].[C1]) AS [A1] FROM (SELECT [UnionAll11].[C1] AS [C1] FROM (SELECT [UnionAll10].[C1] AS [C1] FROM (SELECT [UnionAll9].[C1] AS [C1] FROM (SELECT 1 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable11] UNION ALL SELECT 2 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable12]) AS [UnionAll9] UNION ALL SELECT 3 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable13]) AS [UnionAll10] UNION ALL SELECT 4 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable14]) AS [UnionAll11] UNION ALL SELECT 5 AS [C1] FROM (SELECT 1 AS X) AS [SingleRowTable15]) AS [UnionAll12]) AS [GroupBy1] WHERE [Extent1].[ContactID] IN ([GroupBy1].[A1], (CASE WHEN ([Limit1].[C1] IS NULL) THEN 0 ELSE [Limit2].[C1] END)) Based on this, is it possible that Linq2Entities provider indeed doesn't execute non-Linq and Linq-to-Object methods, but instead creates equivalent SQL statements for some of them ( and for others it throws an exception )? Thank you in advance
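To make point (a) concrete, here is a short C# sketch (not from the original question) contrasting the two evaluation paths. It assumes the same entity context and Contacts set used in the question; no other names are real.

    // Hedged sketch: assumes the same "context" and Contacts entity set shown above.
    var numbers = new int[] { 1, 2, 3, 4, 5 };

    // 1) As written in the question: numbers.Max() and numbers.FirstOrDefault()
    //    are captured inside the expression tree. The provider never runs them
    //    as IL; it recognises the operators and emits equivalent T-SQL (the
    //    UNION ALL of the array values with MAX and TOP(1) seen in the trace).
    var translated = from c in context.Contacts
                     where c.ContactID == numbers.Max()
                        || c.ContactID == numbers.FirstOrDefault()
                     select c;

    // 2) Evaluating the Linq-to-Objects operators on the client first: only the
    //    resulting constants reach the provider, so they appear as parameters
    //    in the generated SQL instead of being re-derived on the server.
    int max = numbers.Max();              // runs in the CLR
    int first = numbers.FirstOrDefault(); // runs in the CLR
    var preEvaluated = from c in context.Contacts
                       where c.ContactID == max || c.ContactID == first
                       select c;

Either form returns the same contacts; the difference is only where the Max/FirstOrDefault logic is evaluated.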

    Read the article

  • Downloading stuff from Oracle: an example

    - by user12587121
    Introduction Oracle has a lot of software on offer.  Components of the stack can evolve at different rates and different versions of the components may be in use at any given time.  All this means that even the process of downloading the bits you need can be somewhat daunting.  Here, by way of example, and hopefully to convince you that there is method in the downloading madness,  we describe how to go about downloading the bits for Oracle Identity Manager  (OIM) 11.1.1.5.Firstly, a couple of preliminary points: Folks with Oracle products already installed and looking for bug fixes, patch bundles or patch sets would go directly to the Oracle support website. This Oracle document is a comprehensive description of the Oracle FMW download process and the licensing that applies to downloaded software.   Downloading Oracle Identity Manager 11.1.1.5     To be sure we download the right versions, first locate the Certification Matrix for OIM 11.1.1.5: first go to the Fusion Certification Page then go to the “System Requirements and Supported Platforms for Oracle Identity and Access Management 11gR1” link. Let’s assume you have a 64 bit Linux Machine and an Oracle database already.  Then our  goal is to end up with a list of files like the following: jdk-6u29-linux-x64.bin                    (Java JDK)V26017-01.zip                             (the Repository Creation Utility to create the DB schemas)wls1035_generic.jar                       (the Weblogic Application Server)ofm_iam_generic_11.1.1.5.0_disk1_1of1.zip (the Identity Managament bits)ofm_soa_generic_11.1.1.5.0_disk1_1of2.zip (the SOA bits)ofm_soa_generic_11.1.1.5.0_disk1_2of2.zip jdevstudio11115install.exe                (optional: JDeveloper IDE)soa-jdev-extension.zip                    (optional: SOA extensions for JDeveloper) Downloading the bits 1.    Download the Java JDK, 64 bit version 1.6.0_24+.2.    Download the RCU: here you will see that the RCU is mentioned on the Identity Management home page but no link is provided.  Do not panic.  Due to the amount and turnover of software available only the latest versions are available for download from the main Oracle site.  Over time software gets moved on to the Oracle edelivery site and it is here that we find the RCU version we require: a.    Go to edelivery: https://edelivery.oracle.com b.    Choose Pack ‘ Oracle Fusion Middleware’ and ‘Linux x86-64’ c.    Click on ‘Oracle Fusion Middleware 11g Media Pack for Linux x86-64’ d.    Download: ‘Oracle Fusion Middleware Repository Creation Utility 11g (11.1.1.5.0) for Linux x86’ (V26017.zip) 3.    Download the Weblogic Application Server: WLS 10.3.54.    Download the Oracle Identity Manager bits: one point to clarify here is that currently  the Identity Management bits come in two trains, essentially one for the Directory Services piece and the other for the Access Management and Identity Management parts.  We need to be careful not to confuse the two, in particular to be clear which of the trains is being referred to by  the documentation: a.   So, with this in mind, go to ‘ Oracle Identity and Access Management (11.1.1.5.0)’ and download Disk1. 5.    Download the SOA bits: a.    Go to the edelivery area as for the RCU and download: i.    Oracle SOA Suite 11g Patch Set 4 (11.1.1.5.0) (Part 1 of 2) ii.    Oracle SOA Suite 11g Patch Set 4 (11.1.1.5.0) (Part 2 of 2) 6.    You will want to download some development tooling (for plugins or BPEL workflow development): a.    
Download JDeveloper 11.1.1.5 (11.1.1.6 may work, but it is best to stick to the versions that correspond to the WLS version we are using) b.    Go to the site for SOA tools and download the SOA Composite Editor 11.1.1.5. That’s it, you may proceed to the installation. 

    Read the article

  • Architect Day: Boston - Agenda Update

    - by Bob Rhubart
    Here's the latest information on the session schedule and content for Oracle Technology Network Architect Day in Boston, MA on September 12, 2012. Registration is open, but seating is limited. When: September 12, 2012 8:30am – 5:00pm Where: Boston Marriott Burlington One Burlington Mall Road Burlington, MA 01803 Register now Agenda Time Session Title Room 8:30 am - 9:00 am Registration and Continental Breakfast Salon E Foyer 9:00 am - 9:15 am Welcome and Opening Comments | Bob Rhubart Salon E 9:15 am - 10:00 am Engineered Systems: Oracle's Vision for the Future | Ralf Dossmann Oracle's Exadata and Exalogic are impressive products in their own right. But working in combination they deliver unparalleled transaction processing performance with up to a 30x increase over existing legacy systems, with the lowest cost of ownership over a 3 or 5 year basis than any other hardware. In this session you'll learn how to leverage Oracle's Engineered Systems within your enterprise to deliver record-breaking performance at the lowest TCO. Salon E 10:00 am - 10:30 am Securing Public and Private Clouds | Anton Nielsen Long before the term "Cloud Computing" existed, Oracle technologies supported and promoted the concept. Centralized data with remote users has been at the core of these technologies for decades. The public cloud, and extending private clouds to the internet, though, has added security challenges never imagined decades ago. This presentation will examine a real life security breach and introduce architecture, technologies and policies to secure public and private clouds.  Salon E 10:30 am - 10:45 am Break 10:45 am - 11:30 am Breakout Sessions (pick one) Cloud Computing - Making IT Simple | Scott Mattoon The road to Cloud Computing is not without a few bumps. This session will help to smooth out your journey by tackling some of the potential complications. We'll examine whether standardization is a prerequisite for the Cloud. We'll look at why refactoring isn't just for application code. We'll check out deployable entities and their simplification via higher levels of abstraction. And we'll close out the session with a look at engineered systems and modular clouds. Salon E Innovations in Grid Computing with Oracle Coherence | Rob Misek Learn how Coherence can increase the availability, scalability and performance of your existing applications with its advanced low-latency data-grid technologies. Also hear some interesting industry-specific use cases that customers had implemented and how Oracle is integrating Coherence into its Enterprise Java stack. Salon C 11:30 am - 12:15 pm Breakout Sessions (pick one) Enterprise Strategy for Cloud Security | Dave Chappelle Security is high on the list of concerns for many organizations as they evaluate their cloud computing options. This session will examine security in the context of the various forms of cloud computing. We'll consider technical and non-technical aspects of security, and discuss several strategies for cloud computing, from both the consumer and producer perspectives. Salon E Oracle Enterprise Manager | Avi Huber Much more than a DB management tool, Oracle Enterprise Manager provides management and monitoring coverage for the entire Oracle stack, and beyond. This session will concentrate on the middleware management functionality in OEM, starting with Real User Experience monitoring, through AppServer management, and into deep-dive Java diagnostics. 
We’ll discuss Business Driven Application Management (BDAM) and the benefits of top-down monitoring. Lastly, we’ll demonstrate how to trace a specific user experience problem, through a multitier SOA application, to its root cause, deep in the JVM. Salon C 12:15 pm - 1:15 pm Lunch Salon E Foyer 1:15 pm - 2:00 pm Panel Discussion - Q&A with session speakers Salon E 2:00 pm - 2:45 pm Breakout Sessions (pick one) Oracle Cloud Reference Architecture | Anbu Krishnaswamy Cloud initiatives are beginning to dominate enterprise IT roadmaps. Successful adoption of Cloud and the subsequent governance challenges warrant a Cloud reference architecture that is applied consistently across the enterprise. This presentation will answer the important questions: What exactly is a Cloud, why you need it, what changes it will bring to the enterprise, and what are the key capabilities of a Cloud infrastructure are - using Oracle's Cloud Reference Architecture, which is part of the IT Strategies from Oracle (ITSO) Cloud Enterprise Technology Strategy (ETS). Salon E 21st Century SOA | Peter Belknap Service Oriented Architecture has evolved from concept to reality in the last decade. The right methodology coupled with mature SOA technologies has helped customers demonstrate success in both innovation and ROI. In this session you will learn how Oracle SOA Suite's orchestration, virtualization, and governance capabilities provide the infrastructure to run mission critical business and system applications. And we'll take a special look at the convergence of SOA & BPM using Oracle's Unified technology stack. Salon C 2:45 pm - 3:00 pm Break 3:00 pm - 4:00 pm Roundtable Discussion Salon E 4:00 pm - 4:15 pm Closing Comments & Readouts from Roundtables Salon E 4:15 pm - 5:00 pm Networking / Reception Salon E Foyer Note: Session schedule and content subject to change.

    Read the article

  • While installing updates I get "Package operation Failed" for Ubuntu 12.04

    - by user54395
    i get the following response in the details:- Please help nstallArchives() failed: perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory (Reading database ... (Reading database ... 5%% (Reading database ... 10%% (Reading database ... 15%% (Reading database ... 20%% (Reading database ... 25%% (Reading database ... 30%% (Reading database ... 35%% (Reading database ... 40%% (Reading database ... 45%% (Reading database ... 50%% (Reading database ... 55%% (Reading database ... 60%% (Reading database ... 65%% (Reading database ... 70%% (Reading database ... 75%% (Reading database ... 80%% (Reading database ... 85%% (Reading database ... 90%% (Reading database ... 95%% (Reading database ... 100%% (Reading database ... 427340 files and directories currently installed.) Preparing to replace thunderbird-trunk-globalmenu 14.0~a1~hg20120409r9862.91177-0ubuntu1~umd1 (using .../thunderbird-trunk-globalmenu_14.0~a1~hg20120409r9866.91235-0ubuntu1~umd1_i386.deb) ... Unpacking replacement thunderbird-trunk-globalmenu ... Preparing to replace thunderbird-trunk 14.0~a1~hg20120409r9862.91177-0ubuntu1~umd1 (using .../thunderbird-trunk_14.0~a1~hg20120409r9866.91235-0ubuntu1~umd1_i386.deb) ... Unpacking replacement thunderbird-trunk ... Processing triggers for man-db ... locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for gnome-menus ... Processing triggers for desktop-file-utils ... 
Setting up crossplatformui (1.0.27) ... Rather than invoking init scripts through /etc/init.d, use the service(8) utility, e.g. service acpid restart Since the script you are attempting to invoke has been converted to an Upstart job, you may also use the stop(8) and then start(8) utilities, e.g. stop acpid ; start acpid. The restart(8) utility is also available. acpid stop/waiting acpid start/running, process 5286 package libqtgui4 exist QT_VERSION = 4 make -C /lib/modules/3.2.0-22-generic/build M=/usr/local/bin/ztemtApp/zteusbserial/below2.6.27 modules make[1]: Entering directory /usr/src/linux-headers-3.2.0-22-generic' CC [M] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.o /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:34:28: fatal error: linux/smp_lock.h: No such file or directory compilation terminated. make[2]: *** [/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.o] Error 1 make[1]: *** [_module_/usr/local/bin/ztemtApp/zteusbserial/below2.6.27] Error 2 make[1]: Leaving directory/usr/src/linux-headers-3.2.0-22-generic' make: * [modules] Error 2 dpkg: error processing crossplatformui (--configure): subprocess installed post-installation script returned error exit status 2 No apport report written because MaxReports is reached already Setting up thunderbird-trunk (14.0~a1~hg20120409r9866.91235-0ubuntu1~umd1) ... Setting up thunderbird-trunk-globalmenu (14.0~a1~hg20120409r9866.91235-0ubuntu1~umd1) ... Errors were encountered while processing: crossplatformui Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1) Setting up crossplatformui (1.0.27) ... Rather than invoking init scripts through /etc/init.d, use the service(8) utility, e.g. service acpid restart Since the script you are attempting to invoke has been converted to an Upstart job, you may also use the stop(8) and then start(8) utilities, e.g. stop acpid ; start acpid. The restart(8) utility is also available. acpid stop/waiting acpid start/running, process 5541 package libqtgui4 exist QT_VERSION = 4 make -C /lib/modules/3.2.0-22-generic/build M=/usr/local/bin/ztemtApp/zteusbserial/below2.6.27 modules make[1]: Entering directory /usr/src/linux-headers-3.2.0-22-generic' CC [M] /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.o /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:34:28: fatal error: linux/smp_lock.h: No such file or directory compilation terminated. make[2]: *** [/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.o] Error 1 make[1]: *** [_module_/usr/local/bin/ztemtApp/zteusbserial/below2.6.27] Error 2 make[1]: Leaving directory/usr/src/linux-headers-3.2.0-22-generic' make: * [modules] Error 2 dpkg: error processing crossplatformui (--configure): subprocess installed post-installation script returned error exit status 2

    Read the article

  • Getting a handle on mobile data

    - by Eric Jensen
    Written by Ashok Joshi. The proliferation of mobile devices in the corporate world is both a blessing as well as a challenge.  Mobile devices improve productivity and the velocity of business for the end users; on the other hand, IT departments need to manage the corporate data and applications that run on these devices. Oracle Database Mobile Server (DMS for short) provides a simple and effective way to deal with the management challenge.  DMS supports data synchronization between a central Oracle database server and data on mobile devices.  It also provides authentication, encryption and application and device management.  Finally, DMS is a highly scalable solution that can be used to manage hundreds of thousands of devices.   Here’s a simplified outline of how such a solution might work. Each device runs local sync and mgmt agents that handle bidirectional data flow with an Oracle enterprise backend, run remote commands, and provide status to the management console. For example, mobile admins could monitor multiple networks of mobile devices, upgrade their software remotely, and even destroy the local database on a compromised device. DMS supports either Oracle Berkeley DB or SQLite for device-local storage, and runs on a wide variety of mobile platforms. The schema for the device-local database is pretty simple – it contains the name of the application that’s installed on the device as well as details such as product name, version number, time of last access etc. Each mobile user has an account on the monitoring system.  DMS supports authentication via the Oracle database authentication mechanisms or alternately, via an external authentication server such as Oracle Identity Management. DMS also provides the option of encrypting the data on disk as well as while it is being synchronized. Whenever a device connects with DMS, it sends the list of all local application changes to the server; the server updates the central repository with this information.  Synchronization can be triggered on-demand, whenever there’s a change on the device (e.g. new application installed or an existing application removed) or via a rule-based schedule (e.g. every Saturday). Synchronization is very fast and efficient, since only the changes are propagated.  This includes resume capability; should synchronization be interrupted for any reason, the next synchronization will resume where the previous synchronization was interrupted. If the device should be lost or stolen, DMS has the capability to remove the applications and/or data from the device. This ability to control access to sensitive data and applications is critical in the corporate environment. 
The central repository also allows the IT manager to track the kinds of applications that mobile users use and recommend patches and upgrades, while still allowing the mobile user full control over what applications s/he downloads and uses on the device.  This is useful since most devices are used for corporate as well as personal information. In certain restricted use scenarios, the IT manager can also control whether a certain application can be installed on a mobile device.  Should an unapproved application be installed, it can easily be removed the next time the device connects with the central server. Oracle Database mobile server provides a simple, effective and highly secure and scalable solution for managing the data and applications for the mobile workforce.

    Read the article

  • Design review for application facing memory issues

    - by Mr Moose
    I apologise in advance for the length of this post, but I want to paint an accurate picture of the problems my app is facing and then pose some questions below; I am trying to address some self-inflicted design pain that is now leading to my application crashing due to out of memory errors. An abridged description of the problem domain is as follows: The application takes in a “dataset” that consists of numerous text files containing related data. An individual text file within the dataset usually contains approx 20 “headers” that contain metadata about the data it contains. It also contains a large tab delimited section containing data that is related to data in one of the other text files contained within the dataset. The number of columns per file is very variable from 2 to 256+ columns. The original application was written to allow users to load a dataset, map certain columns of each of the files, basically indicating key information on the files to show how they are related, as well as identify a few expected column names. Once this is done, a validation process takes place to enforce various rules and ensure that all the relationships between the files are valid. Once that is done, the data is imported into a SQL Server database. The database design is an EAV (Entity-Attribute-Value) model used to cater for the variable columns per file. I know EAV has its detractors, but in this case, I feel it was a reasonable choice given the disparate data and variable number of columns submitted in each dataset. The memory problem Given the fact the combined size of all text files was at most about 5 megs, and in an effort to reduce the database transaction time, it was decided to read ALL the data from files into memory and then perform the following: perform all the validation whilst the data was in memory; relate it using an object model; start a DB transaction and write the key columns row by row, noting the Id of the written row (all tables in the database utilise identity columns), then the Id of the newly written row is applied to all related data. Once all related data had been updated with the key information to which it relates, these records are written using SqlBulkCopy. Due to our EAV model, we essentially have x columns by y rows to write, where x can be 256+ and rows are often into the tens of thousands. Once all the data is written without error (can take several minutes for large datasets), commit the transaction. The problem now comes from the fact we are now receiving individual files containing over 30 megs of data. In a dataset, we can receive any number of files. We’ve started seeing datasets of around 100 megs coming in and I expect it is only going to get bigger from here on in. With files of this size, data can’t even be read into memory without the app falling over, let alone be validated and imported. I anticipate having to modify large chunks of the code to allow validation to occur by parsing files line by line and am not exactly decided on how to handle the import and transactions. Potential improvements I’ve wondered about using GUIDs to relate the data rather than relying on identity fields. This would allow data to be related prior to writing to the database. This would certainly increase the storage required though, especially in an EAV design. Would you think this is a reasonable thing to try, or do I simply persist with identity fields (natural keys can’t be trusted to be unique across all submitters). 
Use of staging tables to get data into the database and only performing the transaction to copy data from staging area to actual destination tables. Questions For systems like this that import large quantities of data, how to you go about keeping transactions small. I’ve kept them as small as possible in the current design, but they are still active for several minutes and write hundreds of thousands of records in one transaction. Is there a better solution? The tab delimited data section is read into a DataTable to be viewed in a grid. I don’t need the full functionality of a DataTable, so I suspect it is overkill. Is there anyway to turn off various features of DataTables to make them more lightweight? Are there any other obvious things you would do in this situation to minimise the memory footprint of the application described above? Thanks for your kind attention.
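As a concrete illustration of the line-by-line parsing plus staging-table idea raised above, here is a minimal C# sketch. It is not the poster's code: the connection string, the staging table name (StagingValues) and its three columns are invented placeholders, and validation is omitted.

    using System.Data;
    using System.Data.SqlClient;
    using System.IO;

    // Minimal sketch: streams a tab-delimited file into a staging table in
    // fixed-size batches via SqlBulkCopy, instead of materialising the whole
    // file in memory first.
    static class StreamingImport
    {
        public static void Import(string path, string connectionString, int batchSize = 5000)
        {
            var buffer = new DataTable();
            buffer.Columns.Add("RowNumber", typeof(int));
            buffer.Columns.Add("ColumnIndex", typeof(int));
            buffer.Columns.Add("Value", typeof(string));

            using (var reader = new StreamReader(path))
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                using (var bulk = new SqlBulkCopy(connection) { DestinationTableName = "StagingValues" })
                {
                    string line;
                    int rowNumber = 0;
                    while ((line = reader.ReadLine()) != null)   // one line in memory at a time
                    {
                        rowNumber++;
                        var fields = line.Split('\t');
                        for (int i = 0; i < fields.Length; i++)
                            buffer.Rows.Add(rowNumber, i, fields[i]);

                        if (buffer.Rows.Count >= batchSize)      // flush in modest batches
                        {
                            bulk.WriteToServer(buffer);
                            buffer.Clear();
                        }
                    }
                    if (buffer.Rows.Count > 0)
                        bulk.WriteToServer(buffer);              // flush the remainder
                }
            }
        }
    }

From the staging table, a single set-based INSERT…SELECT (or MERGE) can then move the rows into the real EAV tables, which keeps the final transaction on the database side short.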

    Read the article

  • Infiniband: a highperformance network fabric - Part I

    - by Karoly Vegh
    Introduction:At the OpenWorld this year I managed to chat with interesting people again - one of them answering Infiniband deepdive questions with ease by coffee turned out to be one of Oracle's IB engineers, Ted Kim, who actually actively participates in the Infiniband Trade Association and integrates Oracle solutions with this highspeed network. This is why I love attending OOW. He granted me an hour of his time to talk about IB. This post is mostly based on that tech interview.Start of the actual post: Traditionally datatransfer between servers and storage elements happens in networks with up to 10 gigabit/seconds or in SANs with up to 8 gbps fiberchannel connections. Happens. Well, data rather trickles through.But nowadays data amounts grow well over the TeraByte order of magnitude, and multisocket/multicore/multithread Servers hunger data that these transfer technologies just can't deliver fast enough, causing all CPUs of this world do one thing at the same speed - waiting for data. And once again, I/O is the bottleneck in computing. FC and Ethernet can't keep up. We have half-TB SSDs, dozens of TB RAM to store data to be modified in, but can't transfer it. Can't backup fast enough, can't replicate fast enough, can't synchronize fast enough, can't load fast enough. The bad news is, everyone is used to this, like back in the '80s everyone was used to start compile jobs and go for a coffee. Or on vacation. The good news is, there's an alternative. Not so-called "bleeding-edge" 8gbps, but (as of now) 56. Not layers of overhead, but low latency. And it is available now. It has been for a while, actually. Welcome to the world of Infiniband. Short history:Infiniband was born as a result of joint efforts of HPAQ, IBM, Intel, Sun and Microsoft. They planned to implement a next-generation I/O fabric, in the 90s. In the 2000s Infiniband (from now on: IB) was quite popular in the high-performance computing field, powering most of the top500 supercomputers. Then in the middle of the decade, Oracle realized its potential and used it as an interconnect backbone for the first Database Machine, the first Exadata. Since then, IB has been booming, Oracle utilizes and supports it in a large set of its HW products, it is the backbone of the famous Engineered Systems: Exadata, SPARC SuperCluster, Exalogic, OVCA and even the new DB backup/recovery box. You can also use it to make servers talk highspeed IP to eachother, or to a ZFS Storage Appliance. Following Oracle's lead, even IBM has jumped the wagon, and leverages IB in its PureFlex systems, their first InfiniBand Machines.IB Structural Overview: If you want to use IB in your servers, the first thing you will need is PCI cards, in IB terms Host Channel Adapters, or HCAs. Just like NICs for Ethernet, or HBAs for FC. In these you plug an IB cable, going to an IB switch providing connection to other IB HCAs. Of course you're going to need drivers for those in your OS. Yes, these are long-available for Solaris and Linux. Now, what protocols can you talk over IB? There's a range of choices. See, IB isn't accepting package loss like Ethernet does, and hence doesn't need to rely on TCP/IP as a workaround for resends. That is, you still can run IP over IB (IPoIB), and that is used in various cases for control functionality, but the datatransfer can run over more efficient protocols - like native IB. About PCI connectivity: IB cards, as you see are fast. They bring low latency, which is just as important as their bandwidth. 
Current IB cards run at 56 gbit/s. That is slightly more than double the capacity of a PCI Gen2 slot (of ~25 gbit/s). And IB cards are usually equipped with two ports - that is, altogether you'd need 112 gbit/s PCI slots to be able to utilize FDR IB cards in an active-active fashion. PCI Gen3 slots provide you with around ~50gbps. This is why most IB cards are configured in an active-standby way if both ports are used. Once again the PCI slot is the bottleneck. Anyway, the new Oracle servers are equipped with Gen3 PCI slots, and the new IB HCAs support those too. Oracle utilizes the QDR HCAs, running at 40 gbps gross, which translates to 32 gbps of net traffic due to the 10:8 signal-to-data information ratio. Consolidation techniques: Technology never stops evolving. Mellanox is working on the 100 gbps (EDR) version already, which will be optical, since signal technology doesn't allow EDR to be copper. Also, I hear you say "100gbps? I will never use/need that much". Are you sure? Have you considered consolidation scenarios, where (for example with Oracle Virtual Network) you could consolidate your platform to a high-density virtualized solution providing many virtual 10gbps interfaces through that 100gbps? Technology never stops evolving. I still remember when a 10mbps network was impressively fast. Back in those days, 16MB of RAM was a lot. Now we usually run servers with around 100,000 times more RAM. If network infrastructure speeds could grow as fast as main memory capacities, we'd have a different landscape now :) You can utilize SRIOV as well for consolidation. That is, if you run LDoms (aka Oracle VM Server for SPARC) you do not have to add physical IB cards to all your guest LDoms, and you do not need to run VIO devices through the hypervisor either (avoiding overhead). You can enable SRIOV on those IB cards, which practically virtualizes the PCI bus, and you can dedicate Physical and Virtual Functions of the virtualized HCAs as native, physical HW devices to your guests. See Raghuram's excellent post explaining SRIOV. SRIOV for IB is supported since LDoms 3.1.  This post is getting lengthier, so I will rename it to Part I, and continue it in a second post. 

    Read the article

  • SOA 11g Technology Adapters – ECID Propagation

    - by Greg Mally
    Overview Many SOA Suite 11g deployments include the use of the technology adapters for various activities including integration with FTP, database, and files to name a few. Although the integrations with these adapters are easy and feature rich, there can be some challenges from the operations perspective. One of these challenges is how to correlate a logical business transaction across SOA component instances. This correlation is typically accomplished via the execution context ID (ECID), but we lose the ECID correlation when the business transaction spans technologies like FTP, database, and files. A new feature has been introduced in the Oracle adapter JCA framework to allow the propagation of the ECID. This feature is available in the forthcoming SOA Suite 11.1.1.7 (PS6). The basic concept of propagating the ECID is to identify somewhere in the payload of the message where the ECID can be stored. Then two Binding Properties, relating to the location of the ECID in the message, are added to either the Exposed Service (left-hand side of composite) or External Reference (right-hand side of composite). This will give the JCA framework enough information to either extract the ECID from or add the ECID to the message. In the scenario of extracting the ECID from the message, the ECID will be used for the new component instance. Where to Put the ECID When trying to determine where to store the ECID in the message, you basically have two options: Add a new optional element to your message schema. Leverage an existing element that is not used in your schema. The best scenario is that you are able to add the optional element to your message since trying to find an unused element will prove difficult in most situations. The schema will be holding the ECID value which looks something like the following: 11d1def534ea1be0:7ae4cac3:13b4455735c:-8000-00000000000002dc Configuring Composite Services/References Now that you have identified where you want the ECID to be stored in the message, the JCA framework needs to have this information as well. The two pieces of information that the framework needs relates to the message schema: The namespace for the element in the message. The XPath to the element in the message. To better understand this, let's look at an example for the following database table: When an Exposed Service is created via the Database Adapter Wizard in the composite, the following schema is created: For this example, the two Binding Properties we add to the ReadRow service in the composite are: <!-- Properties for the binding to propagate the ECID from the database table --> <property name="jca.ecid.nslist" type="xs:string" many="false">  xmlns:ns1="http://xmlns.oracle.com/pcbpel/adapter/db/top/ReadRow"</property> <property name="jca.ecid.xpath" type="xs:string" many="false">  /ns1:EcidPropagationCollection/ns1:EcidPropagation/ns1:ecid</property> Notice that the property called jca.ecid.nslist contains the targetNamespace defined in the schema and the property called jca.ecid.xpath contains the XPath statement to the element. The XPath statement also contains the appropriate namespace prefix (ns1) which is defined in the jca.ecid.nslist property. When the Database Adapter service reads a row from the database, it will retrieve the ECID value from the payload and remove the element from the payload. When the component instance is created, it will be associated with the retrieved ECID and the payload contains everything except the ECID element/value. 
The only time the ECID is visible is when it is stored safely in the resource technology like the database, a file, or a queue. Simple Database/File/JMS Example This section contains a simplified example of how the ECID can propagate through a database table, a file, and a JMS queue. The composite for the example looks like the following: The flow of this example is as follows: Invoke database insert using the insertwithecidbpelprocess_client_ep Service. The InsertWithECIDBPELProcess adds a row to the database via the Database Adapter. The JCA Framework adds the ECID to the message prior to inserting. The ReadRow Service retrieves the record and the JCA Framework extracts the ECID from the message. The ECID element is removed from the message. An instance of ReadRowBPELProcess is created and it is associated with the retrieved ECID. The ReadRowBPELProcess now writes the record to the file system via the File Adapter. The JCA Framework adds the ECID to the message prior to writing the message to file. The ReadFile Service retrieves the record from the file system and the JCA Framework extracts the ECID from the message. The ECID element is removed from the message. An instance of ReadFileBPELProcess is created and it is associated with the retrieved ECID. The ReadFileBPELProcess now enqueues the message via the JMS Adapter. The JCA Framework adds the ECID to the message prior to enqueuing the message. The DequeueMessage Service retrieves the record and the JCA Framework extracts the ECID from the message. The ECID element is removed from the message. An instance of DequeueMessageBPELProcess is created and it is associated with the retrieved ECID. The logical flow ends. When viewing the Flow Trace in the Enterprise Manager, you will now see all the instances correlated via ECID: Please check back here when SOA Suite 11.1.1.7 is released for this example. With the example you can run it yourself and reinforce what has been shared in this blog via hands-on experience. One final note: the contents of this blog may be included in the official SOA Suite 11.1.1.7 documentation, but you will still need to come here to get the example.

    Read the article

  • Part 1 - Load Testing In The Cloud

    - by Tarun Arora
    Azure is fascinating, but even more fascinating is the marriage of Azure and TFS! Introduction Recently a client I worked for had 2 major business critical applications being delivered, with very little time budgeted for Performance testing, we immediately hit a bottleneck when the performance testing phase started, the in house infrastructure team could not support the hardware requirements in the short notice. It was suggested that the performance testing be performed on one of the QA environments which was a fraction of the production environment. This didn’t seem right, the team decided to turn to the cloud. The team took advantage of the elasticity offered by Azure, starting with a single test agent which was provisioned and ready for use with in 30 minutes the team scaled up to 17 test agents to perform a very comprehensive performance testing cycle. Issues were identified and resolved but the highlight was that the cost of running the ‘test rig’ proved to be less than if hosted on premise by the infrastructure team. Thank you for taking the time out to read this blog post, in the series of posts, I’ll try and cover the start to end of everything you need to know to use Azure to build your Test Rig in the cloud. But Why Azure? I have my own Data Centre… If the environment is provisioned in your own datacentre, - No matter what level of service agreement you may have with your infrastructure team there will be down time when the environment is patched - How fast can you scale up or down the environments (keeping the enterprise processes in mind) Administration, Cost, Flexibility and Scalability are the areas you would want to think around when taking the decision between your own Data Centre and Azure! How is Microsoft's Public Cloud Offering different from Amazon’s Public Cloud Offering? Microsoft's offering of the Cloud is a hybrid of Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) which distinguishes Microsoft's offering from other providers such as Amazon (Amazon only offers IaaS). PaaS – Platform as a Service IaaS – Infrastructure as a Service Fills the needs of those who want to build and run custom applications as services. Similar to traditional hosting, where a business will use the hosted environment as a logical extension of the on-premises datacentre. A service provider offers a pre-configured, virtualized application server environment to which applications can be deployed by the development staff. Since the service providers manage the hardware (patching, upgrades and so forth), as well as application server uptime, the involvement of IT pros is minimized. On-demand scalability combined with hardware and application server management relieves developers from infrastructure concerns and allows them to focus on building applications. The servers (physical and virtual) are rented on an as-needed basis, and the IT professionals who manage the infrastructure have full control of the software configuration. This kind of flexibility increases the complexity of the IT environment, as customer IT professionals need to maintain the servers as though they are on-premises. The maintenance activities may include patching and upgrades of the OS and the application server, load balancing, failover clustering of database servers, backup and restoration, and any other activities that mitigate the risks of hardware and software failures.   
The biggest advantage with PaaS is that you do not have to worry about maintaining the environment: you can focus all your time on solving the business problems with your solution rather than on maintaining the environment. If you decide to use a VM Role on Azure, you are asking for IaaS; more on this later. There is a nice blog post on the difference between SaaS, PaaS and IaaS. Now that we are convinced why we should be turning to the cloud, and to Azure in particular, let's discuss the test rig.

The Load Test Rig – Topology

Now the moment of truth. Of course, a big part of getting value from cloud computing is identifying the most suitable workloads to take to the cloud, so I've decided to build a load testing rig where the agents are running on Windows Azure. I'll talk you through the topology:

- User: The user kicks off the load test run from the developer workstation on premise. This passes the request to the Test Controller.
- Test Controller: The Test Controller is on premise, connected to the same domain as the developer workstation. As soon as the Test Controller receives the request, it makes use of the Windows Azure Connect service to orchestrate the test responsibilities across all the Test Agents. The Windows Azure Connect endpoint software must be active on all Azure instances and on the Controller machine as well. This allows IP connectivity between them and, given that the firewall is properly configured, allows the Controller to send workloads to the agents. In parallel, the Controller collects the performance data from the agents, using the traditional WMI mechanisms.
- Test Agents: The Test Agents are on the Windows Azure public cloud. As soon as the Test Controller issues instructions to the test agents, the test agents start executing the load tests. The HTTP requests are issued against the web server on premise, the results are captured by the test agents, and finally the results are passed back to the Controller.
- Servers: The web server and DB server are hosted on premise in the datacentre. This is usually the case with business-critical applications; you probably want to manage them yourself.

Recap and What's Next?

In this introduction to the series of blog posts on load testing in the cloud, I highlighted why creating a test rig in the cloud is a good idea, what advantages Windows Azure offers, and the test rig topology that I will be using. I would also like to mention that I stumbled upon a video on Azure in a nutshell; it's a great watch if you are new to Windows Azure. In the next post I intend to start setting up the load test environment and discuss pricing with respect to the test agent machine types that will be used in the test rig. Hope you enjoyed this post. If you have any recommendations on things that I should consider, or any questions or feedback, feel free to add them to this blog post. Remember to subscribe to http://feeds.feedburner.com/TarunArora. See you in Part II.

    Read the article

  • BizTalk – Routing failure on Delivery Notifications (BizTalk 2006 R2 to 2013)

    - by S.E.R.
    Originally posted on: http://geekswithblogs.net/SERivas/archive/2013/11/11/biztalk-routing-failure-on-delivery-notifications.aspx

This is a detailed explanation of something I posted a few months ago on Stack Overflow, concerning a weird behavior (a bug, really…) of delivery notifications in BizTalk.

Reminder: what are delivery notifications?

Mechanism: BizTalk has the ability to automatically publish positive acknowledgments (ACKs) when it has succeeded in transmitting a message, or negative acknowledgments (NACKs) in case of a transmission failure. Orchestrations can use delivery notifications to subscribe to those ACKs and NACKs in order to know whether a message sent on a one-way send port has been successfully transmitted. Delivery notifications can be "activated" in two ways:
- The most common and easy way is to set the Delivery Notification property of a logical send port (in the orchestration designer) to Transmitted.
- Another way is to set the BTS.AckRequired context property of the message to be sent to true.
NOTE: fundamentally, these methods are strictly equivalent, since setting Delivery Notification to Transmitted on the send port only tells BizTalk to set the BTS.AckRequired context property to true on the outgoing message.

Related context properties: ACKs and NACKs have a common set of promoted context properties:
- AckType: equals ACK when successful, NACK otherwise
- AckID: MessageID of the message concerned by the acknowledgment
- AckOwnerID: InstanceID of the instance associated with the acknowledgment
- AckSendPortID: ID of the send port
- AckSendPortName: name of the send port
- AckOutboundTransportLocation: URI of the send port
- AckReceivePortID: ID of the port the message came from
- AckReceivePortName: name of the port the message came from
- AckInboundTransportLocation: URI of the port the message came from

Detailed behavior: The way delivery notifications are handled by BizTalk is peculiar compared to the standard behavior of the Message Box: if no active subscription exists for the acknowledgment, it is simply discarded. The direct consequence of this is that there can be no routing failure for an acknowledgment, and an acknowledgment cannot be suspended. Moreover, when a message is sent to a send port where Delivery Notification = Transmitted, a correlation set is initialized and a correlation token is attached to the message (context property: CorrelationToken). This correlation token will also be attached to the acknowledgment, so when the acknowledgment is issued, it is automatically routed to the source orchestration. Finally, when a NACK is received by the source orchestration, a DeliveryFailureException is thrown, which can be caught in a Catch section.

Context of the problem: Consider this scenario:
- In an orchestration, delivery notifications are activated on a one-way send port.
- In case of a transmission failure, the messaging instance is suspended and the orchestration catches an exception (DeliveryFailureException).
- When the exception is caught, the orchestration does some logging and then terminates (thanks to a Terminate shape).
So that leaves only the suspended messaging instance, waiting to be resumed.

Symptoms: Once the problem that caused the transmission failure is solved, the messaging instance is resumed. Considering what was said in the reminder, we would expect the instance to complete, leaving no active or suspended instance.
Nevertheless, the result is that the messaging instance is once more suspended, this time because of a routing failure. The routing failure report shows the context properties attached to the suspended message (the screenshot is not included in this excerpt).

Explanation: Those properties clearly indicate that the message being suspended is an acknowledgment (an ACK in this case), which was published in the Message Box and was suspended because no subscribers were found. This makes sense, since the source orchestration was terminated before we resumed the messaging instance, so its subscription to the acknowledgments was no longer active when the ACK was published, which explains the routing failure. But this behavior is in direct contradiction with what was said earlier: an acknowledgment must be discarded when no subscriber is found and therefore should not be suspended.

Cause: It is indeed an outright bug, which appeared with SP1 of BizTalk 2006 R2 and has never been corrected since: not in the next four CUs, not in BizTalk 2009, not in 2010 and not even in 2013 – though I haven't tested CU1 and CU2 for this last edition, I bet there is nothing to be expected from those CUs (on this particular point).

Side effects: This bug can have pretty nasty side effects, because the behavior can be propagated to other ports due to routing mechanisms. For instance: you have configured the ESB Toolkit and have activated "Enable routing failure for failed messages". The result will be that the ESB Exception SQL send port will also try to publish ACKs or NACKs concerning its own messaging instances. In itself this is already messy, but remember that those acknowledgments will also have the source correlation token attached to them… See how far it goes? Well, actually there is more: in SQL send ports, transactions will be rolled back because of the routing failure (I guess it also happens with other adapters, like Oracle, but I haven't tested them). Again, think of what happens when the send port is the ESB Exception send port: your BizTalk box is going mad, but you have no idea, since no exception can be written to the exception database! All of this can be tricky to diagnose, I can tell you that…

Solution: There is no real solution, only a work-around, and it won't solve all of the problems and side effects. The idea is to create an orchestration which subscribes to all acknowledgments. That is to say:
- The message type of the incoming message will be XmlDocument.
- The subscription filter requires that the BTS.AckType property exists.
- The logical receive port will use direct binding.
By doing so, all acknowledgments will be consumed by an instance of this orchestration, thus avoiding the routing failure (the example orchestration screenshot is not included in this excerpt). In order not to pollute the HAT and the DTA database (after all, this orchestration is only meant to be a palliative for a faulty internal BizTalk mechanism, so there should be no trace of its execution), all tracking must be deactivated.

    Read the article

  • Migrating Core Data to new UIManagedDocument in iOS 5

    - by samerpaul
    I have an app that has been on the store since iOS 3.1, so there is a large install base out there that still uses Core Data loaded up in my AppDelegate. In the most recent set of updates I raised the minimum version to 4.3, but still kept the same way of loading the data. Recently I decided it was time to make the minimum version 5.1 (especially with 6 around the corner), so I wanted to start using the new fancy UIManagedDocument way of using Core Data. The issue with this, though, is that the old database file is still sitting in the app, so there is no automatic migration to the new document. You basically have to subclass UIManagedDocument with a new model class and override a couple of methods to do the migration for you. Here's a tutorial on what I did for my app TimeTag.

Step One: Add a new class file in Xcode and subclass "UIManagedDocument"

Go ahead and also add a method to get the managedObjectModel out of this class. It should look like:

@interface TimeTagModel : UIManagedDocument

- (NSManagedObjectModel *)managedObjectModel;

@end

Step Two: Writing the methods in the implementation file (.m)

I first added a shortcut method, applicationDocumentsDirectory, which returns the URL of the app's documents directory:

- (NSURL *)applicationDocumentsDirectory
{
    return [[[NSFileManager defaultManager] URLsForDirectory:NSDocumentDirectory inDomains:NSUserDomainMask] lastObject];
}

The next step was to load the managed object model file itself (the .momd file). In my project it's called "minimalTime":

- (NSManagedObjectModel *)managedObjectModel
{
    NSString *path = [[NSBundle mainBundle] pathForResource:@"minimalTime" ofType:@"momd"];
    NSURL *momURL = [NSURL fileURLWithPath:path];
    NSManagedObjectModel *managedObjectModel = [[NSManagedObjectModel alloc] initWithContentsOfURL:momURL];

    return managedObjectModel;
}

After that, I need to check for a legacy installation and migrate it to the new UIManagedDocument file instead. This is the overridden method:

- (BOOL)configurePersistentStoreCoordinatorForURL:(NSURL *)storeURL ofType:(NSString *)fileType modelConfiguration:(NSString *)configuration storeOptions:(NSDictionary *)storeOptions error:(NSError **)error
{
    // If a legacy store exists, copy it to the new location
    NSURL *legacyPersistentStoreURL = [[self applicationDocumentsDirectory] URLByAppendingPathComponent:@"minimalTime.sqlite"];

    NSFileManager *fileManager = [NSFileManager defaultManager];
    if ([fileManager fileExistsAtPath:legacyPersistentStoreURL.path])
    {
        NSLog(@"Old db exists");
        NSError *thisError = nil;
        [fileManager replaceItemAtURL:storeURL withItemAtURL:legacyPersistentStoreURL backupItemName:nil options:NSFileManagerItemReplacementUsingNewMetadataOnly resultingItemURL:nil error:&thisError];
    }

    return [super configurePersistentStoreCoordinatorForURL:storeURL ofType:fileType modelConfiguration:configuration storeOptions:storeOptions error:error];
}

Basically what's happening above is that it checks for the minimalTime.sqlite file inside the app's documents directory on the iOS device. If the file exists, it tells you so in the console and then tells the file manager to replace the storeURL (the method parameter) with the legacy URL. This gives your app access to all the existing data the user has generated (otherwise they would launch into a blank app, which would be disastrous). It returns YES if successful (by calling its [super] method).
Final step: Actually load the database

Due to how my app works, I actually have to load the database at launch (instead of shortly after, which would be ideal). I call a method named loadDatabase, which looks like this:

- (void)loadDatabase
{
    static dispatch_once_t onceToken;

    // Only do this once!
    dispatch_once(&onceToken, ^{
        // Get the URL
        // The minimalTimeDB name is just something I call it
        NSURL *url = [[self applicationDocumentsDirectory] URLByAppendingPathComponent:@"minimalTimeDB"];
        // Init the TimeTagModel (our custom class from above) with the URL
        self.timeTagDB = [[TimeTagModel alloc] initWithFileURL:url];

        // Set up the undo manager if it's nil
        if (self.timeTagDB.undoManager == nil) {
            NSUndoManager *undoManager = [[NSUndoManager alloc] init];
            [self.timeTagDB setUndoManager:undoManager];
        }

        // You have to actually check whether the file exists already (you can't just say "open it, and if it's not there, create it")
        if ([[NSFileManager defaultManager] fileExistsAtPath:[url path]]) {
            // If it does exist, try to open it, and if it doesn't open, let the user (or at least you) know!
            [self.timeTagDB openWithCompletionHandler:^(BOOL success) {
                if (!success) {
                    // Handle the error.
                    NSLog(@"Error opening up the database");
                }
                else {
                    NSLog(@"Opened the file--it already existed");
                    [self refreshData];
                }
            }];
        }
        else {
            // If it doesn't exist, you need to attempt to create it
            [self.timeTagDB saveToURL:url forSaveOperation:UIDocumentSaveForCreating completionHandler:^(BOOL success) {
                if (!success) {
                    // Handle the error.
                    NSLog(@"Error creating the database");
                }
                else {
                    NSLog(@"Created the file--it did not exist");
                    [self refreshData];
                }
            }];
        }
    });
}

If you're curious what refreshData looks like, it sends out an NSNotification that the database has been loaded:

- (void)refreshData
{
    NSNotification *refreshNotification = [NSNotification notificationWithName:kNotificationCenterRefreshAllDatabaseData object:self.timeTagDB.managedObjectContext userInfo:nil];
    [[NSNotificationCenter defaultCenter] postNotification:refreshNotification];
}

kNotificationCenterRefreshAllDatabaseData is just a constant I have defined elsewhere that keeps track of all the NSNotification names I use. I pass the managedObjectContext of the newly created file so that my view controllers can have access to it and start passing it around to one another. The reason we do this as a notification is that the open/create work runs in the background, so we can't know exactly when it finishes. Make sure you design your app for this! Have some kind of loading indicator, or make sure your user can't attempt to create a record before the database actually exists, because it will crash the app.
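As a usage sketch (not part of the original post), here is one way a view controller could pick up that notification and grab the managed object context it carries; the controller class name and the "TimeTag" entity fetch are hypothetical, and the constant is assumed to be declared elsewhere, as described above:

#import <UIKit/UIKit.h>
#import <CoreData/CoreData.h>

// Declared elsewhere in the app, as described in the post.
extern NSString * const kNotificationCenterRefreshAllDatabaseData;

// Hypothetical view controller that waits for the database to finish loading.
@interface TimeTagListViewController : UITableViewController
@end

@implementation TimeTagListViewController

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Listen for the notification posted by refreshData.
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(databaseDidLoad:)
                                                 name:kNotificationCenterRefreshAllDatabaseData
                                               object:nil];
}

- (void)databaseDidLoad:(NSNotification *)notification
{
    // refreshData passes the document's managedObjectContext as the notification object.
    NSManagedObjectContext *context = notification.object;

    // Hypothetical fetch: load all "TimeTag" entities now that the document is ready.
    NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"TimeTag"];
    NSError *error = nil;
    NSArray *results = [context executeFetchRequest:request error:&error];
    if (results == nil) {
        NSLog(@"Fetch failed: %@", error);
    }
    // ...reload the table view and hide any loading indicator here...
}

- (void)dealloc
{
    [[NSNotificationCenter defaultCenter] removeObserver:self];
}

@end

Under ARC the above is sufficient; pre-ARC code would also need to call [super dealloc] at the end of dealloc.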

    Read the article
