Search Results

Search found 4103 results on 165 pages for 'party mcfly'.


  • I can't upgrade to 12.10 Quantal Quetzal

    - by JamesTheAwesomeDude
    After hearing about some of the features in 12.10, I decided to upgrade my Ubuntu from 12.04 LTS to the newer 12.10. However, when I try to upgrade, I start the update manager like always and start the distro upgrade, just like I do every time there's a new version, but at the end of the "Setting new software channels" stage it fails, saying:

        Could not calculate the upgrade
        An unresolvable problem occurred while calculating the upgrade:
        E: Unable to correct problems, you have held broken packages.
        This can be caused by:
        * Upgrading to a pre-release version of Ubuntu
        * Running the current pre-release version of Ubuntu
        * Unofficial software packages not provided by Ubuntu
        If none of this applies, then please report this bug using the command
        'ubuntu-bug ubuntu-release-upgrader-core' in a terminal.

    What should I do? I understand that I have some sort of "broken" package on my system, but how do I identify and remove the problem? Note: at the start of the "Setting new software channels" stage, I was presented with this:

        Third party sources disabled
        Some third party entries in your sources.list were disabled. You can re-enable
        them after the upgrade with the 'software-properties' tool or your package manager.
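    A reasonable first diagnostic pass (a sketch, not from the original thread; package names will vary per system) is to ask apt and dpkg which packages are held or half-installed, let apt repair dependencies, and then retry the release upgrade:

        # List packages on hold - a common cause of "held broken packages"
        sudo apt-mark showhold          # on newer apt versions
        dpkg --get-selections | grep hold

        # Finish interrupted configuration and repair broken dependencies
        sudo dpkg --configure -a
        sudo apt-get -f install

        # Refresh package lists and retry the upgrade
        sudo apt-get update
        sudo do-release-upgrade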

    Read the article

  • Critical Patch Update For Oracle Fusion Middleware - CPU October 2012

    - by Daniel Mortimer
    The latest Critical Patch Update (CPU) has been released for Oracle products. Start your reading here: Critical Patch Updates, Security Alerts and Third Party Bulletin. This is the home page containing links to all "Critical Patch Updates" released to date, along with sections detailing Security Alerts, the Third Party Bulletin, Public Vulnerabilities Fixed, Policies, and Reporting Security Vulnerabilities. On this page you will find the link to the Oracle Critical Patch Update Advisory - October 2012. The advisory lists the support documents that cover patch availability for all Oracle products. From an Oracle Fusion Middleware perspective, you can cut to the chase by using the links below, which take you to the appropriate sections in Patch Set Update and Critical Patch Update October 2012 Availability Document [ID 1477727.1]:

    - Oracle Fusion Middleware 11g Release 2: 11.1.2.0
    - Oracle Fusion Middleware 11g Release 1: 11.1.1.4 (Portal, Forms, Reports and Discoverer), 11.1.1.5, 11.1.1.6
    - Oracle Application Server 10g Release 3: 10.1.3.5

    The #anchor links above should work in Firefox and IE provided you have already logged into My Oracle Support within the same browser session. For some reason, Chrome always takes you to the top of the document :-/ Tip: Error Correction Support for Oracle Identity Management 10g, version 10.1.4.x, ended in December 2011. For this reason, there is no section specific to that version. However, Error Correction Support remains in place, until the end of this year, for the Oracle Identity Management 10.1.4.x components Single Sign On (SSO) and Delegated Administration Services (OIDDAS), provided you are using them as part of a Single Sign-On solution (OID 11g + SSO / OIDDAS 10.1.4.3) for a Portal / Forms / Reports and Discoverer 11.1.1.x architecture. As such, there are security-related patches available for Fusion Middleware Single Sign On; you will find the patch numbers listed in the sections for 11.1.1.4, 11.1.1.5 and 11.1.1.6. And finally, if you hit any unexpected errors when applying the CPU patches, check out the known issues documented in these two support documents: Critical Patch Update October 2012 Oracle Fusion Middleware Known Issues (Doc ID 1455408.1) and Critical Patch Update October 2012 Database Known Issues (Doc ID 1477865.1).
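    As a rough illustration of the patching step itself (not part of the advisory; the home path and patch number below are placeholders), CPU patches for Fusion Middleware homes are generally applied with OPatch:

        # Paths and patch number are hypothetical - always follow the patch README
        export ORACLE_HOME=/u01/app/oracle/middleware/oracle_common
        cd /stage/patches/1234567
        $ORACLE_HOME/OPatch/opatch lsinventory   # verify the home and installed patches
        $ORACLE_HOME/OPatch/opatch apply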

    Read the article

  • Story of success: MySQL Enterprise Backup (MEB) was successfully integrated with IBM Tivoli Storage Manager (TSM) via System Backup to Tape (SBT) interface.

    - by user13334359
    Since version 3.6, MEB supports backups to tape through the SBT interface. The officially supported tool for such backups to tape is Oracle Secure Backup (OSB), but there are a lot of other storage managers, and MEB allows you to use them through the SBT interface. Since version 3.7 it also has the option --sbt-environment, which allows passing environment variables not needed by OSB to third-party managers. At the same time, MEB cannot guarantee it will work with all of them. This month we were contacted by a customer who wanted to use IBM Tivoli Storage Manager (TSM) with MEB. We could only tell them the same thing I wrote in the previous paragraph: this solution is supposed to work, but you have to be pioneers of this technology. And they agreed. They agreed to be the pioneers, and so the story begins. MEB requires the following options to be specified by those who want to connect it to an SBT interface:

    --sbt-database-name: a name which should be handed over to the SBT interface. This can be any name. The default, MySQL, works for most cases, so the user is not required to specify this option.

    --sbt-lib-path: path to the SBT library. For TSM this library comes with "Data Protection for Oracle", which, in its turn, interfaces with Oracle Recovery Manager (RMAN), which uses the SBT interface. So you need to install it even if you don't use Oracle.

    --sbt-environment: environment for the third-party manager. This option is not needed when you use OSB, but is almost always necessary for third-party SBT managers. TSM requires the variable TDPO_OPTFILE to be set and pointing to the TSM configuration file.

    --backup-image=sbt:<image name>: path to the image. The prefix "sbt:" indicates that the image should be sent through the SBT interface.

    So the full command in our case would look like:

        ./mysqlbackup --port=3307 --protocol=tcp --user=backup_user --password=foobar \
          --backup-image=sbt:my-first-backup --sbt-lib-path=/usr/lib/libobk.so \
          --sbt-environment="TDPO_OPTFILE=/path/to/my/tdpo.opt" --backup-dir=/path/to/my/dir backup-to-image

    And this command results in the following output log:

        MySQL Enterprise Backup version 3.7.1 [2012/02/16]
        Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved.
        INFO: Starting with following command line ...
         ./mysqlbackup --port=3307 --protocol=tcp --user=backup_user
                --password=foobar --backup-image=sbt:my-first-backup
                --sbt-lib-path=/usr/lib/libobk.so
                --sbt-environment="TDPO_OPTFILE=/path/to/my/tdpo.opt"
                --backup-dir=/path/to/my/dir backup-to-image
        sbt-environment: 'TDPO_OPTFILE=/path/to/my/tdpo.opt'
        INFO: Got some server configuration information from running server.
        IMPORTANT: Please check that mysqlbackup run completes successfully.
                   At the end of a successful 'backup-to-image' run mysqlbackup
                   prints "mysqlbackup completed OK!".
        --------------------------------------------------------------------
                             Server Repository Options:
        --------------------------------------------------------------------
          datadir                   = /path/to/data
          innodb_data_home_dir      = /path/to/data
          innodb_data_file_path     = ibdata1:2048M;ibdata2:2048M;ibdata3:64M:autoextend:max:2048M
          innodb_log_group_home_dir = /path/to/data
          innodb_log_files_in_group = 2
          innodb_log_file_size      = 268435456
        --------------------------------------------------------------------
                             Backup Config Options:
        --------------------------------------------------------------------
          datadir                   = /path/to/my/dir/datadir
          innodb_data_home_dir      = /path/to/my/dir/datadir
          innodb_data_file_path     = ibdata1:2048M;ibdata2:2048M;ibdata3:64M:autoextend:max:2048M
          innodb_log_group_home_dir = /path/to/my/dir/datadir
          innodb_log_files_in_group = 2
          innodb_log_file_size      = 268435456
        Backup Image Path = sbt:my-first-backup
        mysqlbackup: INFO: Unique generated backup id for this is 13297406400663200
        120220 08:54:00 mysqlbackup: INFO: meb_sbt_session_open: MMS is 'Data Protection for Oracle: version 5.5.1.0'
        120220 08:54:00 mysqlbackup: INFO: meb_sbt_session_open: MMS version '5.5.1.0'
        mysqlbackup: INFO: Uses posix_fadvise() for performance optimization.
        mysqlbackup: INFO: System tablespace file format is Antelope.
        mysqlbackup: INFO: Found checkpoint at lsn 31668381.
        mysqlbackup: INFO: Starting log scan from lsn 31668224.
        120220  8:54:00 mysqlbackup: INFO: Copying log...
        120220  8:54:00 mysqlbackup: INFO: Log copied, lsn 31668381.
                  We wait 1 second before starting copying the data files...
        120220  8:54:01 mysqlbackup: INFO: Copying /path/to/ibdata/ibdata1 (Antelope file format).
        mysqlbackup: Progress in MB: 200 400 600 800 1000 1200 1400 1600 1800 2000
        120220  8:55:30 mysqlbackup: INFO: Copying /path/to/ibdata/ibdata2 (Antelope file format).
        mysqlbackup: Progress in MB: 200 400 600 800 1000 1200 1400 1600 1800 2000
        120220  8:57:18 mysqlbackup: INFO: Copying /path/to/ibdata/ibdata3 (Antelope file format).
        mysqlbackup: INFO: Preparing to lock tables: Connected to mysqld server.
        120220 08:57:22 mysqlbackup: INFO: Starting to lock all the tables....
        120220 08:57:22 mysqlbackup: INFO: All tables are locked and flushed to disk
        mysqlbackup: INFO: Opening backup source directory '/path/to/data/'
        120220 08:57:22 mysqlbackup: INFO: Starting to backup all files in subdirectories of '/path/to/data/'
        mysqlbackup: INFO: Backing up the database directory 'mysql'
        mysqlbackup: INFO: Backing up the database directory 'test'
        mysqlbackup: INFO: Copying innodb data and logs during final stage ...
        mysqlbackup: INFO: A copied database page was modified at 31668381.
                  (This is the highest lsn found on page)
                  Scanned log up to lsn 31670396.
                  Was able to parse the log up to lsn 31670396.
                  Maximum page number for a log record 328
        120220 08:57:23 mysqlbackup: INFO: All tables unlocked
        mysqlbackup: INFO: All MySQL tables were locked for 0.000 seconds
        120220 08:59:01 mysqlbackup: INFO: meb_sbt_backup_close: blocks: 4162  size: 1048576  bytes: 4363985063
        120220  8:59:01 mysqlbackup: INFO: Full backup completed!
        mysqlbackup: INFO: MySQL binlog position: filename bin_mysql.001453, position 2105
        mysqlbackup: WARNING: backup-image already closed
        mysqlbackup: INFO: Backup image created successfully.
                   Image Path: 'sbt:my-first-backup'
        -------------------------------------------------------------
           Parameters Summary
        -------------------------------------------------------------
           Start LSN                  : 31668224
           End LSN                    : 31670396
        -------------------------------------------------------------
        mysqlbackup completed OK!

    Backup successfully completed. To restore it, you use the same commands as for any other MEB image, but you need to provide the sbt-* options as well:

        $ ./mysqlbackup --backup-image=sbt:my-first-backup --sbt-lib-path=/usr/lib/libobk.so \
          --sbt-environment="TDPO_OPTFILE=/path/to/my/tdpo.opt" --backup-dir=/path/to/my/dir image-to-backup-dir

    Then apply the log as usual:

        $ ./mysqlbackup --backup-dir=/path/to/my/dir apply-log

    Then stop mysqld and finally copy back:

        $ ./mysqlbackup --defaults-file=path/to/my.cnf --backup-dir=/path/to/my/dir copy-back

    Disclaimer: this is only the story of one success, which may be useful for someone else. MEB is not regularly tested with, and is not guaranteed to work with, IBM TSM or any other third-party storage manager.

    Read the article

  • JavaOne 2012 Day 1

    - by Geertjan
    Day 1, Sunday, started the night before for those attending the NetBeans Party at Johnny Foley's. Invitations had been sent out prior to the party to all speakers for NetBeans Day, as well as speakers in JavaOne sessions where NetBeans would be used. That turned out to be around 40 people, who hung out until quite late, with snacks and drinks. The next day, most NetBeans Day sessions had completely packed rooms, which means there were around 300 people! Panel discussions around central themes in the NetBeans ecosystem (Java EE, JavaFX, and the NetBeans Platform) were held, which put a whole bunch of people up on stage throughout the day, such as the group of speakers in the Java EE session: from left to right, Sean Comerford from ESPN.com; John Yeary, the Java EE panel moderator and JUG lead from Greenville; Cagatay Civici, the PrimeFaces lead developer; Glenn Holmer, long-time NetBeans enthusiast (more on him below) from the Weyco Group; and NetBeans/Java EE book author David Heffelfinger. There were panels just like that for JavaFX and the NetBeans Platform too, with very interesting and dynamic talks, such as one by JavaFX book authors Gail and Paul Anderson, who showed off a brilliant JavaFX/NetBeans Platform mashup. NetBeans Day ended with a good discussion about how to get involved in the NetBeans community, wrapping up with a ceremony featuring two very special NetBeans community awards. Then everyone caught buses to the Masonic Auditorium, where 4 hours of keynotes took place. The 4 hours ended with a very well received HTML5/NetBeans demo, showing off NetBeans IDE 7.3 features, by NetBeans director John Ceccarelli. I also liked a slide during an earlier keynote session by Oracle VP Hasan Rizvi: there was really a lot of love for NetBeans during the JavaOne keynote sessions, and I don't remember hearing any other IDE mentioned in any way at all. Next there was the Duke's Choice Award ceremony, outside the Hilton in a cool lounge area, where, among others, Timon and Angelo from the NetBeans Platform community received their awards for AgroSense and MICE. In between all of the above, I met very many friends from previous conferences, as well as several new ones. It was clearly a great start to the conference. Looking forward to what the rest of the week will bring!

    Read the article

  • Opportunity Nokia's

    - by Andrew Clarke
    Nokia’s alliance with Microsoft is likely to be good news for anyone using Microsoft technologies, and particularly for .NET developers. Before the announcement, the future wasn’t looking so bright for the ‘mobile’ version of Windows, Windows Phone. Microsoft currently has only 3.1% of the Smartphone market, even though it has been involved in it for longer than its main rivals. Windows Phone has now got the basics right, but that is hardly sufficient by itself to change its predicament significantly. With Nokia's help, it is possible. Despite the promise of multi-tasking for third party apps, integration with Microsoft platforms such as Xbox and Office, direct integration of Twitter support, and the introduction of IE 9 “later this year”, there have been frustratingly few signs of urgency on Microsoft’s part in improving the Windows Phone  product. Until this happens, there seems little prospect of reward for third-party developers brave enough to support the platform with applications. This is puzzling when one sees how well SQL Server and Microsoft’s other server technologies have thrived in recent years, under good leadership from a management that understands the technology. The same just hasn’t been true for some of the consumer products. In consequence, iPads and Android tablets have already exposed diehard Windows users, for the first time, to an alternative GUI for consumer Tablet PCs, and the comparisons aren’t always in Windows’ favour. Nokia’s problem is obvious: Android’s meteoric rise. Android now has 33% of the worldwide market for smartphones, while the market share of Nokia’s Symbian has dropped from 44% to 31%. As details of the agreement emerge, it would seem that Nokia will bring a great deal of expertise, such as imaging and Nokia Maps, to Windows Phone that should make it more competitive. It is wrong to assume that Nokia’s decline will continue: the shock of Android’s sudden rise could be enough to sting them back to their previous form, and they have Microsoft’s huge resources and marketing clout to help them. For the sake of the whole Windows stack, I really hope the alliance succeeds.

    Read the article

  • Java Spotlight Episode 102: Freescale on Embedded Java and Java Embedded @ JavaOne

    - by Roger Brinkley
    An interview with Michael O'Donnell of Freescale on Embedded Java and Java Embedded @ JavaOne. Part of this podcast was recorded live at the JavaOne 2012 GlassFish Party at the Thirsty Bear. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.

    Show Notes

    News: Oracle Java ME Embedded 3.2; Java Embedded Server 7.0

    Events: Oct 3-4, Java Embedded @ JavaOne, San Francisco; Oct 15-17, JAX London; Oct 30-Nov 1, Arm TechCon, Santa Clara; Oct 22-23, Freescale Technology Forum - Japan, Tokyo; Oct 31, JFall, Netherlands; Nov 2-3, JMagreb, Morocco; Nov 13-17, Devoxx, Belgium

    Feature Interview: Freescale is the global leader in embedded processing solutions, advancing the automotive, consumer, industrial and networking markets. From microprocessors and microcontrollers to sensors, analog ICs and connectivity, their technologies are the foundation of the innovations that make our world greener, safer, healthier and more connected. Michael O'Donnell is the Director of Software Ecosystem Alliances. The upcoming Freescale Technology Forum - Japan in Tokyo is an excellent way for developers to learn more about Freescale and Java.

    What's Cool: GlassFish Party - 6th year; Geek Bike Ride

    Read the article

  • Critical Patch Update for Oracle Fusion Middleware - CPU October 2013

    - by Daniel Mortimer
    The latest Critical Patch Update (CPU) has been released for Oracle products. Start your reading here: Critical Patch Updates, Security Alerts and Third Party Bulletin. This is the home page containing links to all "Critical Patch Updates" released to date, along with sections detailing Security Alerts, the Third Party Bulletin, Public Vulnerabilities Fixed, Policies, and Reporting Security Vulnerabilities. On this page you will find the link to the Oracle Critical Patch Update Advisory - October 2013. The advisory lists the support documents that cover patch availability for all Oracle products. For Oracle Fusion Middleware, go to: Patch Set Update and Critical Patch Update October 2013 Availability Document [ID 1571391.1]. If you hit any unexpected errors when applying the CPU patches, check out the known issues documented in these two support documents: Critical Patch Update October 2013 Oracle Fusion Middleware Known Issues [ID 1571369.1] and Critical Patch Update October 2013 Database Known Issues [ID 1571653.1]. And lastly, for an informal summary of what the Critical Patch Update fixes, check out the blog post by the "Oracle Software Security Assurance" team: October 2013 Critical Patch Update Released.

    Read the article

  • A Panorama of JavaOne Latin America

    - by reza_rahman
    As you know, JavaOne Latin America 2012 was held at the Transamerica Expo Center in Sao Paulo, Brazil, on December 4-6. It was a resounding success, with a great vibe, excellent technical content and numerous world-class speakers, both local and international. Various folks like Tori Wieldt, Steve Chin, Arun Gupta, Bruno Borges and myself looked at the conference through slightly different colored lenses, and it's interesting to put them all together in a panoramic collage. Tori wrote about the Sao Paulo Geek Bike Ride held the Saturday before the conference here (enjoy the photos and video). She also discusses the keynotes in great detail here. Steve looked at it from the viewpoint of someone instrumental in putting the event together. Read his thoughts here (he has more geek bike ride photos as well as material for his JavaFX/HTML5 talk). Arun had a more holistic view of the conference. He covers the geek bike ride, the GlassFish party (organized by Bruno Borges), his Java EE talks, and more. Check out the cool photos as well as the technical material. Bruno provides the critical local perspective in his 7 reasons you had to be at JavaOne Latin America 2012. He discusses the OTN Lounge, the hands-on lab, the Java community keynote, Java EE technical sessions and, of course, the GlassFish party! I covered the GlassFish booth, the lab and my technical sessions (as well as Sao Paulo's lively metal underground) here.

    Read the article

  • ASMLib

    - by wcoekaer
    Oracle ASMLib on Linux has been a topic of discussion a number of times since it was released way back in 2004. There is a lot of confusion around it and certainly a lot of misinformation out there for no good reason. Let me try to give a bit of history around Oracle ASMLib. Oracle ASMLib was introduced at the time Oracle released Oracle Database 10g R1. 10gR1 introduced a very cool and important new feature called Oracle ASM (Automatic Storage Management). A very simplistic description would be that this is a very sophisticated volume manager for Oracle data. Give your devices directly to the ASM instance and we manage the storage for you: clustered, highly available, redundant, performant, etc, etc... We recommend using Oracle ASM for all database deployments, single instance or clustered (RAC). The ASM instance manages the storage, and every Oracle server process opens and operates on the storage devices like it would open and operate on regular datafiles or raw devices. So by default, from 10gR1 up to today, we do not interact differently with ASM-managed block devices than we did before with a datafile being mapped to a raw device. All of this is without ASMLib, so ignore that one for now. Standard Oracle on any platform that we support (Linux, Windows, Solaris, AIX, ...) does it the exact same way. You start an ASM instance, it handles storage management, and all the database instances use and open that storage and read/write from/to it. There are no extra pieces of software needed, including on Linux. ASM is fully functional and self-contained without any other components.

    In order for the admin to provide a raw device to ASM or to the database, it has to have persistent device naming. If you booted up a server where a raw disk was named /dev/sdf and you gave it to ASM (or even just created a tablespace without ASM on that device with datafile '/dev/sdf'), and the next time you boot up that device is now /dev/sdg, you end up with an error. Just like you can't just change datafile names, you can't change device filenames without telling the database, or ASM. Persistent device naming on Linux, especially back in those days, was, to say it bluntly, a nightmare. In fact there were a number of issues (dating back to 2004):

    - Linux async IO wasn't pretty
    - persistent device naming, including permissions (devices had to be owned by oracle and the dba group), was very, very difficult to manage
    - system resource usage in terms of open file descriptors

    So given the above, we tried to find a way to make this easier on the admins, in many ways similar to why we started working on OCFS a few years earlier: how can we make life easier for the admins on Linux. A feature of Oracle ASM is the ability for third parties to write an extension using what's called ASMLib. It is possible for any third-party OS or storage vendor to write a library using a specific Oracle-defined interface that gets used by the ASM instance and by the database instance when available. This interface offered 2 components:

    - an IO interface - allow any IO to the devices to go through ASMLib
    - device discovery - implement an external way of discovering and labeling devices to provide to ASM and the Oracle database instance

    This is similar to a library that a number of companies have implemented over many years called libODM (Oracle Disk Manager).
    ODM was specified many years before we introduced ASM and allowed third-party vendors to implement their own IO routines so that the database would use this library, if installed, and make use of the library's open/read/write/close routines instead of the standard OS interfaces. PolyServe back in the day used this to optimize their storage solution; Veritas used (and I believe still uses) this for their filesystem. It basically allowed, in particular, filesystem vendors to write libraries that could optimize access to their storage or filesystem. So ASMLib was not something new; it was basically based on the same model. You have libodm for just database access, you have libasm for ASM/database access.

    Since this library interface existed, we decided to do a reference implementation on Linux. We wrote an ASMLib for Linux that could be used on any Linux platform, and other vendors could see how this worked and potentially implement their own solution. As I mentioned earlier, ASMLib and ODMLib are libraries for third-party extensions. ASMLib for Linux, since it was a reference implementation, implemented both interfaces: the storage discovery part and the IO part. There are 2 components:

    - Oracle ASMLib - the userspace library with config tools (a shared object and some scripts)
    - oracleasm.ko - a kernel module that implements the ASM device for /dev/oracleasm/*

    The userspace library is a binary-only module since it links with and contains Oracle header files, but it is generic; we only have one ASM library for the various Linux platforms. This library is opened by Oracle ASM and by Oracle database processes, and it interacts with the OS through the ASM device (/dev/asm). It can install on Oracle Linux, on SuSE SLES, on Red Hat RHEL... The library itself doesn't actually care much about the OS version; the kernel module and device do. The support tools are simple scripts that allow the admin to label devices and scan for disks and devices. This way you can create an ASM disk label foo on what is currently /dev/sdf... So if /dev/sdf disappears and next time comes up as /dev/sdg, we just scan for the label foo, discover it as /dev/sdg, and life goes on without any worry. Also, when the database needs access to the device, we don't have to worry about file permissions or anything; it is taken care of. So it's a convenience thing (a sketch of the commands follows below).

    The kernel module oracleasm.ko is a Linux kernel module/device driver. It implements a device /dev/oracleasm/*, and any and all IO goes through ASMLib - /dev/oracleasm. This kernel module is obviously a very specific Oracle-related device driver, but it was released under the GPL v2, so anyone could easily build it for their Linux distribution kernels. Advantages of using ASMLib:

    - A good async IO interface for the database; the entire IO interface is based on an optimal async model for performance
    - A single file descriptor per Oracle process, not one per device or datafile per process, reducing the overhead of open filehandles
    - Device scanning and labeling built in, so you do not have to worry about messing with udev or devlabel, permissions or the like, which can be very complex and error prone

    Just like with OCFS and OCFS2, each kernel version (major or minor) has to get a new version of the device drivers. We started out building the oracleasm kernel module rpms for many distributions: SLES (in fact, in the early days, even for this thing called United Linux) and RHEL.
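    As an aside, the labeling workflow described above boils down to a handful of commands from the ASMLib support tools. A rough sketch (the device name and the label FOO are illustrative, and paths may differ per distribution):

        # One-time setup: configure ownership/permissions and load the kernel module
        sudo /usr/sbin/oracleasm configure -i
        sudo /usr/sbin/oracleasm init

        # Label a partition; ASM discovers it by label, not by its /dev name
        sudo /usr/sbin/oracleasm createdisk FOO /dev/sdf1

        # After a reboot where /dev/sdf may have become /dev/sdg
        sudo /usr/sbin/oracleasm scandisks
        sudo /usr/sbin/oracleasm listdisks    # still shows FOO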
    The driver didn't make sense to push into upstream Linux because it's unique and specific to the Oracle database. As it takes a huge effort in terms of build infrastructure, QA and release management to build kernel modules for every architecture, every Linux distribution and every major and minor version, we worked with the vendors to get them to add this tiny kernel module (a ~60K source code file) to their infrastructure. The folks at SuSE understood this was good for them, their customers and us, and added it to SLES. So every build coming from SuSE for SLES contains the oracleasm.ko module. We weren't as successful with other vendors, so for quite some time we continued to build it for RHEL and, of course, as we introduced Oracle Linux at the end of 2006, also for Oracle Linux. With Oracle Linux it became easy for us because we just added the code to our build system, and as we churned out Oracle Linux kernels, whether for a public release or for customers that needed a one-off fix where they also used ASMLib, we didn't have to do any extra work; it was all nicely integrated.

    With the introduction of Oracle Linux's Unbreakable Enterprise Kernel (UEK) and our interest in being able to exploit ASMLib more, we started working on a very exciting project called Data Integrity. Oracle (Martin Petersen in particular) worked for many years with the T10 standards committee and storage vendors and implemented Linux kernel support for DIF/DIX, data protection in the Linux kernel (note to those that wonder: yes, it's all in mainline Linux and under the GPL). This basically gave us all the features in the Linux kernel to checksum a data block and send it to the storage adapter, which can then validate that block and checksum in firmware before it sends it over the wire to the storage array, which can then do another checksum, and on to the actual disk, which does a final validation before writing the block to the physical media. What was missing was the ability for a userspace application (read: the Oracle RDBMS) to write a block which then has checksumming and validation all the way down to the disk: application to disk. Because we have ASMLib we had an entry into the Linux kernel, and Martin added support in ASMLib (kernel driver + userspace) for this functionality. Now, this is all based on relatively current Linux kernels; the oracleasm kernel module depends on the main kernel having support for it. Thanks to UEK, and our ability to ship a more modern, current version of the Linux kernel, we were able to introduce this feature into ASMLib for Linux from Oracle. This, combined with the fact that we build the ASM kernel module when we build every single UEK kernel, allowed us to continue improving ASMLib and providing it to our customers.

    So today, we (Oracle) provide Oracle ASMLib for Oracle Linux, and in particular on the Unbreakable Enterprise Kernel. We did the build/testing/delivery of ASMLib for RHEL until RHEL5, but as of RHEL6 decided that it was too much effort for us to also maintain all the build and test environments for RHEL; we did not have the ability to use the latest kernel features to introduce the Data Integrity features, and we didn't want to end up with multiple versions of ASMLib as maintained by us. SuSE SLES still builds and ships the oracleasm module and they do all the work, and Red Hat is certainly welcome to do the same. They don't have to rebuild the userspace library; it's really about the kernel module.
    And finally, to re-iterate a few important things:

    - Oracle ASM does not in any way require ASMLib to function completely. ASMLib is a small set of extensions, in particular to make device management easier; there are no extra features exposed through Oracle ASM with ASMLib enabled or disabled. Customers often confuse ASMLib with ASM. Again: ASM exists on every Oracle-supported OS and on every supported Linux OS (SLES, RHEL, OL) without ASMLib.
    - The Oracle ASMLib userspace library is available from OTN, and the kernel module is shipped along with OL/UEK for every build, and by SuSE for SLES for every one of their builds.
    - The ASMLib kernel module was built by us for RHEL4 and RHEL5, but we do not build it for RHEL6, nor for the OL6 RHCK kernel - only for UEK.
    - ASMLib for Linux is/was a reference implementation for any third-party vendor to be able to offer, if they want to, their own version for their own OS or storage.
    - ASMLib as provided by Oracle for Linux continues to be enhanced and evolve, and for the kernel module we use UEK as the base OS kernel.

    Hope this helps.

    Read the article

  • Recommended Method to Watch Amazon Prime using Ubuntu 14.04 LTS

    - by Kurt Sanger
    I realize that HAL is no longer in the Ubuntu Software Center for Ubuntu 14.04 and is only available from a third party at this time. But I would like to know what Ubuntu's plans are for integrating DRM into Linux. Especially with Amazon's integration into the search tool, one would hope that they would make it easier for Amazon Prime customers to watch Instant Videos. Is the repository for getting HAL for 13.10 safe for use? What will it break if I install it onto 14.04? Or do we need to find another OS that has DRM built into it? If HAL is okay to add to the OS using a third-party repo, then why doesn't the Ubuntu Software Center support it too? I imagine that Amazon's contract with the video copyright holders requires that they have some protection on electronically distributed media. I also imagine that getting Amazon to change is much harder than getting a bunch of software engineers to fix Ubuntu. Unless they don't want to. At which point Ubuntu isn't really a complete OS. Very disappointing. In general, the ease of use of Ubuntu, the Software Center, and the large variety of applications was alluring. But breaking DRM playback wasn't a great idea. Can't wait to see what fails in our next update. Please tell us that there is a plan that is going to work in our future.
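    For reference, the third-party workaround being alluded to usually looks like the sketch below: add an archive that still packages HAL, then install it. The PPA name here is a placeholder, not a recommendation; vet any third-party repository before adding it:

        sudo add-apt-repository ppa:some-maintainer/hal   # hypothetical PPA
        sudo apt-get update
        sudo apt-get install hal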

    Read the article

  • Referrer Strings

    - by WernerCD
    I'm not exactly sure how to ask this, but here I go. I work for a company that provides IT services. Our main customer has a website developed by WebCompany.com, and I act as a maintainer/middle-man for this website for our customer. On the front page there is a drop-down menu with links. One link goes to a third-party website, ThirdParty.com, which automatically logs you in based on your IP or your referrer string. So the flow is: homepage -> ThirdParty.com -> logged in via IP or referrer string. What's happening is that Website.com is passing along inconsistent referrer strings. I've asked them to look into it and they say it's not their fault; referrer strings are handled by the browser. ThirdParty.com is actually nice enough to list the IP and referrer string on the front page for users who are not logged in. So... how can I trace the referrer string so that I can figure out why http://Website.com's front-page link to ThirdParty.com shows a referrer of "None" or "Thirdparty.com" instead of "ThirdParty.com"? Their website is built in PHP. Can you force all links on a website to list the referrer as "http://website.com/" instead of "http://website.com/portal/1" or "http://website.com/page3"?
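    One browser-side lever worth knowing about (a sketch; this depends on browser support for referrer policies, not on anything PHP can enforce server-side) is a referrer policy meta tag. With the origin policy, outgoing links send only the site root as the referrer, which is the "http://website.com/" behaviour asked about:

        <!-- In the <head> of every page: supporting browsers send only the
             origin, e.g. http://website.com/, as the Referer for outgoing links -->
        <meta name="referrer" content="origin">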

    Read the article

  • What do YOU want to see in a SharePoint jQuery Session?

    - by Mark Rackley
    Hey party people. So, as you have probably realized by now, I've been using quite a bit of jQuery with SharePoint. It's pretty amazing what you can actually accomplish with a little stubbornness and some guidance from the gurus. Well, it looks like I'll be putting together a SharePoint jQuery session that I will be presenting at a few conferences. This is such a big and broad topic I could speak on it for hours! So, I need YOUR assistance to help me narrow down what I'll be focusing on. Some ideas I have are:

    - How to even get started: how to set up SharePoint to work with jQuery
    - What third-party libraries exist out there that integrate well with SharePoint
    - How to interact with default SharePoint forms using jQuery (cascading dropdowns, disabling fields, etc.)
    - What SPServices is and how you can use it (see the sketch below)
    - When you should NOT use jQuery

    What do YOU want to see though? This session is for YOU guys, not for me. Please take a moment to leave a comment below and let me know what you would like to see and learn. Thanks, and I look forward to seeing you in my sessions!! Mark
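    As a taste of the SPServices item above, here is a minimal sketch (the list and field names are made up) of reading list items from SharePoint's Lists web service with jQuery:

        // Assumes jQuery and the SPServices plugin are loaded on the page.
        $().SPServices({
          operation: "GetListItems",
          listName: "Tasks",   // hypothetical list
          CAMLViewFields: "<ViewFields><FieldRef Name='Title' /></ViewFields>",
          completefunc: function (xData, Status) {
            // Each item comes back as a z:row element in the SOAP response
            $(xData.responseXML).SPFilterNode("z:row").each(function () {
              console.log($(this).attr("ows_Title"));
            });
          }
        });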

    Read the article

  • Code execution times out occasionally

    - by Athul k Surendran
    I am working on an e-commerce website. There is a case where I need to fetch all the data in the database through a third-party API and send it to an indexing engine. This third-party API has many functions, like getproducts, getproductprice, etc., and each of those functions returns data in XML format. From there I take charge: I make various API calls, transform the XML data with XSLT, and write the result to a CSV file. This file is then uploaded to the indexing engine. Right now I have details of 8000 products to feed the engine; the process almost always takes about 15 minutes to complete, and sometimes it fails. I can't find a better solution for this. I am thinking about handling the XML data in C# itself and getting rid of XSLT. As I understand it, XSLT is far slower than C#. Is that a good idea? Or what else can I do to solve this issue?
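    If the XSLT step turns out to be the bottleneck, one alternative worth sketching (element names and the surrounding API call are placeholders for whatever the third-party API actually returns; CSV escaping is omitted) is to stream each XML response with XmlReader and append CSV rows directly, so no full document or stylesheet is ever materialized:

        using System.IO;
        using System.Xml;
        using System.Xml.Linq;

        static class ProductCsvExporter
        {
            // xml: one API response, e.g. from a hypothetical getproducts call.
            public static void AppendProducts(string xml, TextWriter csv)
            {
                using (XmlReader reader = XmlReader.Create(new StringReader(xml)))
                {
                    reader.MoveToContent();
                    while (!reader.EOF)
                    {
                        if (reader.NodeType == XmlNodeType.Element && reader.Name == "product")
                        {
                            // Materialize just this <product> element, then continue scanning
                            var product = (XElement)XNode.ReadFrom(reader);
                            csv.WriteLine("{0},{1}",
                                          (string)product.Element("id"),
                                          (string)product.Element("price"));
                        }
                        else
                        {
                            reader.Read();
                        }
                    }
                }
            }
        }

    That said, with 8000 products the 15 minutes may well be dominated by the API round-trips rather than the transform, so timing the fetch and transform phases separately before rewriting anything would be prudent.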

    Read the article

  • JavaOne Countdown, Are you ready?

    - by Angela Caicedo
    This is a great time of the year! Not only does the weather start cooling down a bit, but it's time to get ready for JavaOne 2012. It feels so long since my last JavaOne (last year I missed it because I was on mom duty), so this year I couldn't be happier to be this close to the action again. Have you ever been to JavaOne? There are a million great reasons to love JavaOne, and the most important for me is the atmosphere of the conference: the Java community is there, and Java is in the air! This year we have more than 450 sessions, and there are HOLs (hands-on labs) to get your hands dirty with code. In addition, there will be very cool demos, an exhibition hall, and a DEMOground. The whole time, you will have the opportunity to interact with the speakers, discuss topics and concerns, and even have a drink! Oh yes, I almost forgot: there will be lots of fun even apart from the technology! For example, there will be a Geek Bike Ride, a Thirsty Bear party, and the Appreciation Party with Pearl Jam and Kings of Leon. How can this get any better! So, are you ready yet? Have you registered? If not, just follow this "Register for JavaOne" link and we'll see you there! P.S. Little known fact: if you are a student you can get your pass for free!!!

    Read the article

  • Static pages for large photo album

    - by Phil P
    I'm looking for advice on software for managing a largish photo album for a website. 2000+ pictures, one-time drop (probably). I normally use MarginalHack's album, which does what I want: pre-generate thumbnails and HTML for the pictures, so I can serve without needing a dynamic run-time, so there's less attack surface to worry about. However, it doesn't handle pagination or the like, so it's unwieldy for this case. This is a one-time drop for pictures from a wedding, with a shared usercode/password for distribution to the guests; I don't wish to put the pictures in a third-party hosting environment. I don't wish to use PHP, simply because that's another run-time to worry about, I might relent and use something dynamic if it's Python or Perl based (as I can maintain things written in those). I currently have: Apache serving static files, Album-generated, some sub-directories to divide up the content to be a little more manageable. Something like Album but with pagination already handled would be great, but I'm willing to have something a little more dynamic, if it lets people comment or caption and store the extra data in something like an sqlite DB. I'd want something light-weight, not a full-blown CMS with security updates every three months. I don't want to upload pictures of other peoples' children into a third-party free service where I don't know what the revenue model is. (For my site: revenue is none, costs out of pocket). Existing server hosting is *nix, Apache, some WSGI. Client-side I have MacOS. Any advice?
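    For what it's worth, the pre-generation approach described above extends to pagination with very little code. A minimal sketch in Python (a hypothetical script, assuming Pillow is installed and the originals are copied or symlinked into the output directory alongside the generated pages):

        import math, os
        from PIL import Image  # Pillow

        def build(src="originals", out="site", per_page=50):
            os.makedirs(os.path.join(out, "thumbs"), exist_ok=True)
            photos = sorted(f for f in os.listdir(src)
                            if f.lower().endswith((".jpg", ".jpeg", ".png")))
            for name in photos:
                im = Image.open(os.path.join(src, name))
                im.thumbnail((300, 300))          # resize in place, keeping aspect ratio
                im.save(os.path.join(out, "thumbs", name))
            pages = int(math.ceil(len(photos) / float(per_page)))
            nav = " ".join('<a href="page%d.html">%d</a>' % (i, i + 1)
                           for i in range(pages))
            for p in range(pages):
                chunk = photos[p * per_page:(p + 1) * per_page]
                body = "\n".join('<a href="%s"><img src="thumbs/%s"></a>' % (n, n)
                                 for n in chunk)
                with open(os.path.join(out, "page%d.html" % p), "w") as f:
                    f.write("<html><body>%s<hr>%s</body></html>" % (nav, body))

    The output is entirely static, so the shared usercode/password can stay in the web server's own auth layer (e.g. an htpasswd file) and there is no run-time to patch.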

    Read the article

  • How to choose a server side language / framework

    - by pllee
    I am trying to come up with a list/ranking system for determining which server language to choose for a particular website. Assume that familiarity with a certain language is not important and the implementation can be done in any language. Here are some things that might be important, but I am not sure how to rank them:

    - Maintainability.
    - Libraries. For example, Memcached and NoSQL support right out of the box would be a really nice addition to a particular framework.
    - 3rd-party SDKs. For example, if I need PayPal on my site, they openly provide SDKs for all scenarios in Java, PHP and .NET. If I choose Django I would have to rely on 3rd-party libraries that don't support everything and are not officially maintained. Would that be a dealbreaker for Django?
    - Performance. This one is tricky to put on a generic list because it can be a deal breaker, but for many websites performance will not be an issue that the language/framework is responsible for.
    - Cost (hosting, open source).

    edit - Any reason for the votes to close? I didn't see any duplicates mentioned and the question should not drum up a flame war.

    Read the article

  • How can I test linkable/executable files that require re-hosting or retargeting?

    - by hagubear
    Due to data protection, I cannot discuss fine details of the work itself, so apologies. PROBLEM CASE: Sometimes my software projects require merging/integration with third-party (customer or other suppliers') software. This software often arrives as linkable executables or object code (which requires that my source code is retargeted and linked with it). When I get the executables or object code, I cannot validate their operation fully without integrating them with my system. My initial idea is that executables are not meant to be unit tested; they are meant to be linked with another system. But what is the guarantee that post-linkage and integration behaviour will be okay? There is also insufficient documentation available (from the customer) to indicate how to go about integrating the executables or object files. I know this is a philosophical question, but apparently not enough research could be found at this moment to conclude a solution. I was hoping that people could help me go in the right direction by suggesting approaches. To start, I have found out that avionics OEM software is often rehosted and retargeted by third parties, e.g. simulator makers. I wonder how they test them. Surely, the source code will not be supplied due to IPR regulations. UPDATE: I have received reasonable and very useful suggestions regarding this area. My current struggle has shifted to testing 3rd-party OBJECT code that needs to be linked with my own source code (retargeted) on my host machine. How can I even test object code? Surely, I need to link it first to even think about doing anything. Is it the post-link behaviour that needs to be determined and scripted (using Perl, Tcl, etc.) so that inputs and outputs can be verified? No clue!! :( thanks,
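    One common shape for the post-link verification being discussed is a small harness that links against the vendor's object file and exercises its documented entry points with agreed input/output pairs. A sketch in C (the function name, signature and expected values are hypothetical; they would come from whatever interface document the supplier provides):

        /* harness.c - link with the vendor object: cc -o harness harness.c vendor.o */
        #include <assert.h>
        #include <stdio.h>

        /* Assumed signature, taken from the supplier's interface spec */
        extern int vendor_compute(int input);

        int main(void)
        {
            /* Golden input/output pairs agreed with the supplier */
            assert(vendor_compute(0) == 0);
            assert(vendor_compute(42) == 84);
            printf("all harness checks passed\n");
            return 0;
        }

    The same harness can then be driven from a script (Perl, Tcl, etc., as mentioned above) to sweep larger input sets and diff the outputs against reference data.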

    Read the article

  • Java Cloud Service Integration to REST Service

    - by Jani Rautiainen
    Java Cloud Service (JCS) provides a platform to develop and deploy business applications in the cloud. In Fusion Applications Cloud deployments, customers do not have the option to deploy custom applications developed with JDeveloper, to ensure the integrity and supportability of the hosted application service. Instead, custom applications can be deployed to JCS and integrated with the Fusion Applications Cloud instance. This series of articles goes through the features of JCS, provides end-to-end examples of how to develop and deploy applications on JCS, and shows how to integrate them with the Fusion Applications instance. In this article a custom application integrating with a REST service will be implemented. We will use the REST services provided by Taleo as an example; however, the same approach will work with any REST service. In this example the data from the REST service is used to populate a dynamic table.

    Pre-requisites

    Access to Cloud instance
    In order to deploy the application, access to a JCS instance is needed; a free trial JCS instance can be obtained from the Oracle Cloud site. To register you will need a credit card, even though the credit card will not be charged. To register simply click "Try it" and choose the "Java" option. The confirmation email will contain the connection details. See this video for an example of the registration. Once the request is processed you will be assigned 2 service instances: Java and Database. Applications deployed to JCS must use Oracle Database Cloud Service as their underlying database, so when a JCS instance is created a database instance is associated with it using a JDBC data source. The cloud services can be monitored and managed through the web UI. For details refer to Getting Started with Oracle Cloud.

    JDeveloper
    JDeveloper contains Cloud-specific features related to e.g. connection and deployment. To use these features, download JDeveloper from the JDeveloper download site by clicking the "Download JDeveloper 11.1.1.7.1 for ADF deployment on Oracle Cloud" link; this version of JDeveloper has the JCS integration features that will be used in this article. For versions that do not include the Cloud integration features, the Oracle Java Cloud Service SDK or the JCS Java Console can be used for deployment. For details on installing and configuring JDeveloper refer to the installation guide. For details on the SDK refer to Using the Command-Line Interface to Monitor Oracle Java Cloud Service and Using the Command-Line Interface to Manage Oracle Java Cloud Service.

    Access to a local database
    The database associated with the JCS instance cannot be connected to with JDBC. Since creating ADFbc business components requires a JDBC connection, we will need access to a local database.

    3rd party libraries
    This example uses some 3rd party libraries for implementing the REST service call and processing the input / output content. Other libraries may also be used; however, these are tested to work.

    Jersey 1.x
    The Jersey library will be used as a client to make the call to the REST service. The JCS documentation for supported specifications states: Java API for RESTful Web Services (JAX-RS) 1.1. So Jersey 1.x will be used. Download the single-JAR Jersey bundle; in this example the Jersey 1.18 JAR bundle is used.

    json-simple
    The json-simple library will be used to process the JSON objects. Download the JAR file; in this example json-simple-1.1.1.jar is used.

    Accessing data in Taleo
    Before implementing the application it is beneficial to familiarize oneself with the data in Taleo.
    The easiest way to do this is with a REST client in your browser. Once added to the browser, the client can be used to call the REST services to test the URLs and data before adding them to the application. First derive the base URL for the service; this can be done with:

    Method: GET
    URL: https://tbe.taleo.net/MANAGER/dispatcher/api/v1/serviceUrl/<company name>

    The response will contain the base URL to be used for the service calls for the company. Next obtain an authentication token with:

    Method: POST
    URL: https://ch.tbe.taleo.net/CH07/ats/api/v1/login?orgCode=<company>&userName=<user name>&password=<password>

    The response includes an authentication token that can be used for a few hours to authenticate with the service:

        {
          "response": {
            "authToken": "webapi26419680747505890557"
          },
          "status": {
            "detail": {},
            "success": true
          }
        }

    To authenticate the service calls, navigate to "Headers -> Custom Header" and add a new request header with:

    Name: Cookie
    Value: authToken=webapi26419680747505890557

    Once the authentication token is defined, the tool can be used to invoke REST services; for example:

    Method: GET
    URL: https://ch.tbe.taleo.net/CH07/ats/api/v1/object/candidate/search.xml?status=16

    This data will be used in the application to be created. For details on the Taleo REST services refer to the Taleo Business Edition REST API Guide.

    Create Application
    First a Fusion Web Application is created and configured. Start JDeveloper and click "New Application":

    Application Name: JcsRestDemo
    Application Package Prefix: oracle.apps.jcs.test
    Application Template: Fusion Web Application (ADF)

    Configure Local Cloud Connection
    Follow the steps documented in the "Java Cloud Service ADF Web Application" article to configure a local database connection needed to create the ADFbc objects.

    Configure Libraries
    Add the 3rd party libraries to the class path. Create the following directory and copy the jar files into it:

    <JDEV_USER_HOME>/JcsRestDemo/lib

    Select the "Model" project, navigate "Application -> Project Properties -> Libraries and Classpath -> Add JAR / Directory" and add the 2 3rd party libraries.

    Accessing Data from Taleo
    To access data from Taleo using the REST service, the 3rd party libraries will be used. Two Java classes are implemented: one representing the Candidate object and another for accessing the Taleo repository.

    Candidate
    The Candidate object is a POJO used to represent the candidate data obtained from the Taleo repository. The data obtained will be used to populate the ADFbc object used to display the data on the UI. The Candidate object simply contains the variables we obtain using the REST services and the getters / setters for them. Navigate "New -> General -> Java -> Java Class", enter "Candidate" as the name and create it in the package "oracle.apps.jcs.test.model".
Copy / paste the following as the content: import oracle.jbo.domain.Number; public class Candidate { private Number candId; private String firstName; private String lastName; public Candidate() { super(); } public Candidate(Number candId, String firstName, String lastName) { super(); this.candId = candId; this.firstName = firstName; this.lastName = lastName; } public void setCandId(Number candId) { this.candId = candId; } public Number getCandId() { return candId; } public void setFirstName(String firstName) { this.firstName = firstName; } public String getFirstName() { return firstName; } public void setLastName(String lastName) { this.lastName = lastName; } public String getLastName() { return lastName; } } Taleo Repository Taleo repository class will interact with the Taleo REST services. The logic will query data from Taleo and populate Candidate objects with the data. The Candidate object will then be used to populate the ADFbc object used to display data on the UI. Navigate "New -> General -> Java -> Java Class", enter "TaleoRepository" as the name and create it in the package "oracle.apps.jcs.test.model".  Copy / paste the following as the content (for details of the implementation refer to the documentation in the code): import com.sun.jersey.api.client.Client; import com.sun.jersey.api.client.ClientResponse; import com.sun.jersey.api.client.WebResource; import com.sun.jersey.core.util.MultivaluedMapImpl; import java.io.StringReader; import java.util.ArrayList; import java.util.Iterator; import java.util.List; import java.util.Map; import javax.ws.rs.core.MediaType; import javax.ws.rs.core.MultivaluedMap; import oracle.jbo.domain.Number; import org.json.simple.JSONArray; import org.json.simple.JSONObject; import org.json.simple.parser.JSONParser; /** * This class interacts with the Taleo REST services */ public class TaleoRepository { /** * Connection information needed to access the Taleo services */ String _company = null; String _userName = null; String _password = null; /** * Jersey client used to access the REST services */ Client _client = null; /** * Parser for processing the JSON objects used as * input / output for the services */ JSONParser _parser = null; /** * The base url for constructing the REST URLs. This is obtained * from Taleo with a service call */ String _baseUrl = null; /** * Authentication token obtained from Taleo using a service call. * The token can be used to authenticate on subsequent * service calls. The token will expire in 4 hours */ String _authToken = null; /** * Static url that can be used to obtain the url used to construct * service calls for a given company */ private static String _taleoUrl = "https://tbe.taleo.net/MANAGER/dispatcher/api/v1/serviceUrl/"; /** * Default constructor for the repository * Authentication details are passed as parameters and used to generate * authentication token. Note that each service call will * generate its own token. This is done to avoid dealing with the expiry * of the token. Also only 20 tokens are allowed per user simultaneously. * So instead for each call there is login / logout. * * @param company the company for which the service calls are made * @param userName the user name to authenticate with * @param password the password to authenticate with. 
*/ public TaleoRepository(String company, String userName, String password) { super(); _company = company; _userName = userName; _password = password; _client = Client.create(); _parser = new JSONParser(); _baseUrl = getBaseUrl(); } /** * This obtains the base url for a company to be used * to construct the urls for service calls * @return base url for the service calls */ private String getBaseUrl() { String result = null; if (null != _baseUrl) { result = _baseUrl; } else { try { String company = _company; WebResource resource = _client.resource(_taleoUrl + company); ClientResponse response = resource.type(MediaType.APPLICATION_FORM_URLENCODED_TYPE).get(ClientResponse.class); String entity = response.getEntity(String.class); JSONObject jsonObject = (JSONObject)_parser.parse(new StringReader(entity)); JSONObject jsonResponse = (JSONObject)jsonObject.get("response"); result = (String)jsonResponse.get("URL"); } catch (Exception ex) { ex.printStackTrace(); } } return result; } /** * Generates authentication token, that can be used to authenticate on * subsequent service calls. Note that each service call will * generate its own token. This is done to avoid dealing with the expiry * of the token. Also only 20 tokens are allowed per user simultaneously. * So instead for each call there is login / logout. * @return authentication token that can be used to authenticate on * subsequent service calls */ private String login() { String result = null; try { MultivaluedMap<String, String> formData = new MultivaluedMapImpl(); formData.add("orgCode", _company); formData.add("userName", _userName); formData.add("password", _password); WebResource resource = _client.resource(_baseUrl + "login"); ClientResponse response = resource.type(MediaType.APPLICATION_FORM_URLENCODED_TYPE).post(ClientResponse.class, formData); String entity = response.getEntity(String.class); JSONObject jsonObject = (JSONObject)_parser.parse(new StringReader(entity)); JSONObject jsonResponse = (JSONObject)jsonObject.get("response"); result = (String)jsonResponse.get("authToken"); } catch (Exception ex) { throw new RuntimeException("Unable to login ", ex); } if (null == result) throw new RuntimeException("Unable to login "); return result; } /** * Releases a authentication token. Each call to login must be followed * by call to logout after the processing is done. This is required as * the tokens are limited to 20 per user and if not released the tokens * will only expire after 4 hours. * @param authToken */ private void logout(String authToken) { WebResource resource = _client.resource(_baseUrl + "logout"); resource.header("cookie", "authToken=" + authToken).post(ClientResponse.class); } /** * This method is used to obtain a list of candidates using a REST * service call. At this example the query is hard coded to query * based on status. 
The url constructed to access the service is: * <_baseUrl>/object/candidate/search.xml?status=16 * @return List of candidates obtained with the service call */ public List<Candidate> getCandidates() { List<Candidate> result = new ArrayList<Candidate>(); try { // First login, note that in finally block we must have logout _authToken = "authToken=" + login(); /** * Construct the URL, the resulting url will be: * <_baseUrl>/object/candidate/search.xml?status=16 */ MultivaluedMap<String, String> formData = new MultivaluedMapImpl(); formData.add("status", "16"); JSONArray searchResults = (JSONArray)getTaleoResource("object/candidate/search", "searchResults", formData); /** * Process the results, the resulting JSON object is something like * this (simplified for readability): * * { * "response": * { * "searchResults": * [ * { * "candidate": * { * "candId": 211, * "firstName": "Mary", * "lastName": "Stochi", * logic here will find the candidate object(s), obtain the desired * data from them, construct a Candidate object based on the data * and add it to the results. */ for (Object object : searchResults) { JSONObject temp = (JSONObject)object; JSONObject candidate = (JSONObject)findObject(temp, "candidate"); Long candIdTemp = (Long)candidate.get("candId"); Number candId = (null == candIdTemp ? null : new Number(candIdTemp)); String firstName = (String)candidate.get("firstName"); String lastName = (String)candidate.get("lastName"); result.add(new Candidate(candId, firstName, lastName)); } } catch (Exception ex) { ex.printStackTrace(); } finally { if (null != _authToken) logout(_authToken); } return result; } /** * Convenience method to construct url for the service call, invoke the * service and obtain a resource from the response * @param path the path for the service to be invoked. This is combined * with the base url to construct a url for the service * @param resource the key for the object in the response that will be * obtained * @param parameters any parameters used for the service call. The call * is slightly different depending whether parameters exist or not. * @return the resource from the response for the service call */ private Object getTaleoResource(String path, String resource, MultivaluedMap<String, String> parameters) { Object result = null; try { WebResource webResource = _client.resource(_baseUrl + path); ClientResponse response = null; if (null == parameters) response = webResource.header("cookie", _authToken).get(ClientResponse.class); else response = webResource.queryParams(parameters).header("cookie", _authToken).get(ClientResponse.class); String entity = response.getEntity(String.class); JSONObject jsonObject = (JSONObject)_parser.parse(new StringReader(entity)); result = findObject(jsonObject, resource); } catch (Exception ex) { ex.printStackTrace(); } return result; } /** * Convenience method to recursively find a object with an key * traversing down from a given root object. This will traverse a * JSONObject / JSONArray recursively to find a matching key, if found * the object with the key is returned. 
 * @param root root object which contains the key searched for
 * @param key the key for the object to search for
 * @return the object matching the key
 */
private Object findObject(Object root, String key) {
    Object result = null;
    if (root instanceof JSONObject) {
        JSONObject rootJSON = (JSONObject)root;
        if (rootJSON.containsKey(key)) {
            result = rootJSON.get(key);
        } else {
            Iterator children = rootJSON.entrySet().iterator();
            while (children.hasNext()) {
                Map.Entry entry = (Map.Entry)children.next();
                Object child = entry.getValue();
                if (child instanceof JSONObject || child instanceof JSONArray) {
                    result = findObject(child, key);
                    if (null != result)
                        break;
                }
            }
        }
    } else if (root instanceof JSONArray) {
        JSONArray rootJSON = (JSONArray)root;
        for (Object child : rootJSON) {
            if (child instanceof JSONObject || child instanceof JSONArray) {
                result = findObject(child, key);
                if (null != result)
                    break;
            }
        }
    }
    return result;
}
}

Creating Business Objects

While a JCS application can be created without a local database, a local database connection is required when using ADF BC objects, even if no database objects are referenced. For this example we will create a "transient" view object that is programmatically populated based on the data obtained from the Taleo REST services.

Creating ADFbc objects

Choose the "Model" project and navigate "New -> Business Tier : ADF Business Components : View Object". On the "Initialize Business Components Project" dialog choose the local database connection created in the previous step. On step 1 enter "JcsRestDemoVO" in the "Name" field and choose "Rows populated programmatically, not based on query". On step 2 create the following attributes: CandId (Type: Number, Updatable: Always, Key Attribute: checked) and Name (Type: String, Updatable: Always). On steps 3 and 4 accept the defaults and click "Next". On step 5 check the "Application Module" checkbox and enter "JcsRestDemoAM" as the name. Click "Finish" to generate the objects.

Populating the VO

To display the data on the UI, the "transient VO" is populated programmatically based on the data obtained from the Taleo REST services. Open "JcsRestDemoVOImpl.java" and copy / paste the following as the content (for details of the implementation refer to the documentation in the code):

import java.sql.ResultSet;
import java.util.List;
import java.util.ListIterator;
import oracle.jbo.server.ViewObjectImpl;
import oracle.jbo.server.ViewRowImpl;
import oracle.jbo.server.ViewRowSetImpl;
// ---------------------------------------------------------------------
// --- File generated by Oracle ADF Business Components Design Time.
// --- Tue Feb 18 09:40:25 PST 2014
// --- Custom code may be added to this class.
// --- Warning: Do not modify method signatures of generated methods.
// ---------------------------------------------------------------------
public class JcsRestDemoVOImpl extends ViewObjectImpl {

    /**
     * This is the default constructor (do not remove).
     */
    public JcsRestDemoVOImpl() {
    }

    @Override
    public void executeQuery() {
        /**
         * For some reason we need to reset everything, otherwise
         * a 2nd entry to the UI screen may fail with
         * "java.util.NoSuchElementException" in the createRowFromResultSet
         * call to "candidates.next()". I am not sure why this is happening,
         * as the Iterator is new and "hasNext" is true at the point
         * of the execution. My theory is that since the iterator object is
         * exactly the same, the VO cache somehow reuses the iterator, including
         * the pointer that has already exhausted the iterable elements on the
         * previous run.
         * Work around the issue
         * here by cleaning out everything on the VO every time before the query
         * is executed on the VO.
         */
        getViewDef().setQuery(null);
        getViewDef().setSelectClause(null);
        setQuery(null);
        this.reset();
        this.clearCache();
        super.executeQuery();
    }

    /**
     * executeQueryForCollection - overridden for custom java data source support.
     */
    protected void executeQueryForCollection(Object qc, Object[] params, int noUserParams) {
        /**
         * Integrate with the Taleo REST services using the TaleoRepository class.
         * A list of candidates matching a hard-coded query is obtained.
         */
        TaleoRepository repository = new TaleoRepository(<company>, <username>, <password>);
        List<Candidate> candidates = repository.getCandidates();
        /**
         * Store an iterator for the candidates as user data on the collection.
         * This will be used in createRowFromResultSet to create rows based on
         * the custom iterator.
         */
        ListIterator<Candidate> candidatescIterator = candidates.listIterator();
        setUserDataForCollection(qc, candidatescIterator);
        super.executeQueryForCollection(qc, params, noUserParams);
    }

    /**
     * hasNextForCollection - overridden for custom java data source support.
     */
    protected boolean hasNextForCollection(Object qc) {
        boolean result = false;
        /**
         * Determines whether there are candidates for which to create a row.
         */
        ListIterator<Candidate> candidates = (ListIterator<Candidate>)getUserDataForCollection(qc);
        result = candidates.hasNext();
        /**
         * If all candidates have been created, indicate that processing is done.
         */
        if (!result) {
            setFetchCompleteForCollection(qc, true);
        }
        return result;
    }

    /**
     * createRowFromResultSet - overridden for custom java data source support.
     */
    protected ViewRowImpl createRowFromResultSet(Object qc, ResultSet resultSet) {
        /**
         * Obtain the next candidate from the collection and create a row
         * for it.
         */
        ListIterator<Candidate> candidates = (ListIterator<Candidate>)getUserDataForCollection(qc);
        ViewRowImpl row = createNewRowForCollection(qc);
        try {
            Candidate candidate = candidates.next();
            row.setAttribute("CandId", candidate.getCandId());
            row.setAttribute("Name", candidate.getFirstName() + " " + candidate.getLastName());
        } catch (Exception e) {
            e.printStackTrace();
        }
        return row;
    }

    /**
     * getQueryHitCount - overridden for custom java data source support.
     */
    public long getQueryHitCount(ViewRowSetImpl viewRowSet) {
        /**
         * Not implemented for this example; we always return 0.
         */
        return 0;
    }
}

Creating UI

Choose the "ViewController" project and navigate "New -> Web Tier : JSF : JSF Page". On the "Create JSF Page" dialog enter "JcsRestDemo" as the name and ensure that "Create as XML document (*.jspx)" is checked. Open "JcsRestDemo.jspx", navigate to "Data Controls -> JcsRestDemoAMDataControl -> JcsRestDemoVO1" and drag & drop the VO onto the "<af:form>" as an "ADF Read-only Table". Accept the defaults in "Edit Table Columns". To execute the query, navigate to "Data Controls -> JcsRestDemoAMDataControl -> JcsRestDemoVO1 -> Operations -> Execute" and drag & drop the operation onto the "<af:form>" as a "Button".

Deploying to JCS

Follow the same steps as documented in the previous article "Java Cloud Service ADF Web Application". Once deployed, the application can be accessed with the URL: https://java-[identity domain].java.[data center].oraclecloudapps.com/JcsRestDemo-ViewController-context-root/faces/JcsRestDemo.jspx The UI displays a list of candidates obtained from the Taleo REST Services.

Summary

In this article we learned how to integrate with REST services using the Jersey library in JCS.
In future articles various other integration techniques will be covered.
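As a compact recap of the pattern above, outside the ADF plumbing, here is a minimal sketch of the login / call / logout round trip using the same Jersey 1.x client and json-simple parser APIs that the repository class uses. The base URL, company and credential values are placeholders; in the real flow the base URL is resolved via the dispatcher call shown in getBaseUrl() above.

import java.io.StringReader;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.MultivaluedMap;
import org.json.simple.JSONObject;
import org.json.simple.parser.JSONParser;
import com.sun.jersey.api.client.Client;
import com.sun.jersey.api.client.ClientResponse;
import com.sun.jersey.api.client.WebResource;
import com.sun.jersey.core.util.MultivaluedMapImpl;

public class TaleoQuickStart {

    public static void main(String[] args) throws Exception {
        // Placeholder values -- replace with real ones.
        String baseUrl = "<base url>";
        String company = "<company>";
        String user = "<username>";
        String password = "<password>";

        Client client = Client.create();
        JSONParser parser = new JSONParser();
        String authToken = null;
        try {
            // Login: POST the credentials as form data and pull
            // response.authToken out of the JSON envelope.
            MultivaluedMap<String, String> form = new MultivaluedMapImpl();
            form.add("orgCode", company);
            form.add("userName", user);
            form.add("password", password);
            ClientResponse login = client.resource(baseUrl + "login")
                    .type(MediaType.APPLICATION_FORM_URLENCODED_TYPE)
                    .post(ClientResponse.class, form);
            JSONObject envelope = (JSONObject) parser.parse(new StringReader(login.getEntity(String.class)));
            authToken = "authToken=" + ((JSONObject) envelope.get("response")).get("authToken");

            // Call: pass the token as a cookie and query candidates by status.
            WebResource search = client.resource(baseUrl + "object/candidate/search")
                    .queryParam("status", "16");
            ClientResponse response = search.header("cookie", authToken).get(ClientResponse.class);
            System.out.println(response.getEntity(String.class));
        } finally {
            // Logout: always release the token (max 20 per user, 4 hour expiry).
            if (authToken != null) {
                client.resource(baseUrl + "logout").header("cookie", authToken).post(ClientResponse.class);
            }
        }
    }
}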

    Read the article

  • 8 Backup Tools Explained for Windows 7 and 8

    - by Chris Hoffman
Backups on Windows can be confusing. Whether you’re using Windows 7 or 8, you have quite a few integrated backup tools to think about. Windows 8 made quite a few changes, too. You can also use third-party backup software, whether you want to back up to an external drive or back up your files to online storage. We won’t cover third-party tools here — just the ones built into Windows.

Backup and Restore on Windows 7

Windows 7 has its own Backup and Restore feature that lets you create backups manually or on a schedule. You’ll find it under Backup and Restore in the Control Panel. The original version of Windows 8 still contained this tool, renamed Windows 7 File Recovery. This allowed former Windows 7 users to restore files from those old Windows 7 backups or keep using the familiar backup tool for a little while. Windows 7 File Recovery was removed in Windows 8.1.

System Restore

System Restore on both Windows 7 and 8 functions as a sort of automatic system backup feature. It creates backup copies of important system and program files on a schedule or when you perform certain tasks, such as installing a hardware driver. If system files become corrupted or your computer’s software becomes unstable, you can use System Restore to restore your system and program files from a System Restore point. This isn’t a way to back up your personal files. It’s more of a troubleshooting feature that uses backups to restore your system to its previous working state.

Previous Versions on Windows 7

Windows 7’s Previous Versions feature allows you to restore older versions of files — or deleted files. These files can come from backups created with Windows 7’s Backup and Restore feature, but they can also come from System Restore points. When Windows 7 creates a System Restore point, it will sometimes contain your personal files. Previous Versions allows you to extract these personal files from restore points. This only applies to Windows 7. On Windows 8, System Restore won’t create backup copies of your personal files. The Previous Versions feature was removed on Windows 8.

File History

Windows 8 replaced Windows 7’s backup tools with File History, although this feature isn’t enabled by default. File History is designed to be a simple, easy way to create backups of your data files on an external drive or network location. File History replaces both Windows 7’s Backup and Previous Versions features. Windows System Restore won’t create copies of personal files on Windows 8. This means you can’t actually recover older versions of files until you enable File History yourself — it isn’t enabled by default.

System Image Backups

Windows also allows you to create system image backups. These are backup images of your entire operating system, including your system files, installed programs, and personal files. This feature was included in both Windows 7 and Windows 8, but it was hidden in the preview versions of Windows 8.1. After many user complaints, it was restored and is still available in the final version of Windows 8.1 — click System Image Backup on the File History Control Panel.

Storage Space Mirroring

Windows 8’s Storage Spaces feature allows you to set up RAID-like features in software. For example, you can use Storage Spaces to set up two hard disks of the same size in a mirroring configuration. They’ll appear as a single drive in Windows. When you write to this virtual drive, the files will be saved to both physical drives. If one drive fails, your files will still be available on the other drive. This isn’t a good long-term backup solution, but it is a way of ensuring you won’t lose important files if a single drive fails.

Microsoft Account Settings Backup

Windows 8 and 8.1 allow you to back up a variety of system settings — including personalization, desktop, and input settings. If you’re signing in with a Microsoft account, OneDrive settings backup is enabled automatically. This feature can be controlled under OneDrive > Sync settings in the PC settings app. This feature only backs up a few settings. It’s really more of a way to sync settings between devices.

OneDrive Cloud Storage

Microsoft hasn’t been talking much about File History since Windows 8 was released. That’s because they want people to use OneDrive instead. OneDrive — formerly known as SkyDrive — was added to the Windows desktop in Windows 8.1. Save your files here and they’ll be stored online, tied to your Microsoft account. You can then sign in on any other computer, smartphone, tablet, or even via the web and access your files. Microsoft wants typical PC users “backing up” their files with OneDrive so they’ll be available on any device. You don’t have to worry about all these features. Just choose a backup strategy to ensure your files are safe if your computer’s hard disk fails you. Whether it’s an integrated backup tool or a third-party backup application, be sure to back up your files.

    Read the article

  • In the Groove: PASS Board Year 1, Q3

    - by Denise McInerney
It's nine months into my first year on the PASS Board and I feel like I've found my rhythm. I've accomplished one of the goals I set out for the year and have made progress on others. Here's a recap of the last few months.

Anti-Harassment Policy & Process Completed

In April I began work on a Code of Conduct for the PASS Summit. The Board had several good discussions and various PASS members provided feedback. You can read more about that in this blog post. Since the document was focused on issues of harassment, we renamed it the "Anti-Harassment Policy" and it was approved by the Board in August. The next step was to refine the guidelines and process for enforcement of the AHP. A subcommittee worked on this and presented an update to the Board at the September meeting. You can read more about that in this post, and you can find the process document here.

Global Growth

Expanding PASS' reach and making the organization relevant to SQL Server communities around the world has been a focus of the Board's work in 2012. We took the Global Growth initiative out to the community for feedback, and everyone on the Board participated, via Twitter chats, Town Hall meetings, feedback forums and in-person discussions. This community participation helped shape and refine our plans. Implementing the vision for Global Growth goes across all portfolios. The Virtual Chapters are well-positioned to help the organization move forward in this area. One outcome of the Global Growth discussions with the community is the expansion of two of the VCs from country-specific to language-specific. Thanks to the leadership in Brazil & Mexico for taking the lead here. I look forward to continued success for the Portuguese- and Spanish-language Virtual Chapters. Together with the Global Chinese VC, PASS is off to a good start in making the VCs truly global.

Virtual Chapters

The VCs continue to grow and expand. Volunteers recently rebooted the Azure and Virtualization VCs, and a new Education VC will be launching soon. Every week VCs offer excellent free training on a variety of topics. It's the dedication of the VC leaders and volunteers that makes all this possible, and I thank them for it.

Board meeting

The Board had an in-person meeting in September in San Diego, CA. As usual we covered a number of topics, including governance changes to support Global Growth, the upcoming Summit, 2013 events and the (then) upcoming PASS election.

Next Up

Much of the last couple of months has been focused on preparing for the PASS Summit in Seattle Nov. 6-9. I'll be there all week; feel free to stop me if you have a question or concern, or just to introduce yourself. Here are some of the places you can find me:

VC Leaders Meeting: Tuesday at 8:00 am the VC leaders will have a meeting. We'll review some of the year's highlights and talk about plans for the next year.

Welcome Reception: The VCs will be at the Welcome Reception in the new VC Lounge. Come by, learn more about what the VCs have to offer and meet others who share your interests.

Exceptional DBA Awards Party: I'm looking forward to seeing PASS Women in Tech VC leader Meredith Ryan receive her award at this event sponsored by Red Gate.

Session Presentation: I will be presenting a spotlight session entitled "Stop Bad Data in Its OLTP Tracks" on Wednesday at 3:00 p.m.

Exhibitor Reception: This reception Wednesday evening in the Expo Hall is a great opportunity to learn more about tools and solutions that can help you in your job.

Women in Tech Luncheon: This year marks the 10th WIT Luncheon at PASS. I'm honored to be on the panel with Stefanie Higgins, Kevin Kline, Kendra Little and Jen Stirrup. This event is on Thursday at 11:30.

Community Appreciation Party: Thursday evening don't miss this event thanking all of you for everything you do for PASS and the community. This year we will be at the Experience Music Project and it promises to be a fun party.

Board Q & A: Friday 9:45-11:15 am the members of the Board will be available to answer your questions. If you have a question for us, or want to hear what other members are thinking about, come by room 401 Friday morning.

    Read the article

  • The first day of JavaOne is already over!

    - by delabassee
In the past, Sunday used to be a more relaxing day with ‘just’ some JavaOne activities going on. Sunday used to be a soft day to prepare yourself for an exhausting week. This is now over as JavaOne is expanding; Sunday is now an integral part of the conference. One of the side effects of this extra day is that some activities related to JavaOne and OpenWorld, such as MySQL Connect, are being pushed to start a day earlier on Saturday (can you spot the pattern here?). On the GlassFish front, Sunday was a very busy day! It started at the Moscone Center with the annual GlassFish Community Event where the Java EE 7 and GF 4 roadmaps were presented and discussed. During the event, different GlassFish users such as ZeroTurnaround (the JRebel guys), Grupo RBS and IDR Solutions shared their views on GF: why they like GF, but also what could be improved. The event was also a forum for the GF community to exchange with some of the key Java EE / GlassFish Oracle Executives and the different GF team members. The Strategy keynote and the Technical keynote were held in the Masonic Auditorium later in the afternoon. Oracle executives presented the plans for Java SE, JavaFX and Java EE. As on-demand replays will be available soon, I will not summarize several hours of content, but here are some personal takeaways from those keynotes.

Modularity

Modularity is a big deal. We know by now that Project Jigsaw will not be ready for Java SE 8, but in any case it is already possible (and encouraged) to test Jigsaw today. In the future, Java EE plans to rely on the modularity features provided by Java SE, so Project Jigsaw is also relevant for Java EE developers. Shorter term, to cover some of the modular requirements, Java SE will adopt the approach that was used for Java EE 6 and the notion of Profiles. This approach does not define a module system per se; Profiles are a way to clearly define different subsets of Java SE to fulfill different needs (e.g. the full JRE is not required for a headless application). The introduction of different Profiles, from the Base Profile (10 MB) to the Full Profile (50+ MB), has been proposed for Java SE 8.

Embedded

Embedded is a strong theme going forward for the Java platform. There is now a dedicated program: Java Embedded @ JavaOne. Java by nature (e.g. platform independence, built-in security, the ability to easily talk to any back-end system, the large set of skills available on the market, etc.) is probably the most suited platform for the Internet of Things. You can quickly get up to speed and develop services and applications for that space just by using your current Java skills. All you need to start developing on ARM is a $35 Raspberry Pi ARM board ($25 if you are cheap and can live without an Ethernet connection) and the recently released JDK for Linux/ARM. Obviously, GlassFish runs on Raspberry Pi. If you want to go further in the embedded space, you should take a look at Java SE Embedded, an optimized, low-footprint Java environment that supports the major embedded architectures (ARM, PPC and x86). Finally, Oracle has recently introduced Java Embedded Suite, a new solution that brings modern middleware capabilities to the embedded space. Java Embedded Suite is an optimized solution that leverages Java SE Embedded but also GlassFish, Jersey and JavaDB to deploy advanced value-added capabilities (e.g. sensor data filtering) deeper in the network, closer to the devices.

JavaFX

JavaFX is going strong! Starting from Java SE 7u6, JavaFX is bundled with the JDK.
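If you want to try it right away, a first JavaFX program takes only a few lines. Below is a minimal sketch against the JavaFX 2.x API bundled with those JDKs; the class name and label text are made up for illustration:

import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.control.Label;
import javafx.scene.layout.StackPane;
import javafx.stage.Stage;

public class HelloFX extends Application {

    @Override
    public void start(Stage stage) {
        // Build a trivial scene graph: a label centered in a stack pane.
        StackPane root = new StackPane();
        root.getChildren().add(new Label("Hello from JavaFX"));
        stage.setTitle("HelloFX");
        stage.setScene(new Scene(root, 300, 100));
        stage.show();
    }

    public static void main(String[] args) {
        // JavaFX 2.x entry point; works with the runtime bundled since 7u6.
        launch(args);
    }
}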
JavaFX is now available for all the major desktop platforms (Windows, Linux and Mac OS X). JavaFX is now also available, in developer preview, for low-end devices running Linux/ARM. During the keynote, JavaFX was shown running on a Raspberry Pi! And as announced during the keynote, JavaFX should be fully open-sourced by the end of the year; contributions are welcome! There is a strong momentum around JavaFX; it’s the ideal client solution for the Java platform, a client layer that works perfectly with GlassFish on the back-end. If you were not convinced by JavaFX, it’s time to reconsider it! As an old Chinese proverb says, “One tweet is worth a thousand words!”

HTML5, Project Avatar and Java EE 7

HTML5 got a lot of airtime too; it was covered during the Java EE 7 section of the keynote. Some details about Project Avatar, Oracle’s incubator project for a TSA (Thin Server Architecture) solution, were shared during the keynote. On the tooling side, Project Easel running on NetBeans 7.3 beta was demoed, including a cool NetBeans debugging session running in Chrome! HTML5, Project Avatar and Java EE 7 deserve separate posts...

Feedback

We need your feedback! There are many projects, JSRs and products cooking: GlassFish 4, Project Jigsaw, Concurrency Utilities for Java EE (JSR 236), OpenJFX and OpenJDK, to name just a few. Those projects and specifications will have a profound impact on the Java platform for the years to come! So if you have the opportunity, download, install, learn, test them and give feedback! Remember, you can "Make the Future Java!"

Finally, the traditional GlassFish Party at the Thirsty Bear concluded the first JavaOne day. This party is another place where the community can freely exchange with the GlassFish team in a more relaxed, more friendly (but sometimes noisier) atmosphere. Arun has posted a set of pictures to reflect the atmosphere of the keynotes and the GlassFish party. You can find more details on the other Java EE and GlassFish activities here.

    Read the article

  • How to build a great relationship with your colleagues

    - by Maria Sandu
When you start a new job, you worry about your performance, about being able to do what the manager asks you to do, but you also worry about your relationships with your colleagues. How will you get along with them? What if they don’t like you? Have you ever felt you’re “the new guy” and your colleagues already have their own way of talking to each other, their own jokes? It’s a common feeling and can actually become stressful. I am Norbert, a Middleware Presales Intern in Hungary, and I’ve been working at Oracle for only 1 month. Joining such a big company has been a challenge from many perspectives. One of them was adapting to the environment and getting to know all my colleagues. You know it’s quite difficult to introduce yourself, to try to liaise with them and find some common topics, so I felt very lucky and comfortable when my manager introduced me to all of my colleagues. It was easier to settle in, and we basically had a starting point for our discussions. We started to talk about what my position means, how long they’ve been with Oracle and other Oracle-related topics, but also more personal stuff like what they do after work. Having this opportunity of talking with all of them helped me introduce myself in a proper way, and actually I told them many things about myself. Networking wasn’t my best skill, but these first days were really helpful from a networking point of view.

What else can you do to get along with your colleagues? A second thing I consider really helpful in networking is asking work-related questions. For instance, when you don’t know how to do something or don’t understand it, asking one of your colleagues will also help you to make a connection with him, and you can easily continue the discussion with other, more personal topics. It’s a very effective strategy, and in a company like Oracle people are very willing to help you with your tasks and perform at a high level. If you see your colleagues going to lunch, you should join them. It will help you become part of their community: you’ll find out what’s new in their lives and, step by step, take part in their conversations and be up to date with the hot topics they talk about.

Another opportunity to become part of your colleagues’ community is the internal events. Subscribing to the local free-time activities mailing list is very useful for finding out when they’re going out for a drink or attending all sorts of events. For instance, this is how I found out about a party within Oracle that most of the employees here attend. It’s a wonderful opportunity for chatting and making stronger connections with some of them. How important is attending these events? Think about how much time you spend at work. You’d like to enjoy your work and the environment, so getting along with your colleagues is a nice thing to have. I recently attended a corporate party whose purpose was to facilitate interaction and communication between employees. It was a real success and we had a lot of fun, especially because it was a costume party. All the fancy dresses and funny clothes we wore made the atmosphere really enjoyable. It was easy to liaise with colleagues I had never interacted with before. There was a friendly spirit among us, and we chatted about personal stuff and all sorts of pleasant things. Working in an international company is not easy because you interact with many people with different styles, but all these opportunities for informal interaction are a good way to adapt to a new working environment.

    Read the article

  • Access Control Service v2: Registering Web Identities in your Applications [concepts]

    - by Your DisplayName here!
ACS v2 supports two fundamental types of client identities – I like to call them “enterprise identities” (WS-*) and “web identities” (Google, LiveID, OpenID in general…). I also see two different “mind sets” when it comes to application design using the above identity types:

Enterprise identities – often the fact that a client can present a token from a trusted identity provider means he is a legitimate user of the application. Trust relationships and authorization details have been negotiated out of band (often on paper).

Web identities – the fact that a user can authenticate with Google et al. does not necessarily mean he is a legitimate (or registered) user of an application. Typically additional steps are necessary (like filling out a form, email confirmation etc).

Sometimes a mixture of both approaches exists; for the sake of this post, I will focus on the web identity case. I got a number of questions about how to implement the web identity scenario, and after some conversations it turns out it is the old authentication vs. authorization problem that gets in the way. Many people use the IsAuthenticated property on IIdentity to make security decisions in their applications (or deny user=”?” in ASP.NET terms). That’s a very natural thing to do, because authentication was done inside the application and we knew exactly when the IsAuthenticated condition was true. Been there, done that. Guilty ;)

The fundamental difference between these “old style” apps and federation is that authentication is not done by the application anymore. It is done by a third-party service, and in the case of web identity providers, by services that are not under our control (nor do we have a formal business relationship with these providers). Now the issue is, when you switch to ACS and someone with a Google account authenticates, indeed IsAuthenticated is true – because that’s what he is! This does not mean that he is also authorized to use the application. It just proves he was able to authenticate with Google. Now this obviously leads to confusion. How can we solve that? Easy answer: we have to deal with authentication and authorization separately. Job done ;)

For many application types I see this general approach: the application uses ACS for authentication (maybe both enterprise and web identities; we focus on web identities here, but you could easily have a dual approach). The application offers to authenticate (or sign in) via web identity accounts like LiveID, Google, Facebook etc. The application also maintains a database of its “own” users, where you typically want to store additional information about each user.

In such an application type it is important to have a unique identifier for your users (think the primary key of your user database). What would that be? Most web identity providers (and all the standard ACS v2 supported ones) emit a NameIdentifier claim. This is a stable ID for the client (scoped to the relying party – more on that later). Furthermore, ACS emits a claim identifying the identity provider (like the original issuer concept in WIF). When you combine these two values, you can be sure to have a unique identifier for the user, e.g.: Facebook-134952459903700\799880347

You can now check on incoming calls whether the user is already registered and, if yes, swap the ACS claims with claims coming from your user database. One claim might be a role like “Registered User”, which can then be easily used to do authorization checks in the application. The WIF claims authentication manager is a perfect place to do the claims transformation (a small sketch of this lookup follows at the end of this excerpt). If the user is not registered, show a registration form. Maybe you can use some claims from the identity provider to pre-fill form fields (see here where I show how to use the Facebook API to fetch additional user properties). After successful registration (which may include other mechanisms like a confirmation email), flip the bit in your database to make the web identity a registered user. This is all very theoretical. In the next post I will show some code and provide a download link for the complete sample.

More on NameIdentifier

Identity providers “guarantee” that the name identifier for a given user in your application will always be the same. But different applications (in the case of ACS – different ACS namespaces) will see different name identifiers. This is by design, to protect the privacy of users, because identical name identifiers could be used to create “profiles” of some sort for that user. In technical terms they create the name identifier approximately like this: name identifier = Hash((Provider Internal User ID) + (Relying Party Address)). Why is this important to know? Well – when you change the name of your ACS namespace, the name identifiers will change as well and you will lose your “connection” to your existing users. Oh, and btw – never use any other claims (like email address or name) to form a unique ID – these can often be changed by users.
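To make the identifier composition concrete, here is a small sketch of the registration lookup described above. It is written in Java for consistency with the other examples in these results, although the original post targets .NET/WIF; the class and method names are illustrative only and are not part of any ACS or WIF API.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: composes the unique user key the post describes,
// i.e. "<identity provider>\<name identifier>", and checks a local user store.
public class WebIdentityRegistry {

    // Stand-in for the application's own user database.
    private final Map<String, String> registeredUsers = new ConcurrentHashMap<>();

    /** Combine the two claims into a stable, per-application unique ID. */
    public String uniqueId(String identityProvider, String nameIdentifier) {
        // e.g. "Facebook-134952459903700\799880347"
        return identityProvider + "\\" + nameIdentifier;
    }

    /** Swap incoming claims for an application role, as the post suggests. */
    public String roleFor(String identityProvider, String nameIdentifier) {
        String id = uniqueId(identityProvider, nameIdentifier);
        // Registered users get an application role; everyone else must register first.
        return registeredUsers.containsKey(id) ? "Registered User" : "Unregistered";
    }

    /** Called after the registration form (and e.g. email confirmation) succeeds. */
    public void register(String identityProvider, String nameIdentifier, String displayName) {
        registeredUsers.put(uniqueId(identityProvider, nameIdentifier), displayName);
    }
}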

    Read the article

  • axis2 web service behaves differently when tested with a web service client or with a local test class

    - by Stefano
Hello, I need to update a facade to some web service proxy classes for a third-party web service, and expose them as a service. This is for two reasons: to maintain the same interface for all applications that need to use the system (it is actually migrating, and there are a few differences in the third-party WS, e.g. method names), and to expose a simplified interface. The third party has provided me with a manual and some pregenerated proxy classes for their service (the Java file says generated with Axis2 1.4). I've used NetBeans 6.8 and the Axis2 plugin to create an Axis2 service. This service contains the proxy classes and the facade class, which instantiates the web service proxy and calls its methods; the facade class is exposed as a service. I've used Axis2 1.4 (at the beginning, and later 1.5) and Tomcat 6.0. The first test I did was to call the facade methods from inside the project itself, and it worked. Then I created a new project with a JAX-WS web service client to call my class deployed on Axis2. At this point two strange things happened:

The third-party proxy class appeared in the Axis2 services page as if it were a new service (if I try to get the WSDL, Axis raises an error). E.g. the proxy interface is named WebServicesAPI (_stub is the concrete class) and, after the first call to my service, I find a new "WebServicesAPI1272968932531_1" service inside Axis. The call obviously fails.

I began to sniff SOAP messages with Wireshark and found that the messages created when using the proxy classes directly from my facade test class differ from the messages created after being deployed on Axis. I've noticed they differ by the presence of the SOAP header in the failing message.

Any help would be greatly appreciated: maybe I messed up something, or there might be some incompatibility or version mistake? Below I've added the signature of the third-party proxy, its implementation and the different SOAP messages:

/*
 * WebServicesAPI.java
 * This file was auto-generated from WSDL
 * by the Apache Axis2 version: 1.4 Built on : Apr 26, 2008 (06:24:30 EDT)
 */
package com.ibm.eci.wsapi;

public interface WebServicesAPI {
    public com.ibm.eci.wsapi.ArrayOfstring getWorkItemHistory(
        java.lang.String stateKey,
        java.lang.String logonID,
        com.ibm.eci.wsapi.RepoItemHandle workItemHandle)
        throws java.rmi.RemoteException, com.ibm.eci.wsapi.ExceptionException0;
    ...etc

The concrete class is:

/**
 * WebServicesAPIStub.java
 *
 * This file was auto-generated from WSDL
 * by the Apache Axis2 version: 1.4 Built on : Apr 26, 2008 (06:24:30 EDT)
 */
package com.ibm.eci.wsapi;

/*
 * WebServicesAPIStub java implementation
 */
public class WebServicesAPIStub extends org.apache.axis2.client.Stub implements WebServicesAPI {
    protected org.apache.axis2.description.AxisOperation[] _operations;
    ...
    public com.ibm.eci.wsapi.ArrayOfstring getWorkItemHistory(
        java.lang.String stateKey297,
        java.lang.String logonID298,
        com.ibm.eci.wsapi.RepoItemHandle workItemHandle299)
        throws java.rmi.RemoteException, com.ibm.eci.wsapi.ExceptionException0 {
        org.apache.axis2.context.MessageContext _messageContext = null;
        try {
            org.apache.axis2.client.OperationClient _operationClient = _serviceClient.createClient(_operations[0].getName());
            _operationClient.getOptions().setAction("\"\"");
            _operationClient.getOptions().setExceptionToBeThrownOnSOAPFault(true);
            addPropertyToOperationClient(_operationClient, org.apache.axis2.description.WSDL2Constants.ATTR_WHTTP_QUERY_PARAMETER_SEPARATOR, "&");
            // create a message context
            _messageContext = new org.apache.axis2.context.MessageContext();
            // create SOAP envelope with that payload
            org.apache.axiom.soap.SOAPEnvelope env = null;
            com.ibm.eci.wsapi.GetWorkItemHistoryE dummyWrappedType = null;
            env = toEnvelope(getFactory(_operationClient.getOptions().getSoapVersionURI()),
                stateKey297, logonID298, workItemHandle299, dummyWrappedType,
                optimizeContent(new javax.xml.namespace.QName("http://wsapi.eci.ibm.com", "getWorkItemHistory")));
            // adding SOAP soap_headers
            _serviceClient.addHeadersToEnvelope(env);
            // set the message context with that soap envelope
            _messageContext.setEnvelope(env);
            // add the message context to the operation client
            _operationClient.addMessageContext(_messageContext);
            // execute the operation client
            _operationClient.execute(true);
            org.apache.axis2.context.MessageContext _returnMessageContext = _operationClient.getMessageContext(
                org.apache.axis2.wsdl.WSDLConstants.MESSAGE_LABEL_IN_VALUE);
            org.apache.axiom.soap.SOAPEnvelope _returnEnv = _returnMessageContext.getEnvelope();
            java.lang.Object object = fromOM(_returnEnv.getBody().getFirstElement(),
                com.ibm.eci.wsapi.GetWorkItemHistoryResponseE.class,
                getEnvelopeNamespaces(_returnEnv));
            return getGetWorkItemHistoryResponse_return((com.ibm.eci.wsapi.GetWorkItemHistoryResponseE)object);
    ...
The failing SOAP message (generated by the JAX-WS client call to the Axis-deployed service) is:

POST /vbr_wsapi/services/WebServicesAPI.Endpoint HTTP/1.1
Content-Type: text/xml; charset=UTF-8
SOAPAction: ""
User-Agent: Axis2
Host: n0611049:9083
Transfer-Encoding: chunked

<?xml version='1.0' encoding='UTF-8'?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Header xmlns:wsa="http://www.w3.org/2005/08/addressing">
    <wsa:To>http://n0611049:9083/vbr_wsapi/services/WebServicesAPI.Endpoint</wsa:To>
    <wsa:MessageID>urn:uuid:A31AD99897F9045E981272964443982</wsa:MessageID>
    <wsa:Action>""</wsa:Action>
  </soapenv:Header>
  <soapenv:Body>
    <ns1:initializeProps xmlns:ns1="http://wsapi.eci.ibm.com">
      <props><val>client.locale=it_IT</val></props>
    </ns1:initializeProps>
  </soapenv:Body>
</soapenv:Envelope>

HTTP/1.1 500 Internal Server Error
Content-Type: text/xml; charset=UTF-8
Content-Language: en-US
Transfer-Encoding: chunked
Connection: Close
Date: Tue, 04 May 2010 09:16:15 GMT
Server: WebSphere Application Server/7.0

<?xml version='1.0' encoding='UTF-8'?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Header xmlns:wsa="http://www.w3.org/2005/08/addressing">
    <wsa:Action>http://www.w3.org/2005/08/addressing/fault</wsa:Action>
    <wsa:RelatesTo>urn:uuid:A31AD99897F9045E981272964443982</wsa:RelatesTo>
    <wsa:FaultDetail>
      <wsa:ProblemAction>
        <wsa:Action>""</wsa:Action>
      </wsa:ProblemAction>
    </wsa:FaultDetail>
  </soapenv:Header>
  <soapenv:Body>
    <soapenv:Fault xmlns:wsa="http://www.w3.org/2005/08/addressing">
      <faultcode>wsa:ActionNotSupported</faultcode>
      <faultstring>The [action] cannot be processed at the receiver.</faultstring>
      <detail />
    </soapenv:Fault>
  </soapenv:Body>
</soapenv:Envelope>

The successful call (generated by my local test class, not yet deployed to Axis) is:

POST /vbr_wsapi/services/WebServicesAPI.Endpoint HTTP/1.1
Content-Type: text/xml; charset=UTF-8
SOAPAction: ""
User-Agent: Axis2
Host: n0611049:9083
Transfer-Encoding: chunked

<?xml version='1.0' encoding='UTF-8'?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <ns1:initializeProps xmlns:ns1="http://wsapi.eci.ibm.com">
      <props><val>client.locale=it_IT</val></props>
    </ns1:initializeProps>
  </soapenv:Body>
</soapenv:Envelope>

HTTP/1.1 200 OK
Content-Type: text/xml; charset=UTF-8
Content-Language: en-US
Transfer-Encoding: chunked
Date: Tue, 04 May 2010 09:40:03 GMT
Server: WebSphere Application Server/7.0

<?xml version='1.0' encoding='UTF-8'?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <dlwmin:initializePropsResponse xmlns:dlwmin="http://wsapi.eci.ibm.com">
      <return>e0e40cc51ceb0adf96c582bb6e047b3d0f</return>
    </dlwmin:initializePropsResponse>
  </soapenv:Body>
</soapenv:Envelope>

POST /vbr_wsapi/services/WebServicesAPI.Endpoint HTTP/1.1
Content-Type: text/xml; charset=UTF-8
SOAPAction: ""
User-Agent: Axis2
Host: n0611049:9083
Transfer-Encoding: chunked

<?xml version='1.0' encoding='UTF-8'?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <ns1:logon xmlns:ns1="http://wsapi.eci.ibm.com">
      <stateKey>e0e40cc51ceb0adf96c582bb6e047b3d0f</stateKey>
      <systemID>----</systemID>
      <authBundle>
        <password>-----</password>
        <sealed>false</sealed>
        <username>---</username>
      </authBundle>
    </ns1:logon>
  </soapenv:Body>
</soapenv:Envelope>

HTTP/1.1 200 OK
Content-Type: text/xml; charset=UTF-8
Content-Language: en-US
Transfer-Encoding: chunked
Date: Tue, 04 May 2010 09:40:21 GMT
Server: WebSphere Application Server/7.0

<?xml version='1.0' encoding='UTF-8'?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <dlwmin:logonResponse xmlns:dlwmin="http://wsapi.eci.ibm.com">
      <return>e0e40cc51ceb0adf96c582bb6e047b3d10</return>
    </dlwmin:logonResponse>
  </soapenv:Body>
</soapenv:Envelope>

... goes on with other calls
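Not an authoritative fix, but given that the only visible difference between the two messages is the WS-Addressing header block that appears once the facade runs inside Axis2 (where the addressing module is typically engaged globally via axis2.xml), one thing worth trying is to explicitly suppress addressing for outgoing messages on the generated stub's service client. This is a sketch under those assumptions; the stub constructor and the Axis2 addressing constant should be verified against your Axis2 1.4/1.5 installation:

import org.apache.axis2.AxisFault;
import org.apache.axis2.addressing.AddressingConstants;
import org.apache.axis2.client.Options;
import org.apache.axis2.client.ServiceClient;

public class StubFactory {

    /** Create the third-party stub with WS-Addressing suppressed on outgoing messages. */
    public static com.ibm.eci.wsapi.WebServicesAPIStub newStub(String endpoint) throws AxisFault {
        com.ibm.eci.wsapi.WebServicesAPIStub stub = new com.ibm.eci.wsapi.WebServicesAPIStub(endpoint);
        ServiceClient serviceClient = stub._getServiceClient();
        Options options = serviceClient.getOptions();
        // Prevents the wsa:* header block (To, MessageID, Action) from being added
        // to outbound messages even when the addressing module is engaged globally.
        options.setProperty(AddressingConstants.DISABLE_ADDRESSING_FOR_OUT_MESSAGES, Boolean.TRUE);
        return stub;
    }
}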

    Read the article

  • Privilege Elevation only when and if required.

    - by Cameron Peters
My application only very occasionally requires privilege elevation... I need to reference some 3rd-party COM components that only work correctly when run as administrator. I would like my application to request privilege elevation only when it needs it... Generally, I don't want my application to run as administrator unless I need to use the 3rd-party COM components. I see that CoCreateAsAdmin could potentially solve the problem, but the component author doesn't set up the required registry entries, and I'm not sure how to use CoCreateAsAdmin in C# and in conjunction with the Runtime-Callable Wrapper that is created by tlbimp. Another solution would be to spawn another process, but I have no experience with this yet... I don't want to create a completely separate application... I would be happy to create an assembly that runs in a separate elevated process if someone can show me how to make it work. Thanks...

    Read the article

< Previous Page | 23 24 25 26 27 28 29 30 31 32 33 34  | Next Page >