Search Results

Search found 18419 results on 737 pages for 'oracle bi publisher enterprise'.


  • The Faces in the Crowdsourcing

    - by Applications User Experience
    By Jeff Sauro, Principal Usability Engineer, Oracle

    Imagine having access to a global workforce of hundreds of thousands of people who can perform tasks or provide feedback on a design almost immediately. Distributing simple tasks not easily done by computers to the masses is called "crowdsourcing," and until recently it was an interesting concept that practical constraints kept from being widely used.

    Enter Amazon.com. For five years, Amazon has hosted a service called Mechanical Turk, which provides an easy interface to the crowds. The service has almost half a million registered global users performing a quarter of a million human intelligence tasks (HITs). HITs are submitted by individuals and companies in the U.S. and pay from $.01 for simple tasks (such as determining if a picture is offensive) to several dollars (for tasks like transcribing audio). What do we know about the people who toil away in this digital crowd? Can we rely on the work done in this anonymous marketplace?

    A rendering of the actual Mechanical Turk (from Wikipedia)

    Knowing who is behind Amazon's Mechanical Turk is fitting, considering the history of the actual Mechanical Turk. In the late 1700s, a mechanical chess-playing machine awed crowds as it beat master chess players in what was thought to be a mechanical miracle. It turned out that the creator, Wolfgang von Kempelen, had a small person (also a chess master) hiding inside the machine, operating the arms to provide the illusion of automation.

    The field of human-computer interaction (HCI) is quite familiar with gathering user input and incorporating it into all stages of the design process. It makes sense, then, that Mechanical Turk was a popular discussion topic at the recent Computer-Human Interaction (CHI) usability conference sponsored by the Association for Computing Machinery in Atlanta. It is already being used as a source of input on Web sites (for example, Feedbackarmy.com) and in behavioral research studies. Two papers shed some light on the faces in this crowd. One tells us about the shifting demographics from mostly stay-at-home moms to young men in India. The second discusses the reliability and quality of the workers' output.

    Just who exactly would spend time doing tasks for pennies? In "Who are the crowdworkers?" University of California researchers Ross, Irani, Silberman, Zaldivar and Tomlinson conducted a survey of Mechanical Turk worker demographics and compared it to a similar survey done two years before. The initial survey reported workers consisting largely of young, well-educated women living in the U.S. with annual household incomes above $40,000. The more recent survey reveals a shift in demographics, largely driven by an influx of workers from India. Indian workers went from 5% to over 30% of the crowd; this group is largely male (two-thirds), has a higher average education than U.S. workers, and 64% report an annual income of less than $10,000 (keeping in mind that $1 has a lot more purchasing power in India). This shifting demographic certainly has implications, as language and culture can play critical roles in the outcome of HITs. Of course, the demographic data came from paying Turkers $.10 to fill out a survey, so there is some question of self-selection bias (the characteristics that lead Turkers to take this survey may be unrepresentative of the larger population), not to mention whether we can really trust the data we get from the crowd.

    Crowds can perform tasks or provide feedback on a design quickly and almost immediately for usability testing. (Photo attributed to victoriapeckham, Flickr)

    While having immediate access to a global workforce is nice, one major problem with Mechanical Turk is the incentive structure. Individuals and companies that deploy HITs want quality responses for a low price. Workers, on the other hand, want to complete the task and get paid as quickly as possible so that they can get on to the next task. Since many HITs on Mechanical Turk are surveys, how valid and reliable are those results? How do we know whether workers are just rushing through the multiple-choice responses, answering haphazardly?

    In "Are your participants gaming the system?" researchers at Carnegie Mellon (Downs, Holbrook, Sheng and Cranor) set up an experiment to find out what percentage of their workers were just in it for the money. The authors set up a 30-minute HIT (one of the lengthier ones for Mechanical Turk) and offered a very high $4 to those who qualified and $.20 to those who did not. As part of the HIT, workers were asked to read an email and respond to two questions that determined whether workers were likely rushing through the HIT and not answering conscientiously. One question was simple and took little effort, while the second question required a bit more work to find the answer. Workers were led to believe other factors than these two questions were the qualifying aspect of the HIT.

    Of the 2,000 participants, roughly 1,200 (or 61%) answered both questions correctly. Eighty-eight percent answered the easy question correctly, and 64% answered the difficult question correctly. In other words, about 12% of the crowd were gaming the system, not paying enough attention to the question or making careless errors, and up to about 40% won't put in more than a modest effort to get paid for a HIT. Young men, and those who considered themselves to be in the financial industry, tended to be the most likely to try to game the system. There wasn't a breakdown by country, but given the demographic information from the first article, we can infer that many of these young men come from India, which makes language and other cultural differences a factor.

    These articles raise questions about the role of crowdsourcing as a means of getting quick user input at low cost. While compensating users for their time is nothing new, the incentive structure and anonymity of Mechanical Turk raise some interesting questions. How complex a task can we ask of the crowd, and how much should these workers be paid? Can we rely on the information we get from these professional users, and if so, how can we best incorporate it into designing more usable products?

    Traditional usability testing will still play a central role in enterprise software. Crowdsourcing doesn't replace testing; instead, it makes certain parts of gathering user feedback easier. One can turn to the crowd for simple tasks that don't require specialized skills and get a lot of data fast. As more studies are conducted on Mechanical Turk, I suspect we will see crowdsourcing playing an increasing role in human-computer interaction and enterprise computing.

    References:

    Downs, J. S., Holbrook, M. B., Sheng, S., and Cranor, L. F. 2010. Are your participants gaming the system? Screening Mechanical Turk workers. In Proceedings of the 28th International Conference on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10-15, 2010). CHI '10. ACM, New York, NY, 2399-2402. http://doi.acm.org/10.1145/1753326.1753688

    Ross, J., Irani, L., Silberman, M. S., Zaldivar, A., and Tomlinson, B. 2010. Who are the crowdworkers? Shifting demographics in Mechanical Turk. In Proceedings of the 28th International Conference Extended Abstracts on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10-15, 2010). CHI EA '10. ACM, New York, NY, 2863-2872. http://doi.acm.org/10.1145/1753846.1753873

    Read the article

  • Defining Your Online Segmentation and Targeting Strategy

    - by Christie Flanagan
    A lot of times, companies will put online segmentation and targeting on the back burner because they don't know where to start. Often, I've heard web managers say that their segments aren't well understood yet, so they can't really deliver personalized online experiences that are meaningful. This lack of complete understanding means that they don't really bother to try. But I don't think you necessarily need to have an elaborate segmentation and targeting strategy already in place to start delivering a more relevant online customer experience. Sometimes it helps to think of how segmentation and targeting might solve some of the challenges your site's visitors are currently experiencing on your web presence, rather than doing nothing and waiting until a fully baked segmentation strategy lands in your inbox.

    For example, perhaps you have a broad and varied service offering that makes it difficult for site visitors to easily find the solutions that are most relevant for them. How can segmentation and targeting help solve this problem? Or maybe it's like the airline I described in Monday's post, where the special deals featured on the home page are only relevant to site visitors from a couple of cities. Couldn't segmentation and targeting help them highlight offers on their home page that are relevant to a larger share of their site visitors?

    Your early segmentation and targeting efforts do not need to be complicated. There are simple ways to start delivering a more relevant online customer experience, even if you're dealing with anonymous site visitors. These include targeting content to site visitors based on:

    Referral: Deliver targeted content to your site visitors based on where they came from or the search term they used to find your site.
    Behavior: Deliver content to your site visitors that is related or similar to content they've clicked on already.
    Location: Deliver content to your site visitors that is most relevant for their geographic location (this would solve that pesky airline home page problem described above).

    (A short code sketch at the end of this post illustrates the idea.)

    So as you can see, there really are some very simple ways in which you can start improving your online customer experience using very basic segmentation and targeting methods. One thing to keep in mind as you start to define your segmentation and targeting strategy is that there are many different types of attributes, or combinations of attributes, upon which you can base it. In addition to referral, behavior and location, other attributes that you should consider are:

    Profile Information: What profile information do you know about this customer already? Perhaps they provided some information on their interests and preferences when they first registered with your site.
    Time: What time is it, and how does that impact what my site visitors are looking for or trying to do?
    Demographics: What are my site visitors' ages, incomes or ethnicities?

    Which attributes you select to include in your segmentation strategy will depend on your unique business needs and objectives. Attributes such as behavior or referral may not be the most important targeting criteria depending on your situation. For example, if you're a newspaper you might know that certain visitors are sports fans based on their profile information. You can create a segment for sports fans and target sports-related content to that segment of your readership online. Or perhaps a reader is browsing stories that are related to politics; you can use that visitor's behavior to assign him or her to a segment for those interested in politics. From there you can recommend more stories to that visitor based on their interest in politics. For an airline, the visitor's location may be a more important attribute. By detecting the visitor's location, you can assign them to an appropriate segment and then target special flights and offers to them based on their likely departure airport.

    As you can see, there are many practical ways that you can start improving the experience your customers receive on your web presence using fairly basic segmentation and targeting techniques. If you want to learn more about segmentation and targeting using Oracle's web experience management solution, check out this helpful video that demonstrates these powerful capabilities in Oracle WebCenter Sites.

    *****

    On Demand Webcast Featuring Brian Solis of Altimeter Group

    Trends such as the mobile web, social media, gamification and real-time are changing customer behavior and expectations. In this new environment, many businesses will struggle. Some will fall by the wayside, while others learn to adapt and thrive. Watch this on-demand webcast with Altimeter Group digital analyst and author Brian Solis, and discover what your organization needs to know about how to compete in the new era of Digital Darwinism. View now.
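    Here is that sketch: a minimal, hypothetical illustration of rule-based targeting for an anonymous visitor. The rules and segment names are made up for illustration, and this is plain Java, not the Oracle WebCenter Sites API:

        public class TargetingSketch {

            // Choose a content segment for an anonymous visitor using only
            // request-level attributes: referral, observed behavior, location.
            static String chooseSegment(String searchTerm, String lastViewedCategory, String city) {
                if (searchTerm != null && searchTerm.contains("pricing")) {
                    return "solution-pricing";   // referral-based targeting
                }
                if ("politics".equals(lastViewedCategory)) {
                    return "politics-readers";   // behavior-based targeting
                }
                if ("Chicago".equals(city)) {
                    return "chicago-departures"; // location-based targeting
                }
                return "general-audience";       // default experience
            }

            public static void main(String[] args) {
                // A visitor who has been reading politics stories:
                System.out.println(chooseSegment(null, "politics", "Boston")); // politics-readers
            }
        }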

    Read the article

  • Sales & Operations Planning in the Cloud (Value Chain Planning) with JD Edwards

    - by Hartmut Wiese
    AVATA, a US-based Oracle Partner with its EMEA headquarters in Germany, is offering a pre-integrated, cloud-based integration with JD Edwards. It is a Sales & Operations Planning hub that enables companies to plan seamlessly across the entire organization via a dynamic, continuous and collaborative web-based Sales and Operations Planning process. A datasheet has been uploaded to the EMEA JD Edwards Partner Community workspace here; it explains options and benefits and includes contact details as well. You need to be a member of this community to access the workspace. Please register here.

    Read the article

  • Berkeley DB Java Edition 4.0.103 Available

    - by charles.lamb
    We'd like to let you know that JE 4.0.103 is now available at http://www.oracle.com/technology/software/products/berkeley-db/je/index.html. The patch release contains both small features and bug fixes, many of which were prompted by feedback on this forum. Some items to note:

    New CacheMode values for more control over cache policies, and new statistics to enable better interpretation of caching behavior. These are just one initial part of our continuing work to make JE caching more efficient.

    Fixes for proper cache utilization calculations when using the -XX:+UseCompressedOops JVM option.

    A variety of other bug fixes.

    There are no file format or API changes. As always, we encourage users to move promptly to this new release.
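    For illustration, here is a minimal sketch of using a per-cursor cache mode during a bulk scan (the database handle is assumed to be open already; check the JE 4.0 Javadoc for the exact CacheMode values available in this release):

        import com.sleepycat.je.CacheMode;
        import com.sleepycat.je.Cursor;
        import com.sleepycat.je.Database;
        import com.sleepycat.je.DatabaseEntry;
        import com.sleepycat.je.OperationStatus;

        public class ScanWithCacheMode {

            // Scan all records, asking JE to evict each leaf node after use
            // so the bulk scan does not flush hot data from the cache.
            static void scan(Database db) {
                Cursor cursor = db.openCursor(null, null);
                try {
                    cursor.setCacheMode(CacheMode.EVICT_LN);
                    DatabaseEntry key = new DatabaseEntry();
                    DatabaseEntry data = new DatabaseEntry();
                    while (cursor.getNext(key, data, null) == OperationStatus.SUCCESS) {
                        // process the record
                    }
                } finally {
                    cursor.close();
                }
            }
        }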

    Read the article

  • Two Sun Certification Exams To Retire August 1, 2010

    - by Paul Sorensen
    Effective August 1, 2010, Exam CX-310-400 ("Sun Certified Integrator for Identity Manager 7.1"), currently part of the "Sun Certified Integrator for Identity Manager 7.1" certification track, will be retired. We will also retire Exam CX-310-502 ("Sun Certified Java CAPS Integrator"), currently within the "Sun Certified Java CAPS Integrator" certification track. Both exams will remain available for registration and testing at Prometric Testing Centers through July 31, 2010.

    CREDENTIAL VALIDITY
    Please note that these credentials remain valid indefinitely for those holding the certifications. These retirements therefore have no effect on those who complete the certification requirements before August 1, 2010.

    QUICK LINKS
    Retiring Exams:
    Exam CX-310-400 "Sun Certified Integrator for Identity Manager 7.1"
    Exam CX-310-502 "Sun Certified Java CAPS Integrator"
    Certification Tracks:
    Sun Certified Integrator for Identity Manager 7.1
    Sun Certified Java CAPS Integrator
    Learn more: Oracle Certification Retirements

    Read the article

  • OPN SPECIALIZED Webcasts

    - by Claudia Costa
    OPN Specialized Webcast Series for Partners. For the EMEA region the webcasts start at 11:00 CET / 10:00 GMT. Each training session will run for approximately one hour and include live Q&A.

    "How to become Specialized in the Applications products portfolio," 25th May 2010, 11:00 CET / 10:00 GMT. Click here for more information & registration.

    "How to become an OPN Specialized Reseller of Oracle's Sun SPARC Servers, Storage, Software and Services," 1st June 2010, 11:00 CET / 10:00 GMT. Click here for more information & registration.

    Read the article

  • Blog...Dead & New Address

    - by Derek Comingore
    Hi Folks! For a while I was attempting (probably the correct word) to keep both this blog and my SQL Server Magazine SQL Server BI blog current. Working on growing B.I. Voyage, speaking, writing, and blogging is quite the workload, and so this blog suffered as a result. THANK YOU to all who read my blog here on SQLTeam.com; I intend to leave it live for those who might benefit from its content at a later date. My consolidated, single blog can be found at http://www.sqlmag.com/blogs/sqlserverbi.aspx. Cheers, Derek

    Read the article

  • Visio Stencils for ADF Faces Available on SampleCode

    - by juan.ruiz
    We are pleased to announce the launch of a new tool that aims to help visual designers and UI developers build mockups for ADF applications outside of JDeveloper quickly and easily. Available through samplecode.oracle.com, the ADF Visio stencils provide you with a large set of shapes, icons and layouts representing the most commonly used visual components. You can check out this video that shows you how to get started using the Visio stencils. Let us know what you think about the stencils on the JDeveloper Forum; your feedback is always important to us.

    Read the article

  • How can these files be accessed?

    - by harsh.singla
    The files can be accessed from every artifact of the composite, such as .bpel, .mplan, .task, .xsl, .wsdl, etc. The 'oramds' protocol is used to access these files. You need to set up your adf-config.xml file in your dev environment or JDeveloper to access these files from MDS; a sample adf-config.xml is sketched below. This adf-config.xml is located in a directory named .adf/META-INF, which is in the application home of your project. The application home is the directory where the .jws file of your application exists. Other than setting up this file, you need not make any other changes in your project or composite to access MDS. After setting this up, you can create a new SOA-MDS connection in your JDev. This gives you a resource palette in which you can browse and choose the required file from MDS.
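    Only a few attributes of the original sample survived in this post (jdbc-url, metadata-path, and a credentialStoreLocation of ../../src/META-INF/jps-config.xml), so the following is a reconstructed sketch of a typical adf-config.xml for MDS access; element names and structure may differ in your version:

        <adf-config xmlns="http://xmlns.oracle.com/adf/config"
                    xmlns:sec="http://xmlns.oracle.com/adf/security/config">
          <adf-mds-config xmlns="http://xmlns.oracle.com/adf/mds/config">
            <mds-config xmlns="http://xmlns.oracle.com/mds/config">
              <persistence-config>
                <metadata-store-usages>
                  <metadata-store-usage id="mstore-usage_1">
                    <metadata-store class-name="oracle.mds.persistence.stores.db.DBMetadataStore">
                      <!-- reconstructed: point these at your MDS schema and path -->
                      <property name="jdbc-url" value="jdbc:oracle:thin:@host:1521:orcl"/>
                      <property name="metadata-path" value="/soa/shared"/>
                    </metadata-store>
                  </metadata-store-usage>
                </metadata-store-usages>
              </persistence-config>
            </mds-config>
          </adf-mds-config>
          <sec:adf-security-child xmlns="http://xmlns.oracle.com/adf/security/config">
            <CredentialStoreContext
                credentialStoreClass="oracle.adf.share.security.providers.jps.CSFCredentialStore"
                credentialStoreLocation="../../src/META-INF/jps-config.xml"/>
          </sec:adf-security-child>
        </adf-config>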

    Read the article

  • Hands-On Course Requirement: Applicability

    - by Paul Sorensen
    Hi Everyone,

    Recently there has been some confusion regarding the need for partner-related candidates to meet the hands-on course requirement for tracks that require it. I thought it would be wise to clarify this point on the blog. A recurring question that we have been getting is:

    Question: Do partner candidates need to meet a hands-on course requirement?
    Answer: Yes. In order for a person to become certified they must meet the certification requirements for the track that they are pursuing (as listed on the Oracle certification web site). Regardless of candidate type - partner, customer, employee, etc. - all of the certification requirements for a track must be met. Partner-related certification candidates must therefore meet the hands-on course requirement if the track requires it (i.e. they are not exempt from any of the requirements for any given certification).

    Thanks!

    QUICK LINKS
    Oracle certification web site
    Hands-on Course Attendance Requirements

    Read the article

  • Using Transaction Logging to Recover Post-Archived Essbase data

    - by Keith Rosenthal
    Data recovery is typically performed by restoring data from an archive. Data added or removed since the last archive took place can also be recovered by enabling transaction logging in Essbase. Transaction logging works by writing transactions to a log store. The information in the log store can then be recovered by replaying the log store entries in sequence since the last archive took place. The following information is recorded within a transaction log entry:

    Sequence ID
    Username
    Start Time
    End Time
    Request Type

    A request type can be one of the following categories:

    Calculations, including the default calculation as well as both server and client side calculations
    Data loads, including data imports as well as data loaded using a load rule
    Data clears as well as outline resets
    Locking and sending data from SmartView and the Spreadsheet Add-In. Changes from Planning web forms are also tracked since a lock and send operation occurs during this process.

    You can use the Display Transactions command in the EAS console or the query database MAXL command to view the transaction log entries.

    Enabling Transaction Logging

    Transaction logging can be enabled at the Essbase server, application or database level by adding the TRANSACTIONLOGLOCATION essbase.cfg setting. The following is the TRANSACTIONLOGLOCATION syntax:

    TRANSACTIONLOGLOCATION [appname [dbname]] LOGLOCATION NATIVE ENABLE | DISABLE

    Note that you can have multiple TRANSACTIONLOGLOCATION entries in the essbase.cfg file. For example:

    TRANSACTIONLOGLOCATION Hyperion/trlog NATIVE ENABLE
    TRANSACTIONLOGLOCATION Sample Hyperion/trlog NATIVE DISABLE

    The first statement will enable transaction logging for all Essbase applications, and the second statement will disable transaction logging for the Sample application. As a result, transaction logging will be enabled for all applications except the Sample application.

    A location on a physical disk other than the disk where ARBORPATH or the disk files reside is recommended to optimize overall Essbase performance.

    Configuring Transaction Log Replay

    Although transaction log entries are stored based on the LOGLOCATION parameter of the TRANSACTIONLOGLOCATION essbase.cfg setting, copies of data load and rules files are stored in the ARBORPATH/app/appname/dbname/Replay directory to optimize the performance of replaying logged transactions. The default is to archive client data loads, but this configuration setting can be used to archive server data loads (including SQL server data loads) or both client and server data loads.

    To change the type of data to be archived, add the TRANSACTIONLOGDATALOADARCHIVE configuration setting to the essbase.cfg file. Note that you can have multiple TRANSACTIONLOGDATALOADARCHIVE entries in the essbase.cfg file to adjust settings for individual applications and databases.

    Replaying the Transaction Log and Transaction Log Security Considerations

    To replay the transactions, use either the Replay Transactions command in the EAS console or the alter database MAXL command using the replay transactions grammar. Transactions can be replayed either after a specified log time or using a range of transaction sequence IDs.

    The default when replaying transactions is to use the security settings of the user who originally performed the transaction. However, if that user no longer exists or that user's username was changed, the replay operation will fail.
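    For illustration, the MAXL forms of the viewing and replay commands described above might look like the following (a sketch based on the documented grammar; verify the exact syntax against the Essbase Technical Reference for your release):

        /* list logged transactions since a point in time */
        query database Sample.Basic list transactions after '11_20_2012:12:00:00';

        /* replay everything logged after that time */
        alter database Sample.Basic replay transactions after '11_20_2012:12:00:00';

        /* or replay a specific range of sequence IDs */
        alter database Sample.Basic replay transactions using sequence_id_range 1 to 20;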
    To avoid the failure described above, instead of using the default security setting, add the REPLAYSECURITYOPTION essbase.cfg setting to use the security settings of the administrator who performs the replay operation. REPLAYSECURITYOPTION 2 will explicitly use the security settings of the administrator performing the replay operation. REPLAYSECURITYOPTION 3 will use the administrator security settings if the original user's security settings cannot be used.

    Removing Transaction Logs and Archived Replay Data Load and Rules Files

    Transaction logs and archived replay data load and rules files are not removed automatically; they can only be removed manually. Since these files can consume a considerable amount of space, they should be removed on a periodic basis.

    The transaction logs should be removed one database at a time instead of all databases simultaneously. The data load and rules files associated with the replayed transactions should be removed in chronological order from earliest to latest. In addition, do not remove any data load and rules files with a timestamp later than the timestamp of the most recent archive file.

    Partitioned Database Considerations

    For partitioned databases, partition commands such as synchronization commands cannot be replayed. When recovering data, the partition changes must be replayed manually and logged transactions must be replayed in the correct chronological order. If the partitioned database includes any @XREF commands in the calc script, the logged transactions must be selectively replayed in the correct chronological order between the source and target databases.

    References

    For additional information, please see the Oracle EPM System Backup and Recovery Guide. For EPM 11.1.2.2, the link is http://docs.oracle.com/cd/E17236_01/epm.1112/epm_backup_recovery_1112200.pdf

    Read the article

  • SOA 11g Technology Adapters – ECID Propagation

    - by Greg Mally
    Overview

    Many SOA Suite 11g deployments include the use of the technology adapters for various activities, including integration with FTP, database, and files to name a few. Although the integrations with these adapters are easy and feature rich, there can be some challenges from the operations perspective. One of these challenges is how to correlate a logical business transaction across SOA component instances. This correlation is typically accomplished via the execution context ID (ECID), but we lose the ECID correlation when the business transaction spans technologies like FTP, database, and files.

    A new feature has been introduced in the Oracle adapter JCA framework to allow the propagation of the ECID. This feature is available in the forthcoming SOA Suite 11.1.1.7 (PS6). The basic concept of propagating the ECID is to identify somewhere in the payload of the message where the ECID can be stored. Then two Binding Properties, relating to the location of the ECID in the message, are added to either the Exposed Service (left-hand side of the composite) or External Reference (right-hand side of the composite). This gives the JCA framework enough information to either extract the ECID from or add the ECID to the message. In the scenario of extracting the ECID from the message, the ECID will be used for the new component instance.

    Where to Put the ECID

    When trying to determine where to store the ECID in the message, you basically have two options:

    1. Add a new optional element to your message schema.
    2. Leverage an existing element that is not used in your schema.

    The best scenario is that you are able to add the optional element to your message, since trying to find an unused element will prove difficult in most situations. The schema will be holding the ECID value, which looks something like the following:

    11d1def534ea1be0:7ae4cac3:13b4455735c:-8000-00000000000002dc

    Configuring Composite Services/References

    Now that you have identified where you want the ECID to be stored in the message, the JCA framework needs to have this information as well. The two pieces of information that the framework needs relate to the message schema:

    1. The namespace for the element in the message.
    2. The XPath to the element in the message.

    To better understand this, let's look at an example for a database table. When an Exposed Service is created via the Database Adapter Wizard in the composite, a schema is generated for it. For this example, the two Binding Properties we add to the ReadRow service in the composite are:

    <!-- Properties for the binding to propagate the ECID from the database table -->
    <property name="jca.ecid.nslist" type="xs:string" many="false">
      xmlns:ns1="http://xmlns.oracle.com/pcbpel/adapter/db/top/ReadRow"</property>
    <property name="jca.ecid.xpath" type="xs:string" many="false">
      /ns1:EcidPropagationCollection/ns1:EcidPropagation/ns1:ecid</property>

    Notice that the property called jca.ecid.nslist contains the targetNamespace defined in the schema, and the property called jca.ecid.xpath contains the XPath statement to the element. The XPath statement also contains the appropriate namespace prefix (ns1), which is defined in the jca.ecid.nslist property. When the Database Adapter service reads a row from the database, it retrieves the ECID value from the payload and removes the element from the payload. When the component instance is created, it is associated with the retrieved ECID, and the payload contains everything except the ECID element/value.
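    For reference, the optional ecid element described above might be declared in the generated schema along these lines (a sketch consistent with the XPath shown; the actual generated schema is not reproduced here):

        <xs:element name="EcidPropagation">
          <xs:complexType>
            <xs:sequence>
              <!-- columns generated by the Database Adapter Wizard would appear here -->
              <xs:element name="ecid" type="xs:string" minOccurs="0"/>
            </xs:sequence>
          </xs:complexType>
        </xs:element>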
    The only time the ECID is visible is when it is stored safely in the resource technology, like the database, a file, or a queue.

    Simple Database/File/JMS Example

    This section contains a simplified example of how the ECID can propagate through a database table, a file, and a JMS queue. The flow of the example composite is as follows:

    1. Invoke the database insert using the insertwithecidbpelprocess_client_ep Service.
    2. The InsertWithECIDBPELProcess adds a row to the database via the Database Adapter. The JCA framework adds the ECID to the message prior to inserting.
    3. The ReadRow Service retrieves the record and the JCA framework extracts the ECID from the message. The ECID element is removed from the message.
    4. An instance of ReadRowBPELProcess is created and associated with the retrieved ECID.
    5. The ReadRowBPELProcess writes the record to the file system via the File Adapter. The JCA framework adds the ECID to the message prior to writing the message to file.
    6. The ReadFile Service retrieves the record from the file system and the JCA framework extracts the ECID from the message. The ECID element is removed from the message.
    7. An instance of ReadFileBPELProcess is created and associated with the retrieved ECID.
    8. The ReadFileBPELProcess enqueues the message via the JMS Adapter. The JCA framework adds the ECID to the message prior to enqueuing the message.
    9. The DequeueMessage Service retrieves the record and the JCA framework extracts the ECID from the message. The ECID element is removed from the message.
    10. An instance of DequeueMessageBPELProcess is created and associated with the retrieved ECID.
    11. The logical flow ends.

    When viewing the Flow Trace in the Enterprise Manager, you will now see all the instances correlated via ECID.

    Please check back here when SOA Suite 11.1.1.7 is released for this example. With the example you can run it yourself and reinforce what has been shared in this blog via a hands-on experience. One final note: the contents of this blog may be included in the official SOA Suite 11.1.1.7 documentation, but you will still need to come here to get the example.

    Read the article

  • Managing the Transition to IFRS

    As countries around the world announce and begin their move to adopting IFRS, what can companies learn from those that have already travelled this path? Nigel Youell, Product Marketing Director for Performance Management Applications at Oracle, talks to David Jones, Director at PwC, who has worked with multi-national companies across Europe, helping them to make this transition and to improve their financial reporting in the process. This podcast offers those who have not yet started, or are currently undertaking, the IFRS journey the chance to learn from David's considerable experience of how to make IFRS an opportunity for improvement rather than just an enforced change.

    Read the article

  • Tab Sweep: HTML5 Attributes, MDB, JasperReports, Delphi, Security, JDBCRealm, Joomla, ...

    - by arungupta
    Recent Tips and News on Java, Java EE 6, GlassFish & more:

    • JMS and MDB in Glassfish for 20 minutes (nik_code)
    • Installing Java EE 6 SDK with Glassfish on a headless system (jvmhost)
    • JSF + JPA + JasperReports (iReport) Part 2 (Rama krishnnan E P)
    • Serving Static Content on WebLogic and GlassFish (cdivilly)
    • Whats the problem with JSF? A rant on wrong marketing arguments (Über Thomas Asel)
    • JPA 2.1 will support CDI Injection in EntityListener - in Java EE 7 (Craig Ringer)
    • Java Delphi integration with Glassfish JMS OpenMQ (J4SOFT)
    • Java EE Security using JDBCRealm Part1 (acoustic091409)
    • Adding HTML5 attributes to standard JSF components (Bauke Scholtz)
    • Configuring SAS 9.1 to Use Java 5 or above on Windows (Java EE Tips)
    • Inject Java Properties in Java EE Using CDI (Piotr Nowicki)
    • NoClassDefFoundError in Java EE Applications - Part 2 (Java Code Geeks)
    • NoClassDefFoundError in Java EE Applications - Part 1 (Java Code Geeks)
    • EJB 3 application in Glassfish 3x (Anirban Chowdhury)
    • How To Install Mobile Server 11G With GlassFish Server 3.1 (Oracle Support)
    • Joomla on GlassFish (Survivant)

    Read the article

  • Chicago Architects Group – Document Generation Architectures

    - by Tim Murphy
    Thank you to everyone who came out to the Chicago Architects Group presentation last night. It seemed like the weather had a way of keeping a large portion of the people who registered from making the meeting. There was some lively networking going on before and after the meeting. I enjoyed the questions that people had during the presentation. They helped to bring out some of the challenges of dealing with the OOXML and ODF standards from an architecture perspective.

    I have posted the Slides and Code. Feel free to contact me with any questions. For those of you who missed the presentation, I will be giving a similar one at the Lake County .NET Users Group on June 24th.

    The next CAG presentation will be July 20th. The presentation will be Architecting a BI Installation by David Leininger. Look for the registration to open in the next day or so.

    Read the article

  • EPM Architecture: Foundation

    - by Marc Schumacher
    This post is the first in a series that describes the EPM System architecture component by component. During the following weeks a couple of follow-up posts will describe each component. If applicable, each component will have its standard port next to its name in brackets.

    EPM Foundation is Java based and consists of two web applications, Shared Services and Workspace. Both applications are accessed by browser through Oracle HTTP Server (OHS) or Internet Information Services (IIS). Communication to the backend database is done by JDBC. The file system used to store Lifecycle Management (LCM) artifacts can be either local or remote (e.g. NFS, network share). For authentication purposes, the EPM product suite can connect to external directories or databases. Interaction with other EPM Suite components, like product-specific Lifecycle Management connectors or Reporting and Analysis Web, happens over HTTP. The next post will cover Reporting and Analysis.

    Read the article

  • NGN/NLUUG Spring 2012 Conference: Operating Systems

    - by nospam(at)example.com (Joerg Moellenkamp)
    On April 11th, 2012, the Spring 2012 conference with the overarching topic "Operating Systems" takes place in Nieuwegein near Utrecht. Besides talks about Linux, Windows and AIX, there will be a track about Solaris. I will be the first speaker in the Solaris track, giving an overview of Solaris 11 and how its features interact. Later on, renowned experts like Detlef Drewanz ("Lifecycle Management with Oracle Solaris 11"), Andrew Gabriel ("Solaris 11 Networking - Crossbow Project"), Darren Moffat ("ZFS: Data Integrity and Security") and Casper Dik ("Solaris 11 Zones and Immutable Zones") will take over. Finally, Patrick Ale of UPC Broadband talks about his experiences with Solaris 11. If you want more information about this conference or want to register for it, you will find the webpage of the event at the NLUUG site.

    Read the article

  • JCP Calendar for 2013 - First EC Meeting 15-16 January

    - by Heather VanCura
    The JCP 2013 calendar and EC Meeting schedule has now been finalized and published :-). This year the EC will be holding meetings in the San Francisco Bay Area in January, and also in September, scheduled around the JavaOne San Francisco conference; Credit Suisse will host the May EC Meeting in Zurich, Switzerland. The first JCP EC Meeting of 2013 will be a face-to-face meeting in Santa Clara, California, USA, hosted by Intel. We are in the midst of preparing the agenda now, but it will include a 2012 annual review, JSR Spec Lead presentations and updates, as well as a JSR 358 (A Major Revision of the Java Community Process) Expert Group session. You can view meeting materials and minutes from the JCP EC Meetings on JCP.org. The JCP also plans to meet with the Java User Group leaders attending the User Group Leaders Summit being held at Oracle's Redwood Shores location 14-16 January.

    Read the article

  • HTML5 for APEX Developers: Next-Generation Applications

    - by Carsten Czarski
    It is no accident that HTML5 is one of the most discussed topics in application development: it opens up completely new possibilities for designing web user interfaces. For example, HTML5 makes it possible to access the GPS of a mobile device - but that is by no means all: with SVG and CANVAS objects it becomes possible to draw freely on the browser surface, and the FileReader API allows files selected by the user to be read before they are even uploaded. These capabilities make it possible to develop completely new applications - applications of the next generation. In the webinar on November 8th, some of the possibilities of HTML5 were presented, along with how to use them in APEX applications. The slide decks and APEX sample applications are now available: https://apex.oracle.com/folien Keyword: apex-html5

    Read the article

  • Upgrade Workshop in Sydney - Recap

    - by Mike Dietrich
    Late, but hopefully not too late, a big THANK YOU to everybody who attended the Upgrade and Migration Workshop in Sydney at the Cliftons last week. You were a really good crowd; thanks for all your questions and the great conversations in the breaks, and thanks to the local marketing team for the excellent organization. We're looking forward to seeing you next time, with all your databases then live on Oracle Database 11.2.

    To download the slides, please find them in the Slides Download Center to your right - or use the direct link to download the workshop slide deck.

    And I really don't understand how you can go to daily work (or to a workshop) with such beaches nearby ... I would immediately change my job profile. Honestly, Sydney is really a great place. Australia and New Zealand generally are wonderful places and we've met so many great people in Perth, Brisbane, Melbourne, Wellington, Sydney and during our travels in between. If only there weren't over 20 hours of pure flight time between Germany and Down Under ... Hope to see you all again next time for 12c.

    -Mike

    Read the article

  • Oracle12c Is Here: New Features for Developers

    - by Carsten Czarski
    The wait is over: Oracle12c Release 1 is available for download. Oracle12c brings a number of new features for SQL, PL/SQL and APEX developers. SQL pattern matching, identity columns and code-based security are just three examples. In our current community tip we present 12 new features for developers - learn how you can develop even faster and more efficiently with Oracle12c:

    Automatic sequences and identity columns
    SQL and PL/SQL: extensions and improvements
    PL/SQL: privileges, roles and more
    Oracle Multitenant and APEX
    SQL pattern matching
    When is a row valid: Valid Time Temporal

    Our colleagues in the DBA Community have published a corresponding overview of the new features of interest for administrators and database operations.
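    As a quick illustration of one of these features, an identity column in Oracle12c can replace the manual sequence-plus-trigger pattern (a minimal sketch using a hypothetical table):

        CREATE TABLE orders (
          id     NUMBER GENERATED ALWAYS AS IDENTITY,
          placed DATE DEFAULT SYSDATE NOT NULL,
          CONSTRAINT orders_pk PRIMARY KEY (id)
        );

        -- id is generated automatically:
        INSERT INTO orders (placed) VALUES (SYSDATE);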

    Read the article

  • The Individual License to Success

    - by A&C Redaktion
    Who wants to be lumped in with everyone else? Our partners specialize in very different areas and work with a broad spectrum of customers, who in turn bring a multitude of particular needs. To match this diversity, Oracle offers a differentiated licensing model. Specifically for independent software vendors (ISVs), Senior Channel Manager Sven Jürgens, in conversation with Holger Pölzl, explains which form of licensing fits which endeavor. Besides the classic Full Use license, there are also considerably less expensive options, from Application Specific Full Use (ASFU) and Embedded Software Licensing (ESL) through to SaaS and hosting offerings. Which model is the right one is something the two of them prefer to decide in a direct conversation with the partner. Contact us: Sven Jürgens and Holger Pölzl.

    Read the article

  • Exalogic 2.0.1 Tea Break Snippets - Modifying the Default Shipped Template

    - by The Old Toxophilist
    Having installed your Exalogic virtual environment, by default you have a single template which can be used to create your vServers. Although this template is suitable for creating simple test or development vServers, it is recommended that you look at creating your own custom vServers that match the environment you wish to build and deploy. This Tea Break Snippet will therefore take you through the simple process of modifying the standard template.

    Before You Start

    To edit the template you will need the Oracle ModifyJeos utility, which can be downloaded from the eDelivery site. Once the ModifyJeos utility has been downloaded, we can install the rpms onto either an existing vServer or one of the Control vServers:

    rpm -ivh ovm-modify-jeos-1.0.1-10.el5.noarch.rpm
    rpm -ivh ovm-template-config-1.0.1-5.el5.noarch.rpm

    Alternatively you can install the modify jeos packages on a non-Exalogic OEL installation or a VirtualBox image. If you are doing this, assuming OEL 5u8, you will need the following rpms:

    rpm -ivh ovm-modify-jeos-1.0.1-10.el5.noarch.rpm
    rpm -ivh ovm-el5u2-xvm-jeos-1.0.1-5.el5.i386.rpm
    rpm -ivh ovm-template-config-1.0.1-5.el5.noarch.rpm

    Base Template

    If you have installed the modify tools onto a vServer running on the Exalogic, then simply mount /export/common/images from the ZFS storage and you will be able to find the el_x2-2_base_linux_guest_vm_template_2.0.1.1.0_64.tgz (or similar, depending which version you have) template file. Alternatively the latest can be downloaded from the eDelivery site.

    Now that we have the template tgz, we will need to extract it as follows:

    tar -zxvf el_x2-2_base_linux_guest_vm_template_2.0.1.1.0_64.tgz

    This will create a directory called BASE which will contain the System.img (vServer image) and vm.cfg (vServer config information). This directory should be renamed to something more meaningful that indicates what we have done to the template, and the simple name / name in the vm.cfg edited for the same reason.

    Modifying the Template

    Resizing the Root File System

    By default the shipped template has a root size of 4 GB, which will leave a vServer created from it running at 90% full on the root disk. We can simply resize the template by executing the following:

    modifyjeos -f System.img -T <New Size MB>

    For example, to increase the default 4 GB to 40 GB we would execute:

    modifyjeos -f System.img -T 40960

    Resizing Swap

    We can modify the size of the swap space within a template by executing the following:

    modifyjeos -f System.img -S <New Size MB>

    For example, to increase the swap from the default 512 MB to 4 GB we would execute:

    modifyjeos -f System.img -S 4096

    Changing RPMs

    Adding RPMs

    To add RPMs using modifyjeos, complete the following steps:

    1. Add the names of the new RPMs in a list file, such as addrpms.lst. In this file, you should list each new RPM on a separate line.
    2. Ensure that all of the new RPMs are in a single directory, such as rpms.
    3. Run the following command to add the new RPMs:

    modifyjeos -f System.img -a <path_to_addrpms.lst> -m <path_to_rpms> -nogpg

    Where <path_to_addrpms.lst> is the path to the location of the addrpms.lst file, and <path_to_rpms> is the path to the directory that contains the RPMs. The -nogpg option eliminates the signature check on the RPMs.

    Removing RPMs

    To remove RPMs using modifyjeos, complete the following steps:

    1. Add the names of the RPMs you want to remove in a list file, such as removerpms.lst. In this file, you should list each RPM on a separate line. The Oracle Exalogic Elastic Cloud Administrator's Guide provides a list of all RPMs that must not be removed from the vServer.
    2. Run the following command to remove the RPMs:

    modifyjeos -f System.img -e <path_to_removerpms.lst>

    Where <path_to_removerpms.lst> is the path to the location of the removerpms.lst file.

    Mounting the System.img

    For all other modifications that are not supported by the modifyjeos command (adding your custom yum repositories, preconfiguring NTP, modifying the default NFSv4 Nobody functionality, etc.) we can mount the System.img and access it directly. To facilitate quick and easy mounting/unmounting of the System.img I have put together the simple scripts below.

    MountSystemImg.sh

    #!/bin/sh
    # The script assumes it's being run from the directory containing the System.img
    # Export for later, i.e. during unmount
    export LOOP=`losetup -f`
    export SYSTEMIMG=/mnt/elsystem
    # Make temp mount directory
    mkdir -p $SYSTEMIMG
    # Create loop for the system image
    losetup $LOOP System.img
    kpartx -a $LOOP
    mount /dev/mapper/`basename $LOOP`p2 $SYSTEMIMG
    # Change dir into the mounted image
    cd $SYSTEMIMG

    UnmountSystemImg.sh

    #!/bin/sh
    # The script assumes it's being run from the directory containing the System.img
    # Assumes $LOOP & $SYSTEMIMG exist from a previous run of MountSystemImg.sh
    umount $SYSTEMIMG
    kpartx -d $LOOP
    losetup -d $LOOP

    Packaging the Template

    Once you have finished modifying the template, it can simply be repackaged and then imported into EMOC as described in "Exalogic 2.0.1 Tea Break Snippets - Importing Public Server Template". To do this we simply cd to the directory above the one containing the modified files and execute the following:

    tar -zcvf <New Template Name>.tgz <New Template Directory>

    The resulting .tgz file can be copied to the images directory on the ZFS and uploaded using the IB network.

    This entry was originally posted on The Old Toxophilist site.

    Read the article

  • NetBeans, JSF, and MySQL Primary Keys using AUTO_INCREMENT

    - by MarkH
    I recently had the opportunity to spin up a small web application using JSF and MySQL. Having developed JSF apps with Oracle Database back-ends before, and possessing some small familiarity with MySQL (sans JSF), I thought this would be a cakewalk. Things did go pretty smoothly... but there was one little "gotcha" that took more time than the few seconds it really warranted.

    The Problem

    Every DBMS has its own way of automatically generating primary keys, and each has its pros and cons. For the Oracle Database, you use a sequence and point your Java classes to it using annotations that look something like this:

    @GeneratedValue(strategy=GenerationType.SEQUENCE, generator="POC_ID_SEQ")
    @SequenceGenerator(name="POC_ID_SEQ", sequenceName="POC_ID_SEQ", allocationSize=1)

    Between creating the actual sequence in the database and making sure you have your annotations right (watch those typos!), it seems a bit cumbersome. But it typically "just works", without fuss.

    Enter MySQL. Designating an integer-based field as PRIMARY KEY and using the keyword AUTO_INCREMENT makes the same task seem much simpler. And it is, mostly. But while NetBeans cranks out a superb "first cut" for a basic JSF CRUD app, there are a couple of small things you'll need to bring to the mix in order to be able to actually (C)reate records. The (RUD) performs fine out of the gate.

    The Solution

    Omitting all design considerations and activity (!), here is the basic sequence of events I followed to create, then resolve, the JSF/MySQL "primary key perfect storm":

    1. Fire up NetBeans.
    2. Create a JSF project.
    3. Create Entity Classes from Database.
    4. Create JSF Pages from Entity Classes.
    5. Test run.
    6. Try to create a record and hit an error.

    It's a simple fix, but one that was fun to find in its completeness. :-) Even though you've told it what to do for a primary key, a MySQL table requires a gentle nudge to actually generate that new key value. Two things are needed to make the magic happen.

    First, you need to ensure the following annotation is in place in your Java entity classes:

    @GeneratedValue(strategy = GenerationType.IDENTITY)

    All well and good, but the real key is this: in your controller class(es), you'll have a create() function that looks something like the following, minus the comment line and the setId() call (shown in bold red type in the original post):

    public String create() {
        try {
            // Assign 0 to ID for MySQL to properly auto_increment the primary key.
            current.setId(0);
            getFacade().create(current);
            JsfUtil.addSuccessMessage(ResourceBundle.getBundle("/Bundle").getString("CategoryCreated"));
            return prepareCreate();
        } catch (Exception e) {
            JsfUtil.addErrorMessage(e, ResourceBundle.getBundle("/Bundle").getString("PersistenceErrorOccured"));
            return null;
        }
    }

    Setting the current object's primary key attribute to zero (0) prior to saving it tells MySQL to get the next available value and assign it to that record's key field. Short and simple... but not inherently obvious if you've never used that particular combination of NetBeans/JSF/MySQL before. Hope this helps!

    All the best,
    Mark
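    For context, the table behind such an entity might be defined like this (a hypothetical schema; the article does not show its actual table):

        CREATE TABLE category (
          id   INT NOT NULL AUTO_INCREMENT,
          name VARCHAR(80) NOT NULL,
          PRIMARY KEY (id)
        );

    By default, inserting a row with id set to 0 (or NULL) makes MySQL substitute the next available key value, which is exactly what the setId(0) call above relies on.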

    Read the article

  • Pro SharePoint 2010 Business Intelligence Solutions

    - by Sahil Malik
    Oh yeah baby, it's out finally! This book is what I wanted to write for so long now, but never really got a chance to. For SharePoint 2007, I authored the SharePoint section of "Smart BI Solutions with SQL Server 2008" for MS Press. But I never really got the time to author the full book that this topic deserved. With SharePoint 2010, we finally have a full book on the topic.

    So first things first: I didn't actually write it. My role was limited to the overall concept, the outline, the layout, the completion of it, code samples, identifying what we needed in here, vouching for technical accuracy, identifying authors, etc. The real work was done by Srini (5 chapters) and Steve (1 chapter). So credit given where it is due.

    With that said, this is a pretty good book. It has always been a challenge to find the superman who knows both data warehousing concepts and SharePoint concepts. The data warehousing concepts include the basics you need to know to work in the BI area, such as cubes, MDX queries, etc. Chapter 1 covers that - and if you're a hardcore DBA, feel free to skip Chapter 1. Beyond that, we take every single SharePoint 2010 BI topic and slice and dice it in detail. The topics we deal with are:

    Visio Services
    Reporting Services
    Business Connectivity Services
    Excel Services
    PerformancePoint Services

    In covering each of these topics, we ensured that a general layout was followed for each, to ensure completeness of content. We make sure we cover:

    Setup-related issues and advice
    Point-and-click usage
    Code usage, i.e. extensibility using Visual Studio
    A walkthrough of the administration side of things, including PowerShell (yes, I insisted on that being there in every chapter)

    Writing a book is always a lot of work, so we hope you find it useful. And it should go very well with the other book I just reviewed, which is Microsoft ADO.NET 4, Step by Step.

    Read the article
