Search Results

Search found 16971 results on 679 pages for 'blogs'.


  • Behavior Driven Development (BDD) and DevExpress XAF

    - by Patrick Liekhus
    So in my previous posts I showed you how I used EDMX to quickly build my business objects within XPO and XAF.  But how do you test whether your business objects are actually doing what you want and verify that your business logic is correct?  Well, I was reading my monthly MSDN magazine late last year and came across an article about using SpecFlow and WatiN to build BDD tests.  So why not use these same techniques to write SpecFlow style scripts and have them generate EasyTest scripts for use with XAF?  Let me outline and show a few things below.  I plan on releasing this code in a short while; I just wanted to preview what I was thinking.

    Before we begin…
    First, if you have not read the article in MSDN, here is the link to the article in which I found my inspiration.  It covers the overview of BDD vs. TDD, how to write some of the SpecFlow syntax, and how to use the “Steps” logic to create your own tests.
    Second, if you have not heard of EasyTest from DevExpress, I strongly recommend you review it here.  It basically takes the power of XAF and the beauty of your application and allows you to create text-based files to execute automated commands within your application.
    Why would we do this?  Because, as you will see below, the Cucumber syntax is easier for business analysts to interpret and digest business rules from.  You can find most of the information you will need on Cucumber syntax within The Secret Ninja Cucumber Scrolls located here.  The basics of the syntax are Given X When Y Then Z.  For example: Given I am at the login screen When I enter my login credentials Then I expect to see the home screen.  Pretty easy syntax to follow.
    Finally, we will need to download and install SpecFlow.  You can find it on their website here.  Once you have this installed, let’s write our first test.

    Let’s get started…
    So where to start?  Create a new testing project within your solution.  I typically name this with a naming convention similar to the one used by XAF: the project name plus .FunctionalTests (i.e. AlbumManager.FunctionalTests).  Remove the basic test that is created for you.  We will not use the default test but rather create our own SpecFlow “Feature” files.  Add a new item to your project and select the SpecFlow Feature file under C#.  Name your feature file, as you do your class files, after the test it is performing.
    Now you can crack open your new feature file and write the actual test.  Make sure to have your Ninja Scrolls from above at hand, as they provide valuable resources on how to write your test syntax.  In the test below you can see how I defined the documentation in the Feature section.  This is strictly for readability purposes and does not affect the test.  The next section is the Scenario Outline, which is considered a test template.  You can see the brackets <> around the fields that will be filled in for each test.  So in the example below, Given I am starting a new test and the application is open means I want a new EasyTest file and the Windows application generated by XAF is open.  Next, When I am at the Albums screen tells XAF to navigate to the Albums list view.  And I click the New:Album button tells XAF to click the New button on the list grid.  And I enter the following information tells XAF which fields to complete with the mapped values.  And I click the Save and Close button causes the record to be saved and the detail form to be closed.  Then I verify results tests the input data against what is visible in the grid to ensure that your record was created.  The Scenarios section gives each test a unique name and then fills in the values for each test.  This way you can use the same test to make multiple passes with different data.
    Almost there.  Now we must save the feature file and the BDD tests will be written using standard unit test syntax.  This is all handled for you by SpecFlow, so just save the file.  What you will see in your Test List Editor is a unit test for each of the scenarios you just built.  You can now use standard unit testing frameworks to execute the tests as you desire.  As you would expect, these BDD SpecFlow tests can be automated into your build process to ensure that your business requirements are satisfied each and every time.

    How does it work?
    What we have done is to intercept the testing logic at runtime and interpret the SpecFlow syntax into EasyTest syntax.  These are the basic StepDefinitions that we are working on now.  We expect to put these on CodePlex within the next few days.  You can always override them and make your own rules as you see fit for your project.  Follow the MSDN magazine article above to start your own.  You can see part of our implementation below.  As you can gather from the MSDN article and the code sample below, we have created our own common rules to build the above syntax.  The code implementation for these rules basically saves your information from the feature file into an EasyTest file format.  It then executes the EasyTest file and parses the XML results of the test.  If the test succeeds, the test is passed.  If the test fails, the EasyTest failure message is logged and the screen shot (as captured by EasyTest) is saved for your review.
    Again, we are working on getting this code ready for mass consumption, but at this time it is not ready.  We will post another message when it is ready with all details about usage and setup.  Thanks
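    To make the shape of such a feature file concrete, here is a minimal sketch in SpecFlow's Gherkin syntax.  The feature name, field names and data values are hypothetical and only illustrate the Scenario Outline / Examples structure described above (the Examples block is what the post refers to as the Scenarios section):

    Feature: Album management
        In order to keep the album catalog accurate
        As a user of the Album Manager application
        I want to create new albums and see them in the list view

    Scenario Outline: Create a new album
        Given I am starting a new test
        And the application is open
        When I am at the Albums screen
        And I click the New:Album button
        And I enter the following information
            | Field  | Value    |
            | Name   | <Name>   |
            | Artist | <Artist> |
        And I click the Save and Close button
        Then I verify results

    Examples:
        | Name         | Artist      |
        | Abbey Road   | The Beatles |
        | Kind of Blue | Miles Davis |

    Each row in the Examples table becomes one generated unit test, which is how the same outline can make multiple passes with different data.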

    Read the article

  • BizTalk 2009 - Project Creation Failed

    - by StuartBrierley
    A couple of weeks ago I had an issue with my BizTalk Server 2009 development environment which resulted in a reinstall of Visual Studio 2008 and the Visual Studio 2008 Service Pack 1. Following this reinstall I began to have problems when trying to create BizTalk 2009 projects: Error Details: “Create BizTalk Project …. Project Creation Failed” It turns out that this is a known issue with BizTalk Server 2009 and Visual Studio 2008, whereby the installation of the Visual Studio Service Pack 1 can cause corruption to the BizTalk installation, preventing the creation of any new projects. To resolve this issue go to Control Panel > Add or Remove Programs > Microsoft BizTalk Server 2009 and select Change or Remove.  When the window opens, choose “Repair”.  Upon completion you should once again be able to create BizTalk projects.

    Read the article

  • Get to Know a Candidate (6 of 25): Jill Stein – Green Party

    - by Brian Lanham
    DISCLAIMER: This is not a post about “Romney” or “Obama”. This is not a post for whom I am voting. Information sourced from Wikipedia.
    Stein is a physician with degrees from Harvard College and Harvard Medical School.  She serves on the boards of Greater Boston Physicians for Social Responsibility and MassVoters for Fair Elections, and has been active with the Massachusetts Coalition for Healthy Communities.
    Jill Stein advocates a "Green New Deal" in which renewable energy jobs would be created to address climate change and environmental issues with the objective of employing "every American willing and able to work". Citing the research of Dr. Phillip Harvey, Professor of Law & Economics at Rutgers University, as evidence of the successful economic effects of the 1930s' New Deal projects, Stein would fund the plan with a 30% reduction in the U.S. military budget, returning US troops home, and increasing taxes on areas such as capital gains, offshore tax havens and multimillion dollar real estate. Stein plans on impacting what she sees as a growing convergence of environmental crises in water, soil, fisheries and forests, through the creation of sustainable infrastructure based in clean renewable energy generation and sustainable communities principles such as increasing intra-city mass transit and inter-city railroads, creating 'complete streets' that safely encourage bike and pedestrian traffic, and regional food systems based on sustainable organic agriculture.
    The Green Party of the United States was founded in 1991 as a voluntary association of state green parties. With its founding, the Green Party of the United States became the primary national Green organization in the United States, eclipsing the Greens/Green Party USA, which emphasized non-electoral movement building. The Green Party of the United States of America emphasizes environmentalism, non-hierarchical participatory democracy, social justice, respect for diversity, peace and nonviolence. Their "Ten Key Values," which are described as non-authoritative guiding principles, are as follows:
    - Grassroots democracy
    - Social justice and equal opportunity
    - Ecological wisdom
    - Nonviolence
    - Decentralization
    - Community-based economics
    - Feminism and gender equality
    - Respect for diversity
    - Personal and global responsibility
    - Future focus and sustainability
    The Green Party does not accept donations from corporations. Thus, the party's platforms and rhetoric critique any corporate influence and control over government, media, and American society at large.
    Stein has access to 403 electoral votes and is a write-in candidate in GA, IN, and MS.
    Learn more about Jill Stein and the Green Party on Wikipedia.

    Read the article

  • To SYNC or not to SYNC – Part 3

    - by AshishRay
    I can't believe it has been almost a year since my last blog post. I know, that's an absolute no-no in the blogosphere. And I know that "I have been busy" is not a good excuse. So - without trying to come up with an excuse - let me state this - my apologies for taking such a long time to write the next Part. Without further ado, here goes.
    This is Part 3 of a multi-part blog article where we are discussing various aspects of setting up Data Guard synchronous redo transport (SYNC). In Part 1 of this article, I debunked the myth that Data Guard SYNC is similar to a two-phase commit operation. In Part 2, I discussed the various ways that network latency may or may not impact a Data Guard SYNC configuration. In this article, I will talk in detail about why Data Guard SYNC is a good thing. I will also talk about distance implications for setting up such a configuration.
    So, Why Good?
    Why is Data Guard SYNC a good thing? Because, at the end of the day, this gives you the assurance of zero data loss - it doesn’t matter what outage may befall your primary system. Befall! Boy, that sounds theatrical. But seriously - think about this - it minimizes your data risks. That’s a big deal. Whether you have an outage due to bad disks, faulty hardware components, hardware / software bugs, physical data corruptions, power failures, lightning that takes out a significant part of your data center, fire that melts your assets, water leakage from the cooling system, or human errors such as accidental deletion of online redo log files - it doesn’t matter - you can have that “Om - peace” look on your face and then you can failover to the standby system, without losing a single bit of data in your Oracle database. You will be a hero, as shown in this not so imaginary conversation:
    IT Manager: Well, what’s the status?
    You: John is doing the trace analysis on the storage array.
    IT Manager: So? How long is that gonna take?
    You: Well, he is stuck, waiting for a response from <insert your not-so-favorite storage vendor here>.
    IT Manager: So, no root cause yet?
    You: I told you, he is stuck. We have escalated with their Support, but you know how long these things take.
    IT Manager: Darn it - the site is down!
    You: Not really …
    IT Manager: What do you mean?
    You: John is stuck, but Sreeni has already done a failover to the Data Guard standby.
    IT Manager: Whoa, whoa - wait! Failover means we lost some data, why did you do this without letting the Business group know?
    You: We didn’t lose any data. Remember, we had set up Data Guard with SYNC? So now, any problems on the production – we just failover. No data loss, and we are up and running in minutes. The Business guys don’t need to know.
    IT Manager: Wow! Are we great or what!!
    You: I guess …
    Ok, so you get it - SYNC is good. But as my dear friend Larry Carpenter says, “TANSTAAFL”, or "There ain't no such thing as a free lunch". Yes, of course - investing in Data Guard SYNC means that you have to invest in a low-latency network, you have to monitor your applications and database especially in peak load conditions, and you cannot under-provision your standby systems. But all these are good and necessary things, if you are supporting mission-critical apps that are supposed to be running 24x7. The peace of mind that this investment will give you is priceless, especially if you are serious about HA.
    How Far Can We Go?
    Someone may say at this point - well, I can’t use Data Guard SYNC over my coast-to-coast deployment. Most likely - true. So how far can you go? Well, we have customers who have deployed Data Guard SYNC over 300+ miles! Does this mean that you can also deploy over similar distances? Duh - no! I am going to say something here that most IT managers don’t like to hear - “It depends!” It depends on your application design, application response time / throughput requirements, network topology, etc. However, because of the optimal way we do SYNC, customers have been able to stretch Data Guard SYNC deployments over longer distances compared to traditional, storage-centric ways of doing this. The MAA Database 10.2 best practices paper Data Guard Redo Transport & Network Configuration, and the Oracle Database 11.2 High Availability Best Practices Manual, talk about some of these SYNC-related metrics. For example, a test deployment of Data Guard SYNC over 330 miles with 10ms latency showed an impact of less than 5% for a busy OLTP application.
    Even if you can’t deploy Data Guard SYNC over your WAN distance, or if you already have an ASYNC standby located thousands of miles away, here’s another nifty way to boost your HA. Have a local standby, configured SYNC. How local is “local”? Again - it depends. One customer runs a local SYNC standby across the campus. Another customer runs it across 15 miles in another data center. Both of these customers are running Data Guard SYNC as their HA standard. If a localized outage affects their primary system, no problem! They have all the data available on the standby, to which they can failover. Very fast. In seconds.
    Wait - did I say “seconds”? Yes, Virginia, there is a Santa Claus. But you have to wait till the next blog article to find out more. I assure you tho’ that this time you won’t have to wait for another year for this.
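    To make this a little more tangible, here is a minimal sketch of what switching an existing remote destination to SYNC transport can look like on the primary. The service name, DB_UNIQUE_NAME values and timeout are placeholders, and depending on your desired protection mode and whether you use the broker, additional steps apply:

    -- list both databases in the Data Guard configuration
    ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)' SCOPE=BOTH;

    -- ship redo to the standby synchronously (SYNC AFFIRM) instead of ASYNC
    ALTER SYSTEM SET LOG_ARCHIVE_DEST_2=
      'SERVICE=boston SYNC AFFIRM NET_TIMEOUT=30
       VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=boston'
      SCOPE=BOTH;

    -- optionally raise the protection mode so zero data loss is enforced
    ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;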

    Read the article

  • Upcoming GWB Site Maintenance & Downtime This Weekend

    - by Staff of Geeks
    We'll be performing routine maintenance and a code release this weekend, from late Saturday night to early Sunday morning. There will be moments of site downtime but we'll minimize this as much as possible of course. We intend for the following fixes & features to go to production:
    - Over 30 Windows Update hotfixes & security updates
    - Bug Fix: Homepage of GWB currently listing posts by create date, but should be listing by first-time publish date. Thanks to Chris Gardner for alerting us about this.
    - Bug Fix: Broken thumbnail images in the Hot Topics and Most Popular areas. Thanks to .ToString(theory) for emphasizing this one.
    - Bug Fix: Not able to create/edit posts in the admin tool using IE 10. (Thanks Benny Matthew)
    - Bug Fix: Admin blog post rich text editor not working in IE 10.
    - Bug Fix: New Twitter connections cannot be established because the Twitter API URL has changed.
    - Feature: New "Minimal" Template using fluid Twitter Bootstrap/Cerulean theme.
    - Feature: Integration with AirBrake exception handling.
    - Feature: Change bio pics in the GWB main feed to be hyperlinked.
    - Feature: Change hyperlink of MVP icons in the GWB Blogger List area to go directly to the Microsoft MVP search results page for that MVP's name.
    Thanks once again for your patience as we strive to improve the site!
    Ben Barreth
    GeeksWithBlogs Community Builder/Software Developer

    Read the article

  • WebCenter Customer Spotlight: Global Village Telecom Ltda

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter
    Solution Summary
    Global Village Telecom Ltda. (GVT) is a leading Brazilian telecommunications company, developing solutions and providing services for corporate and end users. GVT is located in Curitiba, Brazil, employs 6,000 people and has an annual revenue of around US$1 billion. GVT's business objectives were to improve corporate communications, accelerate internal information flow, provide continuous access to all business files and enable the company’s leadership to provide information to all departments in real time. GVT implemented Oracle WebCenter Content to centralize the company's content, and they built a portal to share and find content in real time. Oracle WebCenter Content enabled GVT to quickly and efficiently integrate communication among all company employees—ensuring that GVT maintains a competitive edge in the market. Human Resources reduced the time required for issuing internal statements to all staff from three weeks to one day.
    Company Overview
    Global Village Telecom Ltda. (GVT) is a leading telecommunications company, developing solutions and providing services for corporate and end users. The company offers diverse innovative products and advanced solutions in conventional fixed telephone communications, data transmission, high speed broadband internet services, and voice over IP (VoIP) services for all market segments. GVT is located in Curitiba, Brazil, employs 6,000 people and has an annual revenue of around US$1 billion.
    Business Challenges
    GVT's business objectives were to improve corporate communications, accelerate internal information flow, provide continuous access to all business files and enable the company’s leadership to provide information to all departments in real time.
    Solution Deployed
    GVT worked with the Oracle Partner IT7 to deploy Oracle WebCenter Content to securely centralize the company's content such as growth indicators, spreadsheets, and corporate and descriptive project schedules. The solution enabled real-time information sharing through the development of Click GVT, a portal that currently receives 100,000 monthly impressions from employee searches.
    Business Results
    GVT gained a competitive edge in the communications market by accelerating internal information flow, streamlining content, standardizing information and enabling real-time information sharing and discovery. Human Resources reduced the time required for issuing internal statements to all staff from three weeks to one day.
    “The competitive nature of the telecommunication industry demands rapid information in the internal flow of the company. Oracle WebCenter Content enabled us to quickly and efficiently integrate communication among all company employees—ensuring that we maintain a competitive edge in the market.” Marcel Mendes Filho, Systems Manager, Global Village Telecom Ltda.
    Additional Information
    Global Village Telecom Ltda Customer Snapshot
    Oracle WebCenter Content

    Read the article

  • 14540059 - UPDATE FOR BI PUBLISHER ENTERPRISE 11.1.1.6.0 AUGUST

    - by Tim Dexter
    It's been a while, I know :( I have posts in the pipe, just gotta smoke em out! The latest update for BIP 11.1.1.6 was released last week. A bunch of defects have been addressed, as you can see below.
    13473493 - XMLP TRANSLATION ISSUE OF MILLION (ENG) TO MILLIONES (SPANISH)
    13521951 - BIP UPGRADE FROM 10G TO 11.1.1.5.0 IS NOT SUCCESSFULL FOR TIAA-CREF
    12542914 - ACC: REPORT VIEWER STRUCTURE HAS ERRORS - NO IFRAME AND NO LANG ATTRIBUTE
    13562801 - XML TAG DISPLAY SHOULD DEFAULT TO 'FOLLOW THE DATA
    13568043 - BIP QUERY FAILING VALIDATION DUE TO 'COALESCE' KEYWORD
    13592901 - THE REPORT IS THROWING AN SQL ERROR THAT REFERENCES CHECKING FOR NULL VALUES
    13836696 - BI PUBLISHER REPORT NOT GENERATED WHEN A TEXT FIELD START WITH "E.<SPACE>"
    13879206 - DM MIGRATION ISSUES
    13888939 - DM: LOV SEARCH CAUSING DB CONNECTION LEAK
    13904225 - XSLX ERROR DUE TO URL LINK AND USE OF LIST
    13930795 - RTF TEMPLATE GIVING DIFFERENT RESULTS IN DIFFERENT
    13942064 - XDOEXCEPTION THROWN WHEN RUNNING PEOPLESOFT TEMPLATES AND XML FILE
    13981523 - BI PUBLISHER ON 64-BIT WINDOWS CAN'T CONNECT TO MS ANALYSIS SERVICES CUBE
    14039229 - BIP 11.1.1.5.0 REPORTS ARE NOT WORKING ON BIP 11.1.1.6.0
    14055793 - BIP 11.1.1.6.0: DATE TYPE INPUT PARAMTER IS NOT DISPLAYING THE CORRECT VALUE USI
    14059851 - UNABLE TO GRANT PRIVILEGES TO ROLE: DOMAIN USERS; THE ROLE DOES NOT EXIST
    14109967 - LARGE OUTPUT CAUSES OUT OF MEMORY DUE TO LEFT OVER DEBUG CODE
    14163973 - ISSUES USING DATA MODEL EDITOR IN BIP 11.1.1.6
    14167915 - ORG.XML.SAX.SAXEXCEPTION: DATE FORMAT CANNOT BE NULL
    14240045 - EDITING SCHEDULED REPORTS DOES NOT REFLECT VALID VALUES FOR UPGRADED SCHEDULES
    14304427 - SEARCH DIALOG NOT BINDING PARAMETER VALUE - INVALID PARAMETER BINDING(S).
    14338158 - PASSWORD FIELD SHOULD NOT BE DISPLAYED FOR FMW SECURITY MODEL
    14393825 - OBIEE11G: LARGE NUMBER OF OBIPS SESSIONS CREATED WHEN USING SSO AND BI PUB
    14558377 - CONT. BUG 14240045:EDITING SCHEDULES IN BI PUBLISHER IS DEFAULTING TO 'ALL'
    This patch is just for BI Publisher standalone installs. For those of you using BIP within the wider BIEE suite there is the 11.1.1.6.2 BP1 patchset. More details on that here.

    Read the article

  • Using Microsoft Excel as a Source and a Target in Oracle Data Integrator

    - by julien.testut
    The posts in this series assume that you have some level of familiarity with ODI. The concepts of Models, Datastores, Logical Schemas, Knowledge Modules and Interfaces are used here assuming that you understand them in the context of ODI. If you need more details on these elements, please refer to the ODI Tutorial for a quick introduction, or to the complete ODI documentation for more details.
    Recently we saw how to create a connection to Microsoft Excel; let's now take a look at how we can use Microsoft Excel as a source or a target in ODI interfaces.
    Create a Model in Designer
    First we need to create a new Model and a datastore for our Microsoft Excel spreadsheet. In Designer, open up the Models view and insert a new Model. Give a name to your model; I used EXCEL_SRC_CITY.

    Read the article

  • Oracle JDK 7u10 released with new security features

    - by Henrik Stahl
    A few days ago, we released JRE and JDK 7 update 10. This release adds support for the following new platforms:
    - Windows 8 on x86-64. Note that Modern UI (aka Metro) mode is not supported.
    - Internet Explorer 10 on Windows 8.
    - Mac OS X 10.8 (Mountain Lion)
    This release also introduces new features that provide enhanced security for Java applet and webstart applications, specifically:
    - The Java runtime tracks if it is updated to the latest security baseline. If you try to execute an unsigned applet with an outdated version of Java, a warning dialog will prompt you to update before running the applet.
    - The Java runtime includes a hardcoded best-before date. It is assumed that a new version will be released before this date. If the client has not been able to check for an update prior to this date, the Java runtime will assume that it is insecure and start warning the user prior to executing any applets.
    - The Java control panel now includes an option to set the desired security level on a low-medium-high-very high scale, as well as an option to disable Java applets and webstart entirely. This level controls things such as whether the Java runtime is allowed to execute unsigned code, and if so what type of warning will be displayed to the user. More details on the security settings can be found in the documentation. See below for a sample screenshot.
    The new updates of the JRE and the JDK are available via OTN. To learn more about the release please visit the release notes.
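    For administrators who prefer scripting these settings instead of clicking through the control panel, the same options are surfaced through the Java deployment configuration files. The snippet below is only an illustrative sketch - treat the property names and values as assumptions and verify them against the Java 7 deployment configuration guide for your update level:

    # deployment.properties (per user, or pushed centrally via deployment.config)
    # assumed keys - verify against the official deployment guide
    deployment.security.level=HIGH
    # disable Java content in the browser entirely
    deployment.webjava.enabled=false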

    Read the article

  • Future Of F# At Jazoon 2011

    - by Alois Kraus
    I was at Jazoon 2011 in Zurich (Switzerland). It was a really cool event and it had many top notch speakers, not only from the Microsoft universe. One of the most interesting talks was from Don Syme, with the title: F# Today/F# Tomorrow. He did show how to use F# scripting to browse through open databases, OData web services, SharePoint, … interactively. It looked really easy with the help of F# Type Providers, which is the next big language feature in a future F# version. The object returned by a Type Provider is used to access the data like in a usual strongly typed object model. No guessing how the property of an object is called - Intellisense will show it just as you expect. There exists a range of Type Providers for various data sources where the schema of the stored data can somehow be dynamically extracted. If we used, for example, a free chemical database, it would then be
    let data = DbProvider(http://.....);
    where data is the object which contains all data from that database. It has an elements collection which contains an element which has the properties: Name, AtomicMass, Picture, …. You can browse the object returned by the Type Provider with full Intellisense, because the returned object is strongly typed, which makes this happen.
    The same can of course be achieved with code generators that take as input the schema of the data source (OData web service, database, SharePoint, JSON serialized data, …) and spit out the necessary strongly typed objects as an assembly. This does work, but has the downside that if the schema of your data source is huge you will quickly run against a wall with traditional code generators, since the generated “deserialization” assembly could easily become several hundred MB.
    *** The following part contains guessing how this exactly works, based on asking Don two questions ***
    Q: Can I use Type Providers within C#?
    D: No.
    Q: F# is after all a library. Can I reference the F# assemblies and use the contained Type Providers?
    D: F# does annotate the generated types in a special way at runtime which is not a static type that C# could use.
    The F# type providers seem to use a hybrid approach. At compilation time the Type Provider is instantiated with the url of your input data. The obtained schema information is used by the compiler to generate static types as usual, but only for a small subset (the top level classes up to a certain nesting level would make sense to me). To make this work you need to access the actual data source at compile time, which could be a problem if you want to keep the actual url in a config file. Ok, so this explains why it does work at all. But in the demo we did see full Intellisense support down to the deepest object level. It looks like, as you navigate deeper into the object hierarchy, the type provider is instantiated in the background and attaches to a true static type the properties determined at run time while you were typing. So this type is not really static at all. It is static in the sense that its properties show up in Intellisense. But since this type information is determined while you are typing and is not used to generate a true static type, you cannot use these “intellistatic” types from C#. Nonetheless this is a very cool language feature. With the plotting libraries you can generate expressive charts from any data source within seconds to quickly get an overview of any structured data storage.
    Although my favorite programming language C# will not get such features in the near future, there is hope. If you restrict yourself to OData sources, you can use LINQPad to query any OData-enabled data source with LINQ with ease. There you can query Stackoverflow, for example. The output is also nicely rendered, which makes it a very good tool to explore OData sources today.
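    To make the idea more concrete, here is a minimal F# 3.0 sketch using the ODataService type provider from the Microsoft.FSharp.Data.TypeProviders library against the public Northwind demo feed. The feed URL and entity names are simply what that demo service happens to expose and may differ from what Don showed on stage:

    open Microsoft.FSharp.Data.TypeProviders

    // the provider reads the service metadata at compile time, so the
    // Customers entity set and its properties show up in Intellisense
    type Northwind = ODataService<"http://services.odata.org/Northwind/Northwind.svc/">

    let db = Northwind.GetDataContext()

    query {
        for customer in db.Customers do
        where (customer.Country = "Germany")
        select customer.CompanyName
    }
    |> Seq.iter (printfn "%s")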

    Read the article

  • Oracle went back to school !....

    - by Cristina Ciocoiu
    I am Georgiana, Contracts Manager for Oracle University and Advanced Customer Services in Romania. I started working for Oracle 4 years ago as a Contracts Specialist. Two years ago I became the manager of a team of 9 Contracts Specialists. On a sunny day in March, some members of my team visited the students of the Academy of Economic Studies, accompanied by Recruitment colleagues. This was part of a new initiative to raise awareness of career opportunities at Oracle. We spent approximately 2 hours illustrating and explaining different aspects of the day-to-day activities of an Oracle Contracts Specialist to the future graduates of the Academy.
    Role Play
    Since a role play is worth 1000 job descriptions, the audience witnessed an entertaining performance on the contracting process, from the phase of negotiation with the customer to the actual signing of the contract. The main focus was on the role of the Contracts Specialist liaising with all the groups involved and ensuring that the contract is compliant with Oracle policies while generating the expected revenue. However, the team took on other roles as well, i.e. Sales Representative, Customer, Business Approver and Lawyer, to demonstrate their role in the process. As each of these roles only has a small slice of the big pie, it is vital to understand what happens before and after you come on stage as a Contracts Specialist.
    Contracts Specialist
    Being a Contracts Specialist goes beyond simply knowing what policies apply; it means understanding Oracle’s core business model, understanding customers’ requests and addressing them in the most effective way. The job also involves connecting smaller teams that are often geographically dispersed across multiple regions so that they become a bigger, stronger and more successful team. You are the expert in this key position who can facilitate the closing of a deal or stop it from happening if the risk is too high. The role play provided insights on both.
    Why I love this job
    Events of this kind are sometimes just as useful for the “recruiters” as for the “recruits”. For me, as a presenter, it was an excellent opportunity to think about the many reasons why I love what I do in the Contracts department every day and to share this with the students. I wanted to explain to the audience, who are still considering education and career possibilities, that what we do in Contracts DOES make a difference. You have the power to achieve targets that you did not think reachable before. Working in the dynamic Oracle environment shapes you as a person and there is a lot to take away from this experience. Looking back to my years in the Academy (I graduated from the Academy myself), I wish I could have listened to more people talking about their great jobs and about how I could get there. If those were Oracle people I might have been writing this article sooner. :)
    If you are interested in joining the Contracts team please click here for more information or contact lavinia.protopopescu-AT-oracle-DOT-com. You can find all openings in Romania via http://campus.oracle.com

    Read the article

  • JSR Updates - Multiple JSRs migrate to latest JCP version

    - by Heather VanCura
    As part of the JCP.Next reform effort, many JSRs have migrated to the latest version of the JCP program in the last month.  These JSRs' Spec Leads and Expert Groups are contributing to the strides the JCP has been making to bring greater community transparency, participation and agility to JSR development through the JCP program. Any other JSR Spec Leads interested in migrating to the latest JCP version, now JCP 2.9 as of 13 November, incorporating the merged Executive Committee (EC), should see the Spec Lead Guide for instructions on migrating to the latest JCP version.  For JCP 2.8 JSRs, you are effectively already operating under JCP 2.9, since there are no longer two ECs.  This is the difference for JCP 2.8 JSRs migrating to JCP 2.9 -- a merged EC.  To make the migration official, just inform your Expert Group on a public channel and email your request to admin at jcp.org.
    - JSR 310, Date and Time API, led by Stephen Colebourne, Michael Nascimento and Oracle (Roger Riggs)
    - JSR 349, Bean Validation 1.1, led by RedHat (Emmanuel Bernard)
    - JSR 350, Java State Management, led by Oracle (Mitch Upton)
    - JSR 339, JAX-RS 2.0: The Java API for RESTful Web Services, led by Oracle (Santiago Pericas-Geertsen and Marek Potociar)
    - JSR 347, Data Grids for the Java Platform, led by RedHat (Manik Surtani)

    Read the article

  • 12c - SQL Text Expansion

    - by noreply(at)blogger.com (Thomas Kyte)
    Here is another small but very useful new feature in Oracle Database 12c - SQL Text Expansion.  It will come in handy in two cases:
    - You are asked to tune what looks like a simple query - maybe a two table join with simple predicates.  But it turns out the two tables are each views of views of views and so on... In other words, you've been asked to 'tune' a 15 page query, not a two liner.
    - You are asked to take a look at a query against tables with VPD (virtual private database) policies.  In other words, you have no idea what you are trying to 'tune'.
    A new function, EXPAND_SQL_TEXT, in the DBMS_UTILITY package makes seeing what the "real" SQL is quite easy. For example - take the common view ALL_USERS - we can now:

    ops$tkyte%ORA12CR1> variable x clob
    ops$tkyte%ORA12CR1> begin
      2          dbms_utility.expand_sql_text
      3          ( input_sql_text => 'select * from all_users',
      4            output_sql_text => :x );
      5  end;
      6  /
    PL/SQL procedure successfully completed.

    ops$tkyte%ORA12CR1> print x

    X
    --------------------------------------------------------------------------------
    SELECT "A1"."USERNAME" "USERNAME","A1"."USER_ID" "USER_ID","A1"."CREATED" "CREATED","A1"."COMMON" "COMMON" FROM  (SELECT "A4"."NAME" "USERNAME","A4"."USER#" "USER_ID","A4"."CTIME" "CREATED",DECODE(BITAND("A4"."SPARE1",128),128,'YES','NO') "COMMON" FROM "SYS"."USER$" "A4","SYS"."TS$" "A3","SYS"."TS$" "A2" WHERE "A4"."DATATS#"="A3"."TS#" AND "A4"."TEMPTS#"="A2"."TS#" AND "A4"."TYPE#"=1) "A1"

    Now it is easy to see what query is really being executed at runtime - regardless of how many views of views you might have.  You can see the expanded text - and that will probably lead you to the conclusion that maybe that 27 table join to 25 tables you don't even care about might better be written as a two table join.
    Further, if you've ever tried to figure out what a VPD policy might be doing to your SQL, you know it was hard to do at best.  Christian Antognini wrote up a way to sort of see it - but you never get to see the entire SQL statement: http://www.antognini.ch/2010/02/tracing-vpd-predicates/.  But now with this function - it becomes rather trivial to see the expanded SQL - after the VPD has been applied.
    We can see this by setting up a small table with a VPD policy:

    ops$tkyte%ORA12CR1> create table my_table
      2  (  data        varchar2(30),
      3     OWNER       varchar2(30) default USER
      4  )
      5  /
    Table created.

    ops$tkyte%ORA12CR1> create or replace
      2  function my_security_function( p_schema in varchar2,
      3                                 p_object in varchar2 )
      4  return varchar2
      5  as
      6  begin
      7     return 'owner = USER';
      8  end;
      9  /
    Function created.

    ops$tkyte%ORA12CR1> begin
      2     dbms_rls.add_policy
      3     ( object_schema   => user,
      4       object_name     => 'MY_TABLE',
      5       policy_name     => 'MY_POLICY',
      6       function_schema => user,
      7       policy_function => 'My_Security_Function',
      8       statement_types => 'select, insert, update, delete' ,
      9       update_check    => TRUE );
     10  end;
     11  /
    PL/SQL procedure successfully completed.

    And then expanding a query against it:

    ops$tkyte%ORA12CR1> begin
      2          dbms_utility.expand_sql_text
      3          ( input_sql_text => 'select * from my_table',
      4            output_sql_text => :x );
      5  end;
      6  /
    PL/SQL procedure successfully completed.

    ops$tkyte%ORA12CR1> print x

    X
    --------------------------------------------------------------------------------
    SELECT "A1"."DATA" "DATA","A1"."OWNER" "OWNER" FROM  (SELECT "A2"."DATA" "DATA","A2"."OWNER" "OWNER" FROM "OPS$TKYTE"."MY_TABLE" "A2" WHERE "A2"."OWNER"=USER@!) "A1"

    Not an earth shattering new feature - but extremely useful in certain cases.  I know I'll be using it when someone asks me to look at a query that looks simple but has a twenty page plan associated with it!

    Read the article

  • elffile: ELF Specific File Identification Utility

    - by user9154181
    Solaris 11 has a new standard user level command, /usr/bin/elffile. elffile is a variant of the file utility that is focused exclusively on linker related files: ELF objects, archives, and runtime linker configuration files. All other files are simply identified as "non-ELF". The primary advantage of elffile over the existing file utility is in the area of archives — elffile examines the archive members and can produce a summary of the contents, or per-member details.
    The impetus to add elffile to Solaris came from the effort to extend the format of Solaris archives so that they could grow beyond their previous 32-bit file limits. That work introduced a new archive symbol table format. Now that there was more than one possible format, I thought it would be useful if the file utility could identify which format a given archive is using, leading me to extend the file utility:

    % cc -c ~/hello.c
    % ar r foo.a hello.o
    % file foo.a
    foo.a: current ar archive, 32-bit symbol table
    % ar r -S foo.a hello.o
    % file foo.a
    foo.a: current ar archive, 64-bit symbol table

    In turn, this caused me to think about all the things that I would like the file utility to be able to tell me about an archive. In particular, I'd like to be able to know what's inside without having to unpack it. The end result of that train of thought was elffile. Much of the discussion in this article is adapted from the PSARC case I filed for elffile in December 2010: PSARC 2010/432 elffile

    Why file Is No Good For Archives And Yet Should Not Be Fixed
    The standard /usr/bin/file utility is not very useful when applied to archives. When identifying an archive, a user typically wants to know 2 things:
    1. Is this an archive?
    2. Presupposing that the archive contains objects, which is by far the most common use for archives, what platform are the objects for? Are they for sparc or x86? 32 or 64-bit? Some confusing combination from varying platforms?
    The file utility provides a quick answer to question (1), as it identifies all archives as "current ar archive". It does nothing to answer the more interesting question (2). To answer that question requires a multi-step process:
    1. Extract all archive members
    2. Use the file utility on the extracted files, examine the output for each file in turn, and compare the results to generate a suitable summary description.
    3. Remove the extracted files
    It should be easier and more efficient to answer such an obvious question. It would be reasonable to extend the file utility to examine archive contents in place and produce a description. However, there are several reasons why I decided not to do so:
    - The correct design for this feature within the file utility would have file examine each archive member in turn, applying its full abilities to each member. This would be elegant, but also represents a rather dramatic redesign and re-implementation of file. Archives nearly always contain nothing but ELF objects for a single platform, so such generality in the file utility would be of little practical benefit.
    - It is best to avoid adding new options to standard utilities for which other implementations of interest exist. In the case of the file utility, one concern is that we might add an option which later appears in the GNU version of file with a different and incompatible meaning. Indeed, there have been discussions about replacing the Solaris file with the GNU version in the past. This may or may not be desirable, and may or may not ever happen. Either way, I don't want to preclude it.
    - Examining archive members is an O(n) operation, and can be relatively slow with large archives. The file utility is supposed to be a very fast operation.
    I decided that extending file in this way is overkill, and that an investment in the file utility for better archive support would not be worth the cost. A solution that is more narrowly focused on ELF and other linker related files is really all that we need. The necessary code for doing this already exists within libelf. All that is missing is a small user-level wrapper to make that functionality available at the command line.
    In that vein, I considered adding an option for this to the elfdump utility. I examined elfdump carefully, and even wrote a prototype implementation. The added code is small and simple, but the conceptual fit with the rest of elfdump is poor. The result complicates elfdump syntax and documentation, definite signs that this functionality does not belong there. And so, I added this functionality as a new user level command.

    The elffile Command
    The syntax for this new command is

    elffile [-s basic | detail | summary] filename...

    Please see the elffile(1) manpage for additional details. To demonstrate how output from elffile looks, I will use the following files:

    File          Description
    config        A runtime linker configuration file produced with crle
    dwarf.o       An ELF object
    /etc/passwd   A text file
    mixed.a       Archive containing a mixture of ELF and non-ELF members
    mixed_elf.a   Archive containing ELF objects for different machines
    not_elf.a     Archive containing no ELF objects
    same_elf.a    Archive containing a collection of ELF objects for the same machine. This is the most common type of archive.

    The file utility identifies these files as follows:

    % file config dwarf.o /etc/passwd mixed.a mixed_elf.a not_elf.a same_elf.a
    config:      Runtime Linking Configuration 64-bit MSB SPARCV9
    dwarf.o:     ELF 64-bit LSB relocatable AMD64 Version 1
    /etc/passwd: ascii text
    mixed.a:     current ar archive, 32-bit symbol table
    mixed_elf.a: current ar archive, 32-bit symbol table
    not_elf.a:   current ar archive
    same_elf.a:  current ar archive, 32-bit symbol table

    By default, elffile uses its "summary" output style. This output differs from the output from the file utility in 2 significant ways:
    1. Files that are not an ELF object, archive, or runtime linker configuration file are identified as "non-ELF", whereas the file utility attempts further identification for such files.
    2. When applied to an archive, the elffile output includes a description of the archive's contents, without requiring member extraction or other additional steps.
    Applying elffile to the above files:

    % elffile config dwarf.o /etc/passwd mixed.a mixed_elf.a not_elf.a same_elf.a
    config:      Runtime Linking Configuration 64-bit MSB SPARCV9
    dwarf.o:     ELF 64-bit LSB relocatable AMD64 Version 1
    /etc/passwd: non-ELF
    mixed.a:     current ar archive, 32-bit symbol table, mixed ELF and non-ELF content
    mixed_elf.a: current ar archive, 32-bit symbol table, mixed ELF content
    not_elf.a:   current ar archive, non-ELF content
    same_elf.a:  current ar archive, 32-bit symbol table, ELF 64-bit LSB relocatable AMD64 Version 1

    The output for same_elf.a is of particular interest: The vast majority of archives contain only ELF objects for a single platform, and in this case, the default output from elffile answers both of the questions about archives posed at the beginning of this discussion, in a single efficient step. This makes elffile considerably more useful than file, within the realm of linker-related files.
    elffile can produce output in two other styles, "basic", and "detail". The basic style produces output that is the same as that from 'file', for linker-related files. The detail style produces per-member identification of archive contents. This can be useful when the archive contents are not homogeneous ELF objects, and more information is desired than the summary output provides:

    % elffile -s detail mixed.a
    mixed.a: current ar archive, 32-bit symbol table
    mixed.a(dwarf.o): ELF 32-bit LSB relocatable 80386 Version 1
    mixed.a(main.c): non-ELF content
    mixed.a(main.o): ELF 64-bit LSB relocatable AMD64 Version 1 [SSE]

    Read the article

  • Using the full tty real estate with sqlplus

    - by katsumii
    I believe 'rlwrap' is widely used for adding to 'sqlplus' the history function and command line editing. 'rlwrap' has other functions as well, and here's my kludgy alias to force sqlplus to use the full real estate of your tty, be it a PuTTY session from Windows or a Linux gnome terminal.

    $ declare -f sqlplus
    sqlplus ()
    {
        PRE_TEXT=$(resize | sed -n "1s/COLUMNS=/set linesize /p;2s/LINES=/set pagesize /p" | while read line; do printf "%s \ " "$line"; done);
        if [ -z "$1" ]; then
            rlwrap -m -P "$PRE_TEXT" sqlplus /nolog;
        else
            if ! echo $1 | grep '^-' > /dev/null; then
                rlwrap -m -P "$PRE_TEXT connect $*" sqlplus /nolog;
            else
                command sqlplus $*;
            fi;
        fi
    }

    Read the article

  • How-to read data from selected tree node

    - by Frank Nimphius
    By default, the SelectionListener property of an ADF bound tree points to the makeCurrent method of the FacesCtrlHierBinding class in ADF to synchronize the current row in the ADF binding layer with the selected tree node. To customize the selection behavior, or just to read the selected node value in Java, you override the default configuration with an EL string pointing to a managed bean method property. In the following I show how you change the selection listener while preserving the default ADF selection behavior.
    To change the SelectionListener, select the tree component in the Structure Window and open the Oracle JDeveloper Property Inspector. From the context menu, select the Edit option to create a new listener method in a new or an existing managed bean. For this example, I created a new managed bean.
    On tree node select, the managed bean code prints the selected tree node value(s):

    import java.util.Iterator;
    import java.util.List;
    import javax.el.ELContext;
    import javax.el.ExpressionFactory;
    import javax.el.MethodExpression;
    import javax.faces.application.Application;
    import javax.faces.context.FacesContext;
    import oracle.adf.view.rich.component.rich.data.RichTree;
    import oracle.jbo.Row;
    import oracle.jbo.uicli.binding.JUCtrlHierBinding;
    import oracle.jbo.uicli.binding.JUCtrlHierNodeBinding;
    import org.apache.myfaces.trinidad.event.SelectionEvent;
    import org.apache.myfaces.trinidad.model.CollectionModel;
    import org.apache.myfaces.trinidad.model.RowKeySet;
    import org.apache.myfaces.trinidad.model.TreeModel;

    public class TreeSampleBean {

        public TreeSampleBean() {}

        public void onTreeSelect(SelectionEvent selectionEvent) {
            // original selection listener set by ADF
            // #{bindings.allDepartments.treeModel.makeCurrent}
            String adfSelectionListener = "#{bindings.allDepartments.treeModel.makeCurrent}";

            // make sure the default selection listener functionality is
            // preserved. you don't need to do this for multi select trees
            // as the ADF binding only supports single current row selection

            /* START PRESERVE DEFAULT ADF SELECT BEHAVIOR */
            FacesContext fctx = FacesContext.getCurrentInstance();
            Application application = fctx.getApplication();
            ELContext elCtx = fctx.getELContext();
            ExpressionFactory exprFactory = application.getExpressionFactory();

            MethodExpression me = null;
            me = exprFactory.createMethodExpression(elCtx, adfSelectionListener,
                                                    Object.class, new Class[]{SelectionEvent.class});
            me.invoke(elCtx, new Object[] { selectionEvent });
            /* END PRESERVE DEFAULT ADF SELECT BEHAVIOR */

            RichTree tree = (RichTree)selectionEvent.getSource();
            TreeModel model = (TreeModel)tree.getValue();

            // get selected nodes
            RowKeySet rowKeySet = selectionEvent.getAddedSet();
            Iterator rksIterator = rowKeySet.iterator();

            // for single select configurations, this only is called once
            while (rksIterator.hasNext()) {
                List key = (List)rksIterator.next();
                JUCtrlHierBinding treeBinding = null;
                CollectionModel collectionModel = (CollectionModel)tree.getValue();
                treeBinding = (JUCtrlHierBinding)collectionModel.getWrappedData();
                JUCtrlHierNodeBinding nodeBinding = null;
                nodeBinding = treeBinding.findNodeByKeyPath(key);
                Row rw = nodeBinding.getRow();

                // print first row attribute. Note that in a tree you have to
                // determine the node type if you want to select node attributes
                // by name and not index
                String rowType = rw.getStructureDef().getDefName();

                if (rowType.equalsIgnoreCase("DepartmentsView")) {
                    System.out.println("This row is a department: " +
                                       rw.getAttribute("DepartmentId"));
                }
                else if (rowType.equalsIgnoreCase("EmployeesView")) {
                    System.out.println("This row is an employee: " +
                                       rw.getAttribute("EmployeeId"));
                }
                else {
                    System.out.println("Huh????");
                }
                // ... do more useful stuff here
            }
        }
    }

    Download JDeveloper 11.1.2.1 Sample Workspace

    Read the article

  • HTML5 data-* (custom data attribute)

    - by Renso
    Goal: Store custom data with the data attribute on any DOM element and retrieve it.
    Previously under HTML4 we used to use classes to store custom data, something to the effect of <input class="account void limit-5000 over-4999" />, and then have to parse the data out of the class. In a book published by Peter-Paul Koch in 2007, ppk on JavaScript, he explains why and how to use custom attributes to make data more accessible to JavaScript, using name-value pairs. Accessing a custom attribute account-limit=5000 is much easier and more intuitive than trying to parse it out of a class. Plus, what if the class name, for example "color-5", has a representative class definition in a CSS stylesheet that hides it away, or worse, some JavaScript plugin that automatically adds 5000 to it, or something crazy like that, just because it is a valid class name? As you can see there are quite a few reasons why using classes is a bad design and why it was important to define custom data attributes in HTML5.
    Syntax: You define the data attribute by simply prefixing any data item you want to store on an HTML element with "data-". For example, to store our customer's account data with a hidden input element:
    <input type="hidden" data-account="void" data-limit=5000 data-over=4999  />
    How to access the data:
    account  -  element.dataset.account
    limit    -  element.dataset.limit
    You can also access it by using the more traditional get/setAttribute method, or if using jQuery: $('#element').attr('data-account','void')
    Browser support: All except for IE. There is an IE hack around this at http://gist.github.com/362081.
    Special Note: Be AWARE, do not use upper-case when defining your data elements, as it is all converted to lower-case when reading it, so:
    data-myAccount="A1234" will not be found when you read it with: element.dataset.myAccount
    Use only lowercase when reading so this will work: element.dataset.myaccount
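    As a small self-contained illustration (the element id and values below are made up), both access styles look like this:

    <input type="hidden" id="account" data-account="void" data-limit="5000" data-over="4999" />
    <script>
      var el = document.getElementById('account');
      // native dataset API - values always come back as strings
      console.log(el.dataset.account);   // "void"
      console.log(el.dataset.limit);     // "5000"
      // traditional attribute API, which is also what jQuery's attr() wraps
      el.setAttribute('data-account', 'active');
      console.log(el.getAttribute('data-account'));   // "active"
    </script>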

    Read the article

  • OPN Diamond Level Criteria Update

    - by Cinzia Mascanzoni
    On June 1, 2013, the criteria for Oracle PartnerNetwork members to attain the prestigious Diamond level will change and all members at the Diamond level at that point will be required to meet the new criteria. This change underscores the requirement for these elite partners to engage across Oracle’s broad product portfolio. Refer to the Diamond Level Requirements on the OPN Portal here for more detail.

    Read the article

  • What is the difference between Workcenters, Dashboards, and the Interaction Hub?

    - by Matthew Haavisto
    Oracle Open World has just concluded.  Over the course of the conference, we presented several sessions covering different aspects of the PeopleSoft user experience, including Workcenters, Dashboards, and the PeopleSoft Interaction Hub (formerly known as the PeopleSoft Applications Portal).  Although we've produced collateral on these features and covered them in sessions, it became apparent at the conference that customers still have many questions about these products, including how they are licensed, how they are installed, what their various purposes are, and how they can be used together synergistically.
    Let's Start with Licensing and Installation
    As you may know, we've extended the restricted use license (RUL) for the Interaction Hub.  This grants customers with PeopleTools 8.52 licenses the right to install the Interaction Hub for free, for use as specified in the Tools license notes.  Note that this means customers receive a restricted use license for the Interaction Hub that doesn't cost them an additional license fee, but it is a separate product, not part of PeopleTools or PeopleSoft applications, and is a separate installation.  This means customers must provide the infrastructure to install and run the Hub, just like any other application.  The benefits of using the Hub to unify your PeopleSoft user experience can be great.  PeopleSoft applications have not yet delivered instances of the Hub with their products, though they may in the future.
    Workcenters and Dashboards, on the other hand, are frameworks provided by PeopleTools.  No other license is required, and no additional installation of a separate product is needed (apart from PeopleTools and PeopleSoft applications).  PeopleSoft applications are delivering instances of the workcenters and dashboards with their products.  Some are available now, and more are coming in future releases.  These delivered workcenter and dashboard instances require no additional licenses, and no additional installations beyond Tools and the applications that provide them.  In addition, the workcenter and dashboard frameworks provided by PeopleTools can be used by customers to build their own workcenters and dashboards, and it's quite easy and simple to do so.
    What are Their Differences?  What Purposes do they Serve?
    Workcenters, Dashboards and the Interaction Hub appear somewhat similar.  They all contain pagelets, and have some visual characteristics in common.  However, their strengths and purposes are very different, and they were designed to provide different benefits to your PeopleSoft ecosystem.
    Workcenters and Dashboards have the following characteristics:
    - Designed for specific roles
    - Focus on the daily tasks of those roles
    - Help to streamline the work performed most often
    - Personal view of my work world
    - Makes navigation and search easier and quicker, particularly for transactions and decision support
    - Reports and data needed for day-to-day work
    - Personalizable, but minimal
    - Delivered by PS Apps, but can be altered by customer for their requirements
    - Customers can create their own
    - Workcenters can be used for guided processes
    The Interaction Hub is designed to aggregate content from multiple applications, and is used to unify the user experience of those applications.  It offers a rich, web site-based user experience, and is often used to provide access to infrequently performed activities like benefits enrollment, payroll inquiries, life event changes, onboarding, and so on.  Its characteristics include:
    - Full-featured and robust
    - Centrally administered
    - Pushed to large audience
    - Broad info like Company News
    - Infrequent activities like benefits, not day-to-day tasks
    - Self-service, access to employer info
    - Central launch point for other activities and can navigate to workcenters and dashboards
    - Deployed by customers or consultants, instances not delivered by PeopleSoft (at this time)
    - Content management
    - Unified PS application navigation
    Although these products are quite different and serve different purposes in your PeopleSoft environment, they can be used together to provide a richer, more efficient and engaging user experience for all your user communities.

    Read the article

  • Configuring WS-Security with PeopleSoft Web Services

    - by Dave Bain
    I was speaking with a customer a few days ago about PeopleSoft Web Services.  The customer created a web service, but when they went to deploy it they had so many problems configuring WS-Security that they pulled the service.  They spent several days trying to get it working, never succeeded, and have put it on hold until they have time to work through the issues. Having gone through the process of configuring WS-Security myself, I understand the complexity.  There is no magic 'easy' button to push.  If you are not familiar with all the moving parts, like policies, certificates, public and private keys, credential stores, and so on, it can be a daunting task.  The PeopleBooks documentation is good but does not offer a step-by-step example to follow.  Fear not: for those who want more help, there is a place to go. PeopleSoft released a Mobile Inventory Management application over a year ago.  It is a mobile app built with the Oracle Fusion Application Development Framework (ADF) that accesses PeopleSoft content through standard web services.  Part of the installation of this app is configuring WS-Security for the web services used in the application.  Appendix A of the PeopleSoft FSCM91 Mobile Inventory Management Installation Guide is called Configuring WS-Security for Mobile Inventory Management.  It is a step-by-step guide to configuring WS-Security between a server running Oracle Web Services Manager (OWSM) and PeopleSoft Integration Broker.  Your environment might be different, but the steps will be similar, and on the PeopleSoft side Integration Broker will remain a constant. You can find the installation guide on My Oracle Support.  Sign in to https://support.us.oracle.com and search for document 1290972.1.  Read through Appendix A for more details about how to set up WS-Security with PeopleSoft web services.
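    To make the moving parts a little more concrete, the sketch below shows one of the simplest WS-Security mechanisms, a plaintext UsernameToken added to the SOAP header by a JAX-WS client handler.  This is only an illustration of what the header looks like on the wire, not the OWSM policy-based setup the installation guide walks through; the class name, user name, and password are hypothetical placeholders, and stronger policies (digests, signatures, encryption) would be configured through OWSM or Integration Broker rather than hand-built like this.

      import java.util.Collections;
      import java.util.Set;
      import javax.xml.namespace.QName;
      import javax.xml.soap.SOAPElement;
      import javax.xml.soap.SOAPEnvelope;
      import javax.xml.soap.SOAPHeader;
      import javax.xml.soap.SOAPMessage;
      import javax.xml.ws.handler.MessageContext;
      import javax.xml.ws.handler.soap.SOAPHandler;
      import javax.xml.ws.handler.soap.SOAPMessageContext;

      // Hypothetical client-side handler that attaches a WS-Security UsernameToken
      // to every outbound SOAP request.
      public class UsernameTokenHandler implements SOAPHandler<SOAPMessageContext> {
          private static final String WSSE_NS =
              "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd";

          @Override
          public boolean handleMessage(SOAPMessageContext context) {
              boolean outbound = (Boolean) context.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
              if (!outbound) {
                  return true; // only decorate outgoing requests
              }
              try {
                  SOAPMessage message = context.getMessage();
                  SOAPEnvelope envelope = message.getSOAPPart().getEnvelope();
                  SOAPHeader header = envelope.getHeader();
                  if (header == null) {
                      header = envelope.addHeader();
                  }
                  // Builds <wsse:Security><wsse:UsernameToken>...</wsse:UsernameToken></wsse:Security>
                  SOAPElement security = header.addChildElement("Security", "wsse", WSSE_NS);
                  SOAPElement token = security.addChildElement("UsernameToken", "wsse");
                  token.addChildElement("Username", "wsse").addTextNode("PSFT_NODE_USER"); // placeholder
                  token.addChildElement("Password", "wsse").addTextNode("PSFT_NODE_PASS"); // placeholder
                  message.saveChanges();
              } catch (Exception e) {
                  throw new RuntimeException("Unable to attach WS-Security header", e);
              }
              return true;
          }

          @Override
          public boolean handleFault(SOAPMessageContext context) { return true; }

          @Override
          public void close(MessageContext context) { }

          @Override
          public Set<QName> getHeaders() { return Collections.emptySet(); }
      }

    In a sketch like this, the handler would be registered on the generated service proxy (for example via a handler chain) before invoking the PeopleSoft service, and the server side would then validate the token against whatever WS-Security settings are active on the Integration Broker node.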

    Read the article

  • Groundbreaking and ready to use: Oracle 12c In-Memory Option launch in Frankfurt

    - by Anne Manke
    Since the announcement of the Oracle 12c In-Memory database option at OpenWorld in San Francisco last year, the database community has been eager to see what this groundbreaking technology will bring to ad-hoc real-time analytics on live transactions, data warehousing, reporting and online transaction processing (OLTP). The bar is set high, because with the new 12c In-Memory option Larry Ellison promises:
      - query processing up to 100 times faster for real-time analytics on OLTP systems or data warehouses
      - a doubling of transaction throughput
      - 100% compatibility with existing applications
      - data stored in both row format and column format (in-memory), remaining active and consistent
      - cloud readiness with no data migration
      - scale-out of in-memory query processing across multiple servers
    to name just a few of the features >> you can find more information here!
    With the new 12c In-Memory database option, queries are processed faster than the question can be asked, according to Larry Ellison. On June 17, 2014, 12c In-Memory will be presented at an exclusive launch event in Frankfurt am Main. The agenda includes presentations, discussions and a live demo of the In-Memory database option.  Register now!
    Location & time: June 17, 2014, 9:30 - 15:15, Radisson Blu Hotel (Franklinstrasse 65, 60486 Frankfurt am Main)
    Agenda:
      9:30 Registration
      10:00 Welcome - Guenther Stuerner, Vice President Sales Consulting, Oracle Deutschland (in German)
      10:15 Analyst presentation - Carl W. Olofson, Research Vice President, IDC (in English)
      10:35 Keynote - Andy Mendelsohn, Head of Database Development, Oracle (in English)
      11:35 Panel discussion (in English): Jens-Christian Pokolm, Postbank Systems AG; Andy Mendelsohn, Head of Database Development, Oracle; Carl W. Olofson, Research Vice President, IDC; Dr. Dietmar Neugebauer, Chairman of the Board, DOAG
      12:30 Lunch
      13:45 Oracle Database In-Memory Option: Perform - Manage - Live Demo; Ralf Durben, Senior Principal Systems Consultant, Oracle Deutschland (in German)
      14:30 In-Memory: Revolution for your DWH - Real-Time Data Warehouse - Mixed Workloads - Live Demo - Live Data Query; Alfred Schlaucher, Senior Principal Systems Consultant, Oracle Deutschland (in German)
      15:15 Closing remarks & networking

    Read the article

  • java.net.SocketException: Connection reset; No available router to destination

    - by adejuanc
    Sometimes a WebLogic Server will be unreachable. There can be several reasons: the server may be hung or down, and thus not responding to a ping, or the problem may be in the network layer. A common point of failure in the network layer is a firewall. You should contact your network team if you see messages like the following:
    java weblogic.Admin -adminurl t3://adminServer.mydomain.com:7777 -username admin -password lockdown PING
    Failed to connect to t3://adminServer.mydomain.com:7777: Destination unreachable; nested exception is:
    java.net.SocketException: Connection reset; No available router to destination
    A possible workaround for the ping command is to use the HTTP protocol instead of the t3 protocol. To enable this, you must configure WebLogic for HTTP tunneling: in the administration console, click Servers -> (the server you want to reach) -> Protocols -> HTTP -> Enable Tunneling. No restart is necessary. The following command will then work:
    java weblogic.Admin -adminurl http://adminServer.mydomain.com:7777 -username igmadmin -password l0ckdown PING
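    As a quick client-side check that the workaround is in place, the small sketch below opens a JNDI context against the admin server over HTTP. It is only a minimal sketch: it assumes the WebLogic client classes (for example wlfullclient.jar) are on the classpath, and the URL and credentials are the placeholder values used above. If HTTP tunneling is enabled, the same lookup that fails over t3 should succeed over http.

      import java.util.Hashtable;
      import javax.naming.Context;
      import javax.naming.InitialContext;
      import javax.naming.NamingException;

      // Minimal reachability check against the admin server, assuming the
      // WebLogic client jar is on the classpath.
      public class AdminServerPing {
          public static void main(String[] args) {
              // Placeholder URL from the post; with HTTP tunneling enabled the
              // lookup also works over http:// instead of t3://.
              String url = args.length > 0 ? args[0] : "http://adminServer.mydomain.com:7777";

              Hashtable<String, String> env = new Hashtable<>();
              env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
              env.put(Context.PROVIDER_URL, url);
              env.put(Context.SECURITY_PRINCIPAL, "admin");      // placeholder credentials
              env.put(Context.SECURITY_CREDENTIALS, "lockdown");

              try {
                  Context ctx = new InitialContext(env);
                  System.out.println("Connected to " + url);
                  ctx.close();
              } catch (NamingException e) {
                  System.out.println("Unable to reach " + url + ": " + e.getMessage());
              }
          }
      }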

    Read the article

  • Working and Studying in Oracle, how I balance my time....

    - by anca.rosu
    Hi, my name is Laura. I am working as an intern within Executive Administration at Oracle Denmark, whilst studying Information Management at Copenhagen Business School. I recently handed in a paper on Information Systems which gave me exposure to Oracle. After completing this paper I came across a job posting on my university's intranet site and I applied directly online. When I submitted my application for the job offer, I wondered what language I should use for the application form, as the job posting was in Danish, but the contact person and number looked Irish. I therefore chose English. Later that same day, Fiona, one of Oracle's Graduate Recruitment Consultants based in Ireland, contacted me. This shows how global Oracle truly is. I went for my face-to-face interview at Oracle Denmark with Charlotte, one of the team managers. I spent 5 minutes waiting in the lobby, just looking around, thinking to myself, I really want to work here. The atmosphere seemed so pleasant, with a relaxed approach between colleagues, employees and guests. The interview took about an hour, but we touched on a lot of different subjects. The impression I got of Oracle was that this is a place where you are encouraged to think for yourself, and you are given the freedom to use your ideas. Later that evening, Fiona called and offered me the job. I was very happy. At Oracle Denmark we have 4 different zones: a Quiet Zone, a Project Zone, a Dialogue Zone and a Call Zone. Every day when you arrive, you consider what will be most productive for the day's tasks, and you take your toolbox and go find a desk in the zone you have decided on. It is therefore very unusual to be next to the same person two days in a row. At Oracle, people are located all over the world, and everybody has team members, colleagues or leaders in other countries, or even other time zones. Initially, I was worried about how I would adapt to this approach, but I soon realized I had nothing to worry about, and now I appreciate working this way. My colleagues have been very supportive and they have openly welcomed me into my new role. I typically work two days a week and have three days at university. During exam periods, I have the flexibility to work fewer hours and focus on the exams, in return for putting in more hours at work when needed. The first time I had to ask for time off before handing in a paper, my boss looked at me and said, "Of course! Your education is the most important!" I hope that by sharing my experiences with you, I can inspire or encourage you to consider Oracle as a potential employer, where you can grow both professionally and personally. If you have any questions related to this article feel free to contact [email protected]. You can find our job opportunities via http://campus.oracle.com Technorati Tags: Intern, Oracle Denmark, Information Systems, Business school, Copenhagen, Graduates Recruitment, Ireland, Quiet Zone, Project Zone, Dialogue Zone, Call Zone, University, flexibility

    Read the article

  • What's up with LDoms: An article series on Oracle VM Server for SPARC

    - by Stefan Hinker
    Under the title "What's up with LDoms" I have just published the first article in a whole series.  The goal of the series is to cover the complete feature set of LDoms.  Since that is quite extensive, I will depart from bilingual posting here and write the articles exclusively in English.  I ask for your understanding. The first article is available here: What's up with LDoms: Part 1 - Introduction & Basic Concepts. Enjoy reading!

    Read the article
