Search Results

Search found 15439 results on 618 pages for 'wls configuration'.


  • Have Your Cake and Eat it Too: Industry Best Practices + Flexibility

    - by Oracle Accelerate for Midsize Companies
    By Richard Garraputa, VP of Sales & Marketing, brij. Richard joined brij in 1996 after graduating from the University of North Carolina at Greensboro with degrees in Information Systems and Accounting. He directs brij’s overall strategies for both the business development and marketing departments. Companies looking for new ERP systems spend so much time comparing the features and functions of software products but too often shortchange the value of their own processes.  Company managers I meet often claim that they are implementing a new ERP system so they can perform better and faster.  When asked how, the answer is often “by implementing best practices”.  But the term ‘best practices’ is frequently used to mean ‘doing things the way everyone else does them’ rather than as a starting point or benchmark to build upon by adding your own value. Of course, implementing standardized processes across an enterprise is an important step in improving operational efficiencies.  But not all companies are alike.  Do you ever tell your customers “We are just like our competition and have no competitive differentiation”?  Probably not.  So why should the implementation of your business processes be just like your competitor’s?  Even within the same industry, companies differentiate themselves by leveraging their unique expertise and approach to business.  These unique aspects—the competitive differentiators that companies use to thrive in a crowded marketplace—can and should be supported by the implementation of business systems like ERP. Modern ERP systems like Oracle’s JD Edwards EnterpriseOne have a broad and deep functional footprint designed to integrate a company’s core operations.  But how can a company take advantage of this footprint without blowing up its implementation budget?  Some ERP vendors claim to solve this challenge by stating that their systems come pre-configured with ‘best practices’.  Too often what they are really saying is that you will have to abandon your key operational differentiators to fit the vendor’s template for your business—or extend your implementation and postpone the realization of any benefits. Thankfully for midsize companies, there is an alternative to the undesirable options of extended implementation projects or abandoning their competitive differentiators.  Oracle Accelerate Solutions speed up the implementation of the JD Edwards EnterpriseOne solution based on your unique business characteristics, getting your new ERP system up and running faster without forcing your business to fit a cookie-cutter solution. We’ve been a JD Edwards implementation partner since 1986 and we now leverage Oracle Business Accelerators—cloud-based rapid implementation tools built and maintained by Oracle. Oracle Business Accelerators deliver the benefits of embedded industry best practices without forcing every customer into one set of processes like many template or “clone and go” approaches do. You retain the ability to reconfigure your applications—without customization—as your business changes. Wielded by Oracle partners with industry-specific domain expertise, Oracle Accelerate Solution implementations powered by Oracle Business Accelerators help automate the application configuration to fit your business better, faster. For example, on a recent project at a manufacturing company, the project manager told me that Oracle Business Accelerators helped get them to Conference Room Pilot 20% faster than with a traditional approach. Time savings equal cost savings. 
And if ‘better and faster’ is your goal for your business performance, shouldn’t it be the goal for your ERP implementation as well? Established in 1986, brij has been dedicated solely to helping its customers implement Oracle’s JD Edwards solutions and to maximizing the value of those customers’ IT investments. They are a Gold-level member of Oracle PartnerNetwork and an Oracle Accelerate Solution provider.

    Read the article

  • JCP Awards 10 Year Retrospective

    - by Heather VanCura
    As we celebrate 10 years of JCP Program Award recognition in 2012, take a look back at the Retrospective article covering the history of the JCP awards. Most recently, the JCP awards were celebrated at JavaOne Latin America in Brazil, where SouJava was presented the JCP Member of the Year Award for 2012 (won jointly with the London Java Community) for their contributions and launch of the Global Adopt-a-JSR Program. This is also a good time to honor the JCP Award Nominees and Winners who have been designated as Star Spec Leads.  Spec Leads are key to the Java Community Process (JCP) program. Without them, none of the Java Specification Requests (JSRs) would have begun, much less completed and become implemented in shipping products.  Nominations for 2012 Star Spec Leads are now open until 31 December. The Star Spec Lead program recognizes Spec Leads who have repeatedly proven their merit by producing high quality specifications, establishing best practices, and mentoring others. The point of such honor is to endorse the good work that they do, showcase their methods for other Spec Leads to emulate, and motivate other JCP program members and participants to get involved in the JCP program. Ed Burns – A Star Spec Lead for 2009, Ed first got involved with the JCP program when he became co-Spec Lead of JSR 127, JavaServer Faces (JSF), a role he has continued through JSF 1.2 and now JSF 2.0, which is JSR 314. Linda DeMichiel – Linda has been involved in the JCP program from its very early days. She has been the Spec Lead on at least three JSRs and an EC member for another three. She holds a Ph.D. in Computer Science from Stanford University. Gavin King – Nominated as a JCP Outstanding Spec Lead for 2010 for his work with JSR 299. His endorsement said, “He was not only able to work through disputes and objections to the evolving programming model, but he resolved them into solutions that were more technically sound, and which gained support of its pundits.” Mike Milikich – Nominated for his work on Java Micro Edition (ME) standards, implementations, tools, and Technology Compatibility Kits (TCKs), Mike was a 2009 Star Spec Lead for JSR 271, Mobile Information Device Profile 3. David Nuescheler – Serving as the CTO for Day Software, acquired by Adobe Systems, David has been a key player in the growth of the company’s global content management solution. In 2002, he became Spec Lead for JSR 170, Content Repository for Java Technology API, continuing for the subsequent version, JSR 283. Bill Shannon – A well-respected name in the Java community, Bill came to Oracle from Sun as a Distinguished Engineer and is still performing at full speed as Spec Lead for JSR 342, Java EE 7, as an alternate EC member, and as a hands-on problem solver for the Java community as a whole. Jim Van Peursem – Jim holds a PhD in Computer Engineering. He was part of the Motorola team that worked with Sun Labs on the Spotless VM that became the KVM. Within Motorola, Jim has been responsible for many aspects of Java technology deployment, from independent Connected Limited Device Configuration (CLDC) and Mobile Information Device Profile (MIDP) implementations, to handset development, to working with the industry in defining many related standards. Participation in the JCP Program goes well beyond technical proficiency. 
The JCP Awards Program is an attempt to say “Thank You” to all of the JCP members, Expert Group Members, Spec Leads, and EC members who give their time to contribute to the evolution of Java technology.

    Read the article

  • Oracle is #1 in Life Sciences!

    - by Michael Snow
    Guest post today by: John Klinke, Senior Principal Product Manager, Oracle WebCenter Content. Based on the announcement last week at EMC World about Documentum for Life Sciences, it looks like EMC is starting to have regrets about pulling out of the life sciences space over the last few years. Certainly their content management customers and partners in life sciences have noticed their retreat. Many of them are now talking to us about WebCenter Content since they’ve seen the writing on the wall regarding Documentum’s decline, including falling revenue, shrinking investment, departure of key executives, and EMC’s auditing of existing customers. While EMC has been neglecting the life sciences industry over the last few years, Oracle has been increasing its investment and commitment by providing best-of-breed solutions to enable pharmaceutical, medical device, biotech and CRO companies to improve productivity and drive innovation. As a result, according to IDC Health Insights, Oracle is #1 in life sciences. From research and development through clinical development and manufacturing to sales and marketing, Oracle provides the solutions that life sciences companies depend on to accelerate R&D, expedite clinical trials, and speed time to market. Specifically for Oracle WebCenter, our life sciences business is booming thanks to our comprehensive offerings led by Oracle WebCenter Content, our 21 CFR Part 11 compliant enterprise content management platform. Unlike Documentum, WebCenter Content is all about keeping the cost of ownership low - through simplicity, flexibility, and out-of-the-box integrations. WebCenter Content is a single, comprehensive ECM platform that can handle all your content management needs, from controlled documents to digital asset management, records management and document imaging and capture. And it is much more flexible, letting you make configuration changes instead of customizations to meet your business needs. It also saves you money by being pre-integrated with the rest of the Oracle Fusion Middleware technology stack and with leading enterprise applications like Siebel (including Siebel CTMS), Primavera, E-Business Suite, JD Edwards and PeopleSoft. So if you think EMC’s announcement last week was too little and too late, I’m happy to report that Oracle is here to help. Back in October, we announced our Move Off Documentum offer, which provides a 100% trade-in credit for your Documentum licenses when you purchase Oracle WebCenter, and the good news is, this offer is still available for a limited time. So stop maintaining Documentum and start innovating with Oracle WebCenter. For more details see www.oracle.com/moveoff/documentum.

    Read the article

  • JavaOne Latin America 2012 Trip Report

    - by reza_rahman
    JavaOne Latin America 2012 was held at the Transamerica Expo Center in Sao Paulo, Brazil on December 4-6. The conference was a resounding success with a great vibe, excellent technical content and numerous world class speakers. Some notable local and international speakers included Bruno Souza, Yara Senger, Mattias Karlsson, Vinicius Senger, Heather Vancura, Tori Wieldt, Arun Gupta, Jim Weaver, Stephen Chin, Simon Ritter and Henrik Stahl. Topics covered included the JCP/JUGs, Java SE 7, HTML 5/WebSocket, CDI, Java EE 6, Java EE 7, JSF 2.2, JMS 2, JAX-RS 2, Arquillian and JavaFX. Bruno Borges and I manned the GlassFish booth at the Java Pavilion on Tuesday and Wednesday. The booth traffic was decent and not too hectic. We met a number of GlassFish adopters, including perhaps one of the largest GlassFish deployments in Brazil, as well as some folks migrating to Java EE from Spring. We invited them to share their stories with us. We also talked with some key members of the local Java community. Tuesday evening we had the GlassFish party at the Tribeca Pub. The party was definitely a hit and we could have used a larger venue (this was the first time we had the GlassFish party in Brazil). Along with GlassFish enthusiasts, a number of Java community leaders were there. We met some of the same folks again at the JUG leader's party on Wednesday evening. On Thursday Arun Gupta, Bruno Borges and I ran a hands-on lab on JAX-RS, WebSocket and Server-Sent Events (SSE) titled "Developing JAX-RS Web Applications Utilizing Server-Sent Events and WebSocket". This is the same Java EE 7 lab run at JavaOne San Francisco. The lab gives developers a first-hand glimpse of what an HTML 5-powered Java EE application might look like. We had an overflow crowd for the lab (at one point we had about twenty people standing) and the lab went very well. The slides for the lab are here: Developing JAX-RS Web Applications Utilizing Server-Sent Events and WebSocket from Reza Rahman. The actual contents for the lab are available here. Give me a shout if you need help getting it up and running. I gave two solo talks following the lab. The first was on JMS 2, titled "What’s New in Java Message Service 2". This was essentially the same talk given by JMS 2 specification lead Nigel Deakin at JavaOne San Francisco. I talked about the JMS 2 simplified API, JMSContext injection, delivery delays, asynchronous send, JMS resource definition in Java EE 7, standardized configuration for JMS MDBs in EJB 3.2, mandatory JCA pluggability and the like (a minimal sketch of the simplified API follows at the end of this post). The session went very well, there was good Q & A and someone even told me this was the best session of the conference! The slides for the talk are here: What’s New in Java Message Service 2 from Reza Rahman. My last talk for the conference was on JAX-RS 2, in the keynote hall. Titled "JAX-RS 2: New and Noteworthy in the RESTful Web Services API", this was basically the same talk given by the specification leads Santiago Pericas-Geertsen and Marek Potociar at JavaOne San Francisco. I talked about the JAX-RS 2 client API, asynchronous processing, filters/interceptors, hypermedia support, server-side content negotiation and the like. The talk went very well and I got a few very kind compliments afterwards. 
The slides for the talk are here: JAX-RS 2: New and Noteworthy in the RESTful Web Services API from Reza Rahman. On a more personal note, Sao Paulo has always had a special place in my heart as the incubating city for Sepultura and Soulfly -- two of my favorite heavy metal groups of all time! Consequently, the city has a perpetually alive and kicking metal scene pretty much any given day of the week. This time I got to check out a solid performance by the local metal band Republica at the legendary Manifesto Bar. I also wanted to see a Dio tribute at the Blackmore but ran out of time and energy... Overall I enjoyed the conference and Sao Paulo, and look forward to going to Brazil again next year!
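
    As promised above, here is a minimal sketch of the JMS 2 simplified API covered in the talk; the bean, queue JNDI name, delivery delay value and payload are illustrative, not taken from the session:

        import javax.annotation.Resource;
        import javax.ejb.Stateless;
        import javax.inject.Inject;
        import javax.jms.JMSContext;
        import javax.jms.Queue;

        @Stateless
        public class OrderNotifier {

            @Inject
            private JMSContext context;  // container-managed JMSContext injection (Java EE 7)

            @Resource(lookup = "jms/OrderQueue")  // illustrative JNDI name
            private Queue queue;

            public void notifyShipped(String orderId) {
                // A one-line send with no Connection/Session boilerplate;
                // setDeliveryDelay() is one of the new JMS 2 features mentioned above.
                context.createProducer()
                       .setDeliveryDelay(5000)
                       .send(queue, "Shipped: " + orderId);
            }
        }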

    Read the article

  • NetBeans Java Hints: Quick & Dirty Guide

    - by Geertjan
    In NetBeans IDE 7.2, you'll find a new wizard in the "Module Development" category of the New File dialog for creating new Java hints. Select a package in a NetBeans module project, right-click, and choose New/Other.../Module Development/Java Hint. You'll then see the wizard. Fill in:

    - Class Name: the name of the class that should be generated, e.g. "Example".
    - Hint Display Name: the display name of the hint itself (as will appear in Tools/Options), e.g. "Example Hint".
    - Warning Message: the warning that should be produced by the hint, e.g. "Something wrong is going on".
    - Hint Description: a longer description of the hint, which will appear in Tools/Options and eventually some other places, e.g. "This is an example hint that warns about an example problem."
    - Will also provide an Automatic Fix: whether the hint will provide some kind of transformation, e.g. "yes".
    - Fix Display Name: the display name of such a fix/transformation, e.g. "Fix the problem".

    Click Finish. The wizard should generate "Example.java", the hint itself:

        import com.sun.source.util.TreePath;
        import org.netbeans.api.java.source.CompilationInfo;
        import org.netbeans.spi.editor.hints.ErrorDescription;
        import org.netbeans.spi.editor.hints.Fix;
        import org.netbeans.spi.java.hints.ConstraintVariableType;
        import org.netbeans.spi.java.hints.ErrorDescriptionFactory;
        import org.netbeans.spi.java.hints.Hint;
        import org.netbeans.spi.java.hints.HintContext;
        import org.netbeans.spi.java.hints.JavaFix;
        import org.netbeans.spi.java.hints.JavaFix.TransformationContext;
        import org.netbeans.spi.java.hints.TriggerPattern;
        import org.openide.util.NbBundle.Messages;

        @Hint(displayName = "DN_com.bla.Example", description = "DESC_com.bla.Example", category = "general") //NOI18N
        @Messages({"DN_com.bla.Example=Example Hint",
            "DESC_com.bla.Example=This is an example hint that warns about an example problem."})
        public class Example {

            @TriggerPattern(value = "$str.equals(\"\")", // Specify a pattern as needed
                    constraints = @ConstraintVariableType(variable = "$str", type = "java.lang.String"))
            @Messages("ERR_com.bla.Example=Something wrong is going on")
            public static ErrorDescription computeWarning(HintContext ctx) {
                Fix fix = new FixImpl(ctx.getInfo(), ctx.getPath()).toEditorFix();
                return ErrorDescriptionFactory.forName(ctx, ctx.getPath(), Bundle.ERR_com_bla_Example(), fix);
            }

            private static final class FixImpl extends JavaFix {

                public FixImpl(CompilationInfo info, TreePath tp) {
                    super(info, tp);
                }

                @Override
                @Messages("FIX_com.bla.Example=Fix the problem")
                protected String getText() {
                    return Bundle.FIX_com_bla_Example();
                }

                @Override
                protected void performRewrite(TransformationContext ctx) {
                    // perform the required transformation
                }
            }
        }

    The wizard should also generate "ExampleTest.java", a test for it. Unfortunately, the wizard infrastructure is not capable of handling changes related to test dependencies. 
So ExampleTest.java has a todo list at its beginning:

        /* TODO to make this test work:
         - add test dependency on Java Hints Test API (and JUnit 4)
         - to ensure that the newest Java language features supported by the IDE are available,
           regardless of which JDK you build the module with:
         -- for Ant-based modules, add "requires.nb.javac=true" into nbproject/project.properties
         -- for Maven-based modules, use dependency:copy in the validate phase to create
            target/endorsed/org-netbeans-libs-javacapi-*.jar and add it to endorseddirs
            in the maven-compiler-plugin configuration
        */

    Warning: if this is a project for which tests never existed before, you may need to close and reopen the project so that the "Unit Test Libraries" node appears - a bug in apisupport projects, as far as I can tell. Thanks to Jan Lahoda for the above rough guide.

    Read the article

  • 10 Steps to access Oracle stored procedures from Crystal Reports

    Requirements to access Oracle stored procedures from CR. The following requirements must be met in order for CR to access an Oracle stored procedure:

    1. You must create a package that defines the REF CURSOR. This REF CURSOR must be strongly bound to a static pre-defined structure (see Strongly Bound REF CURSORs vs Weakly Bound REF CURSORs). This package must be created separately, before the creation of the stored procedure. NOTE: Crystal Reports 9 native connections support Oracle stored procedures created within packages as well as Oracle stored procedures referencing weakly bound REF CURSORs. Crystal Reports 8.5 native connections support Oracle stored procedures referencing weakly bound REF CURSORs.
    2. The procedure must have a parameter that is a REF CURSOR type, because CR uses this parameter to access and define the result set that the stored procedure returns.
    3. The REF CURSOR parameter must be defined as IN OUT (read/write mode). After the procedure has opened and assigned a query to the REF CURSOR, CR performs a FETCH call for every row of the query's result. This is why the parameter must be defined as IN OUT.
    4. Parameters can only be input (IN) parameters. CR is not designed to work with OUT parameters.
    5. The REF CURSOR variable must be opened and assigned its query within the procedure.
    6. The stored procedure can only return one record set. The structure of this record set must not change based on parameters.
    7. The stored procedure cannot call another stored procedure.
    8. If using an ODBC driver, it must be the CR Oracle ODBC driver (installed by CR). Other Oracle ODBC drivers (installed by Microsoft or Oracle) may not function correctly.
    9. If you are using the CR ODBC driver, you must ensure that in the ODBC Driver Configuration setup, under the Advanced tab, the option 'Procedure Return Results' is checked ON.
    10. If you are using the native Oracle driver and hard-coded date selection within the procedure, the date selection must use either a string representation in the format 'YYYY-MM-DD' (i.e. WHERE DATEFIELD = '1999-01-01') or the TO_DATE function with the same format specified (i.e. WHERE DATEFIELD = TO_DATE('1999-01-01','YYYY-MM-DD')). For more information, refer to kbase article C2008023.
    11. Most importantly, the stored procedure must execute successfully in Oracle's SQL*Plus utility.

    If all of these conditions are met, you must next ensure you are using the appropriate database driver. Please refer to the sections in this white paper for a list of acceptable database drivers.
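
    To make requirements 1 through 5 concrete, here is a minimal PL/SQL sketch of a qualifying package and procedure; all object and column names are hypothetical:

        -- Package defining a strongly bound REF CURSOR (requirement 1).
        CREATE OR REPLACE PACKAGE report_pkg AS
          TYPE emp_rec IS RECORD (
            empno    NUMBER(4),
            ename    VARCHAR2(10),
            hiredate DATE
          );
          TYPE emp_refcur IS REF CURSOR RETURN emp_rec;  -- strongly typed
        END report_pkg;
        /

        -- Procedure with a REF CURSOR parameter declared IN OUT (requirements 2-4),
        -- opened and assigned its query inside the procedure body (requirement 5).
        CREATE OR REPLACE PROCEDURE get_emps (
          p_deptno IN     NUMBER,
          p_recs   IN OUT report_pkg.emp_refcur
        ) AS
        BEGIN
          OPEN p_recs FOR
            SELECT empno, ename, hiredate
              FROM emp
             WHERE deptno = p_deptno;
        END get_emps;
        /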

    Read the article

  • How to Make the Gnome Panels in Ubuntu Totally Transparent

    - by The Geek
    We all love transparency, since it makes your desktop so beautiful and lovely—so today we’re going to show you how to apply transparency to the panels in your Ubuntu Gnome setup. It’s an easy process, and here’s how to do it. This article is the first part of a multi-part series on how to customize the Ubuntu desktop, written by How-To Geek reader and ubergeek, Omar Hafiz. Making the Gnome Panels Transparent: Of course we all love transparency; it makes your desktop so beautiful and lovely. So you go to enable transparency in your panels: you right-click on your panel, choose Properties, go to the Background tab, and make your panel transparent. Easy, right? But instead of getting a lovely transparent panel, you often get a cluttered, ugly panel. Fortunately it can be easily fixed; all we need to do is edit the theme files. If your theme is one of those that came with Ubuntu, like Ambiance, then you’ll have to copy it from /usr/share/themes to your own .themes directory in your home folder. You can do so by typing the following command in the terminal: cp -r /usr/share/themes/theme_name ~/.themes (note: don’t forget to substitute theme_name with the name of the theme you want to fix). If your theme is one you downloaded, then it is already in your .themes folder. Now open your file manager, navigate to your home folder, and go to the .themes folder. If you can’t see it, you probably have the “View hidden files” option disabled; press Ctrl+H to enable it. In .themes you’ll find your previously copied theme folder; enter it, then go to the gtk-2.0 folder. There you may find a file named “panel.rc”, which is a configuration file that tells your panel how it should look. If you find it, rename it to “panel.rc.bak”. If you don’t find it, don’t panic! There’s nothing wrong with your system; your theme just keeps the panel configuration in the “gtkrc” file instead. Open this file with your favorite text editor; at the end of the file there is a line that looks like this: include “apps/gnome-panel.rc”. Comment out this line by putting a hash mark # in front of it, so it looks like this: # include “apps/gnome-panel.rc”. Save and exit the text editor. Now change your theme to any other one, then switch back to the one you edited. Your panel should now be transparent. Stay tuned for the second part of the series, where we’ll cover how to change the color and fonts on your panels.

    Read the article

  • Resolve SRs Faster Using RDA - Find the Right Profile

    - by Daniel Mortimer
    Introduction: Remote Diagnostic Agent (RDA) is an excellent command-line data collection tool that can aid troubleshooting / problem solving. The tool covers the majority of Oracle's vast product range, and its data collection capability is comprehensive. RDA collects data about the operating system and environment, including environment variables, kernel settings, network, o/s performance, o/s patches and much more, as well as about the Oracle products installed, including patches, logs and debug metrics, configuration and much more. In effect, RDA can obtain a snapshot of an Oracle product and its environment. Oracle Support encourages the use of RDA because it greatly reduces service request resolution time by minimizing the number of requests from Oracle Support for more information. RDA is designed to be as unobtrusive as possible; it does not modify systems in any way. It collects useful data for Oracle Support only, and a security filter is provided if required.

    Find and Use the Right RDA Profile: One problem with any tool / utility which covers a large range of products is knowing how to target it against only the products you wish to troubleshoot. RDA does not have a GUI, nor does it have an intelligent mechanism for detecting and automatically collecting data only for those Oracle products installed. Instead, you have to tell RDA what to do. There is a mind-bogglingly large number of RDA data collection modules which you can configure RDA to use. It is easier, however, to set up RDA to use a "profile". A profile consists of a list of data collection modules and predefined settings. As such, profiles can be used to diagnose a problem with a particular product or combination of products.

    How to run RDA with a profile? (<rda> represents the command you selected to run RDA, for example rda.pl, rda.cmd, rda.sh, or perl rda.pl.)
    1. Use the embedded spreadsheet to find the RDA profile which is appropriate for your problem / chosen Oracle Fusion Middleware products.
    2. Use the following command to perform the setup: <rda> -S -p <profile_name>
    3. Run the data collection: <rda>
    If you want to perform setup and run in one go, then use a command such as the following: <rda> -vnSCRP -p <profile name>

    For more information, refer to: Remote Diagnostic Agent (RDA) 4 - Profile Manual Pages [ID 391983.1]

    Additional Hints / Tips:
    1. Be careful! Profile names are case sensitive.
    2. When profiles are not used, RDA considers all existing modules by default. For example, if you have downloaded RDA for the first time and run the command <rda> -S, you will see prompts for every RDA collection module, many of which will be of no interest to you. Also, you may, in your haste to work through all the questions, forget to say "Yes" to the collection of data that is pertinent to your particular problem or product. Profiles avoid such tedium and help ensure the right data is collected at the first time of asking.
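
    Put together, a session looks something like this; the profile name below is a stand-in for whatever the spreadsheet recommends for your products:

        ./rda.sh -S -p MyFmwProfile       # interactive setup using a profile (name is illustrative)
        ./rda.sh                          # run the data collection
        ./rda.sh -vnSCRP -p MyFmwProfile  # or: setup and collect in a single pass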

    Read the article

  • CacheAdapter 2.4 – Bug fixes and minor functional update

    - by Glav
    Note: If you are unfamiliar with the CacheAdapter library and what it does, you can read all about its awesome ability to utilise memory, Asp.Net Web, Windows Azure AppFabric and memcached caching implementations via a single unified, simple to use API from here and here. The CacheAdapter library is receiving an update to version 2.4 and is currently available on Nuget here. Update: The CacheAdapter has actually just had a minor revision to 2.4.1. This significantly increases the performance and reliability of the memcached scenario under more extreme loads; general to moderate usage won't see any noticeable difference though.

    Bugs: This latest version fixes a bug that is present only in the memcached implementation and surfaces only at rare, intermittent times (making it particularly hard to find). The bug caused a cache node to be removed from the farm when errors in deserialization of cached objects occurred because serialised data was not read from the stream in its entirety. The code also contains enhancements to better surface serialization exceptions to aid in the debugging process. This is also specifically targeted at the memcached implementation, and is important when moving from something like memory or Asp.Web caching mechanisms to memcached, where the serialization rules are not as lenient. There are a few other minor bug fixes, code cleanup and a little refactoring.

    Minor feature addition: In addition to these bug fixes, many people have asked for a single setting to either enable or disable the cache. In this version, you can disable the cache by setting the IsCacheEnabled flag to false in the application configuration file, as in the example below:

        <Glav.CacheAdapter.MainConfig>
          <setting name="CacheToUse" serializeAs="String">
            <value>memcached</value>
          </setting>
          <setting name="DistributedCacheServers" serializeAs="String">
            <value>localhost:11211</value>
          </setting>
          <setting name="IsCacheEnabled" serializeAs="String">
            <value>False</value>
          </setting>
        </Glav.CacheAdapter.MainConfig>

    Your reasons to use this feature may vary (perhaps some performance testing or problem diagnosis). At any rate, disabling the cache will cause every attempt to retrieve data from the cache to result in a cache miss and return null. If you are using the ICacheProvider with the delegate/Func<T> syntax to populate the cache, this delegate method will get executed every single time. For example, when the cache is disabled, the following delegate/Func<T> code will be executed every time:

        var data1 = cacheProvider.Get<SomeData>("cache-key", DateTime.Now.AddHours(1), () =>
        {
            // With the cache disabled, this data access code is executed on every
            // attempt to get this data via the CacheProvider.
            var someData = new SomeData() { SomeText = "cache example1", SomeNumber = 1 };
            return someData;
        });

    One final note: If you access the cache directly via the ICache instance, instead of the higher level ICacheProvider API, you bypass this setting and still access the underlying cache implementation. Only the ICacheProvider instance observes the IsCacheEnabled setting. Thanks to those individuals who have used this library and provided feedback. If you have any suggestions or ideas, please submit them to the issue register on bitbucket (which is where you can grab all the source code from too).

    Read the article

  • DISA Cross Domain Enterprise Solutions on the NetBeans Platform

    - by Geertjan
    Bray 2.0 is a tool based on the NetBeans Platform that assists in creating valid Data Flow Configuration (DFC) files. The DFC Specification was developed to provide a standardized way of defining, validating, and approving data flows for use on cross-domain guarding solutions. A DFC document specifies key entities such as security domains, guards that facilitate data between security domains, data flows that describe how data travels between security domains, filters that transform and validate the data, and more. Related info: http://www.disa.mil/Services/Information-Assurance/Cross-Domain-Solutions. The Bray product is in development at Fulcrum IT (http://www.fulcrumco.com). The DFC Specification and Bray were developed in support of the US Department of Defense. Bray 2.0 marks the first release of Bray on the NetBeans Platform and utilizes a number of features that are core to the NetBeans Platform: Modular plugability. Bray consumers can integrate their own tools, file types, and more into the product with relative ease. Robust UI. The NetBeans Platform's intuitive UI makes it easy to access and manipulate multiple aspects of a DFC. Explorer. The Explorer is a key component that makes the DFC XML easy to traverse and edit, and makes errors easy to find. Context-sensitive help. JavaHelp can be readily integrated for the product as well as all the UI within it. Editors. Any external file can be added to a DFC. Users can register their own editors or use the provided NetBeans editors to edit files. Printing. The NetBeans Platform Print API makes it easy to determine what should be printed and how.   A screenshot: Bray 2.0 provides a lot of key features for developing valid, robust DFC files:  XML validation. A DFC can be validated against the DFC schema specification. DFC Check List. An interactive, minimal guide for creating a complete DFC. Summary Window. The Summary Window functions like the Navigator in NetBeans IDE: the current "item of interest" is checked against various business rules, providing the ability to quickly find and fix errors. Change Log. Bray audits every change to a DFC and places it in a change log for users to peruse. Comments. Users can optionally add comments for other users to see. Digital signatures. DFC files can be digitally signed. A signature history and signature validation are provided in Bray. Pluggable security schemes. Bray ships with plain text and IC-ISM security schemes. If needed, users can integrate additional ones.  ...and more to come! New features for Bray are constantly in development, including use of the NetBeans Visual Library, language support, and more. More screenshots:

    Read the article

  • October 2013 Oracle University Round-Up: New Training & Certifications

    - by Breanne Cooley
    Here are the highlights of what is happening this month at Oracle University.  New Technology Overview Courses: Cloud, Big Data and Security. Learn about the latest technology solutions that can transform your business. These three Training On Demand courses are taught by industry experts and help you develop an understanding of how Oracle technologies can make a positive impact on your organization:  Oracle Cloud Overview,  Oracle Big Data Overview, Oracle Security Overview.  New Cloud Application Foundation Courses. Check out our brand new 12c courses for WebLogic Server administrators and Coherence developers:  Oracle WebLogic Server 12c: Administration I, Oracle WebLogic Server 12c: Administration II, Oracle Coherence 12c: New Features.  Oracle Database 12c Courses. Our Oracle Database 12c training is becoming very popular. Here are this month's featured courses:  Oracle Database 12c: New Features for Administrators, Oracle Database 12c: Administration Workshop,  Oracle Database 12c: Install and Upgrade Workshop, Oracle Database 12c: Admin, Install and Upgrade Accelerated.  Validate your expertise and add value by earning an Oracle Database 12c Certification.  New Certifications for MySQL. Watch our two new videos to find out what's new with Oracle MySQL Certifications. 1) Oracle MySQL 5.6 Certification: What's New for Database Administrators.  Recommended training:  MySQL for Beginners, MySQL for Database Administrators.  2) Oracle MySQL 5.6 Certification: What's New for Developers. Recommended training:  MySQL for Beginners, MySQL for Developers. New Training & Certification for Oracle Applications: JD Edwards 9.1 Training. Additional JD Edwards EnterpriseOne 9.1 training is now available for administrators, developers and implementation team members. Cross Application Training:  JD Edwards EnterpriseOne Common Foundation Rel 9.x.  Human Capital Management Training:  JD Edwards EnterpriseOne Payroll for Canada Rel 9.x, JD Edwards EnterpriseOne Payroll for US Rel 9.x, JD Edwards EnterpriseOne Payroll Accelerated for Canada Rel 9.x, JD Edwards EnterpriseOne Payroll Accelerated for US Rel 9.x.  Financial Management Training:  JD Edwards EnterpriseOne Accounts Receivable Rel 9.x, JD Edwards EnterpriseOne Financial Report Writing Rel 9.x.  Knowledge Management 8.5 Training. Oracle Knowledge 8.5 training is now available for analysts interested in learning how to quickly spot trends in content processing and system usage with analytics dashboards: Knowledge Analytics Rel 8.5.  Taleo Training. Updated Taleo training is now available. Taleo Enterprise Edition (TEE) business users can learn how to create more efficient reports. Recruiters will learn how to efficiently and effectively use Taleo Business Edition (TBE) Recruit:  Taleo (TEE): Advanced Reporting, Taleo (TBE): Recruit - End User Fundamentals.  New Training for Oracle Retail 13.4.1. Updated training for Retail Predictive Application Server and Retail Demand Forecasting is now available:  RPAS Administration and Configuration Fundamentals, RPAS Technical Essentials: Fusion Client 13.4.1, Retail Demand Forecasting (RDF) Business Essentials 13.4.1.  View all available training courses, learning paths and certifications at education.oracle.com, or contact your local education representative to learn more about Oracle University's education solutions. See you in class!  -Oracle University Marketing Team 

    Read the article

  • VirtualBox 3.2 is released! A Red Letter Day?

    - by Fat Bloke
    Big news today! A new release of VirtualBox packed full of innovation and improvements. Over the next few weeks we'll take a closer look at some of these new features in a lot more depth, but today we'll whet your appetite with the headline descriptions. To start with, we should point out that this is the first Oracle-branded version, which makes today a real red-letter day ;-)

    Oracle VM VirtualBox 3.2: Version 3.2 moves VirtualBox forward in 3 main areas (handily, all beginning with "P"): performance, power and supported guest operating system platforms. Let's take a look:

    Performance
    - Latest Intel hardware support - Harnessing the latest in chip-level support for virtualization, VirtualBox 3.2 supports the new Intel Core i5 and i7 processors and the Intel Xeon processor 5600 series, with support for Unrestricted Guest Execution bringing faster boot times for everything from Windows to Solaris guests;
    - Large Page support - Reducing the size and overhead of key system resources, Large Page support delivers increased performance by enabling faster lookups and shorter table creation times;
    - In-hypervisor Networking - Significant optimization of the networking subsystem has reduced context switching between guests and host, increasing network throughput by up to 25%;
    - New Storage I/O subsystem - VirtualBox 3.2 offers a completely re-worked virtual disk subsystem which utilizes asynchronous I/O to achieve high performance whilst maintaining high data integrity;
    - Remote Video Acceleration - The unique built-in VirtualBox Remote Display Protocol (VRDP), which is primarily used in virtual desktop infrastructure deployments, has been enhanced to deliver video acceleration. This delivers a rich user experience coupled with reduced computational expense, which is vital when servers are running hundreds of virtual machines;

    Power
    - Page Fusion - Traditional page sharing techniques have suffered from long and expensive cache construction as pages are scrutinized as candidates for de-duplication. Taking a smarter approach, VirtualBox Page Fusion uses intelligence in the guest virtual machine to determine much more rapidly and accurately those pages which can be eliminated, thereby increasing the capacity or VM density of the system;
    - Memory Ballooning - Ballooning provides another method to increase VM density by allowing the memory of one guest to be recouped and made available to others;
    - Multiple Virtual Monitors - VirtualBox 3.2 now supports multi-headed virtual machines with up to 8 virtual monitors attached to a guest. Each virtual monitor can be a host window, or be mapped to the host's physical monitors;
    - Hot-plug CPUs - Modern operating systems, such as Windows Server 2008 x64 Data Center Edition or the latest Linux server platforms, allow CPUs to be dynamically inserted into a system to provide incremental computing power while the system is running. Version 3.2 introduces support for hot-plug vCPUs, allowing VirtualBox virtual machines to be given more power with zero downtime of the guest (a command-line sketch follows at the end of this post);
    - Virtual SAS Controller - VirtualBox 3.2 now offers a virtual SAS controller, enabling it to run the most demanding of high-end guests;
    - Online Snapshot Merging - Snapshots are powerful but can eat up disk space and need to be pruned from time to time. Historically, machines have needed to be turned off to delete or merge snapshots, but with VirtualBox 3.2 this operation can be done whilst the machines are running. This allows sophisticated system management with minimal interruption of operations;
    - OVF Enhancements - VirtualBox has supported the OVF standard for virtual machine portability for some time. Now with 3.2, VirtualBox-specific configuration data is also stored in the standard, allowing richer virtual machine definitions without compromising portability;
    - Guest Automation - The Guest Automation APIs allow host-based logic to drive operations in the guest;

    Platforms
    - USB Keyboard and Mouse - Support for more guests that require USB input devices;
    - Oracle Enterprise Linux 5.5 - Support for the latest version of Oracle's flagship Linux platform;
    - Ubuntu 10.04 ("Lucid Lynx") - Support for both the desktop and server versions of the popular Ubuntu Linux distribution;

    And as a man once said, "just one more thing"...
    - Mac OS X (experimental) - On Apple hardware only, support for creating virtual machines that run Mac OS X.

    All in all this is a pretty powerful release packed full of innovation and speedups. So what are you waiting for?  -FB 
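
    Since hot-plug vCPUs are one of the headline features, here is a rough command-line sketch; the VM name is made up, and it is worth checking VBoxManage --help for the exact flags in your build:

        # Enable CPU hot-plugging on a powered-off VM (the name "demo-vm" is illustrative)
        VBoxManage modifyvm "demo-vm" --cpuhotplug on --cpus 4

        # With the VM running, bring an additional vCPU online (IDs start at 0)
        VBoxManage controlvm "demo-vm" plugcpu 1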

    Read the article

  • Enterprise Manager in EPM 11.1.2.x...a game of hide and seek!

    - by THE
    Guest article by Maurice Bauhahn: Users of Oracle Hyperion Enterprise Performance Management 11.1.2.0 and 11.1.2.1 may puzzle over why the URL http://<servername>:7001/em does not conjure up Enterprise Manager Fusion Middleware Control. This powerful tool has been installed by default... but WebLogic may not have been 'extended' to allow you to call it up (we are hopeful this 'extend' step will not be needed with 11.1.2.2). The explanation is on pages 425 and following of the following document: http://www.oracle.com/technetwork/middleware/bi-foundation/epm-tips-issues-1-72-427329.pdf. A close look at the screen dumps in that section reveals a somewhat scary prospect, however: the non-AdminServer servlets had all failed (see the red down-arrow icons to the right of their names) after the configuration! Of course you would want to avoid that scenario! A rephrasing of the instructions might help:

    1. Ensure the WebLogic AdminServer is not running (in a default scenario that would mean port 7001 is not active).
    2. Ensure you have logged into the computer as the installing owner of EPM.
    3. Since Enterprise Manager uses a LOT of resources, be sure that there is adequate free RAM to accommodate the added load.
    4. On the machine where the WebLogic AdminServer is set up (typically the Foundation Services machine), run \Oracle\Middleware\wlserver_10.3\common\bin\config (config.sh on Unix).
    5. Select the 'Extend an existing WebLogic domain' option, and click the 'Next' button.
    6. Select the domain being used by EPM System. Typically, the default domain is created under /Oracle/Middleware/user_projects/domains and is named EPMSystem. Click 'Next'.
    7. Under 'Extend my domain automatically to support the following added products', place a check mark before 'Oracle Enterprise Manager - 11.1.1.0 [oracle_common]' to select it.
    8. Continue accepting the defaults by clicking 'Next' on each page until, on the last page, you click 'Extend'. The system will grind for a few minutes while it configures (deploys?) EM.
    9. Start the AdminServer.

    Sometimes there is contention in the startup order of the various servlets (resulting in some not coming up). To avoid that problem on Microsoft Windows machines you may start and stop services via the following command-line commands, analogous to those run on Linux/Unix (these more carefully space out the timings of these events):

        Ensure EPM is up:      \Oracle\Middleware\user_projects\epmsystem1\bin\start.bat
        Ensure WebLogic is up: \Oracle\Middleware\user_projects\domains\EPMSystem\bin\startWebLogic.cmd
        Shut down WebLogic:    \Oracle\Middleware\user_projects\domains\EPMSystem\bin\stopWebLogic.cmd
        Shut down EPM:         \Oracle\Middleware\user_projects\epmsystem1\bin\stop.bat

    Now you should be able to more successfully troubleshoot with the EM tool.

    Read the article

  • Create a Remote Git Repository from an Existing XCode Repository

    - by codeWithoutFear
    Introduction: Distributed version control systems (VCSs), like Git, provide a rich set of features for managing source code.  Many development tools, including XCode, provide built-in support for various VCSs.  These tools provide simple configuration with limited customization to get you up and running quickly while still providing the safety net of basic version control. I hate losing (and re-doing) work.  I have OCD when it comes to saving and versioning source code.  Save early, save often, and commit to the VCS often.  I also hate merging code.  Smaller and more frequent commits enable me to minimize merge time and effort as well. The workflow I prefer, even for personal exploratory projects, is:

    1. Make small local changes to the codebase to create an incrementally improved (and working) system.
    2. Commit these changes to the local repository.  Local repositories are quick to access, function even while offline, and provide the confidence to continue making bold changes to the system.  After all, I can easily recover to a recent working state.
    3. Repeat 1 & 2 until the codebase contains "significant" functionality and I have connectivity to the remote repository.
    4. Push the accumulated changes to the remote repository.  The smaller the change set, the less likely extensive merging will be required.  Smaller is better, IMHO.

    The remote repository typically has a greater degree of fault tolerance and active management dedicated to it.  This can be as simple as a network share that is backed up nightly or as complex as dedicated hardware with specialized server-side processing and significant administrative monitoring. XCode's out-of-the-box Git integration enables steps 1 and 2 above.  Time Machine backups of the local repository add an additional degree of fault tolerance, but do not support collaboration or take advantage of managed infrastructure such as on-premises or cloud-based storage.

    Creating a Remote Repository: These are the steps I use to enable the full workflow identified above.  For simplicity the "remote" repository is created on the local file system.  This location could easily be on a mounted network volume.

    Create a Test Project: My project is called HelloGit and is located at /Users/Don/Dev/HelloGit.  Be sure to commit all outstanding changes.  XCode always leaves a single changed file for me after the project is created and the initial commit is submitted.

    Clone the Local Repository: We want to clone the XCode-created Git repository to the location where the remote repository will reside; in this case it will be /Users/Don/Dev/RemoteHelloGit. Open the Terminal application and clone the local repository to the remote repository location:

        git clone /Users/Don/Dev/HelloGit /Users/Don/Dev/RemoteHelloGit

    Convert the Remote Repository to a Bare Repository: The remote repository only needs to contain the Git database; it does not need a checked-out branch or local files.

        cd /Users/Don/Dev/RemoteHelloGit   # go to the remote repository folder
        git config --bool core.bare true   # indicate the repository is "bare"
        rm -R *                            # remove files, leaving the .git folder
        git remote rm origin               # remove the "origin" remote

    Configure the Local Repository: The local repository should reference the remote repository.  The remote name "origin" is used by convention to indicate the originating repository; this is set automatically when a repository is cloned.  We will use the "origin" name here to reflect that relationship. 
Go to the local repository folder and add the remote:

        cd /Users/Don/Dev/HelloGit
        git remote add origin /Users/Don/Dev/RemoteHelloGit

    Test Connectivity: Any changes made to the local Git repository can be pushed to the remote repository, subject to the merging rules Git enforces:

        date > date.txt                      # create a new local file
        git add date.txt                     # add the new file to the local index
        git commit -m "New file: date.txt"   # commit the change to the local repository
        git push origin master               # push the change to the remote repository

    Now you can save, commit, and push/pull to your OCD heart's content! Code without fear! --Don

    Read the article

  • Book Review: Oracle ADF 11gR2 Development Beginner's Guide

    - by Grant Ronald
    Packt Publishing asked me to review Oracle ADF 11gR2 Development Beginner's Guide by Vinod Krishnan, so on a couple of long flights I managed to get through the book in a couple of sittings. One point to make clear before I go into the review: having authored "The Quick Start Guide to Fusion Development: JDeveloper and Oracle ADF", I've written a book which covers the same topic/beginner level.  I also think it's worth stating up front that I applaud anyone who has gone through the effort of writing a technical book. So well done Vinod.  But on to the review: The book itself is a good breakdown of topic areas.  Vinod starts with a quick tour around the IDE, which is an important step given that all the work you do will be through the IDE.  The book then goes through the general path that I tend to always teach: a quick overview demo, ADF BC, validation, binding, UI, task flows and then the various "add on" topics like security, MDS and advanced topics.  So it covers the right topics in, IMO, the right order.  I also think the writing style flows nicely as well - it's a relatively easy book to read, it doesn't get too formal, and the "Have a go hero" hands-on sections will be useful for many. That said, I did pick out a number of styles/themes in the writing that I found went against the idea of a beginner's guide.  For example, in writing my book, I tried carefully to avoid talking about topics not yet covered or not yet relevant at that point in someone's learning.  So, if I was a new ADF developer reading this book, did I really need to know about ADFBindingFilter and the DataBindings.cpx file on page 58? I've only just learned how to do a drag-and-drop simple application, so showing me XML configuration files relevant to the JSF/ADF lifecycle is probably going to scare me off! I found this in a couple of places; for example, the security chapter starts on page 219, but by page 222 (and most of the preceding pages are hands-on steps) we're diving into web.xml, weblogic.xml, adf-config.xml, jsp-config.xml and jazn-data.xml.  Don't get me wrong, I'm not saying you shouldn't know this, but I feel you have to get people onto a strong grounding in the concepts before showing them implementation files.  If I've just learned what ADF Security is, is "The initialization parameter remove.anonymous.role is set to false for the JpsFilter filter as this filter is the first filter defined in the file" really going to help me? The other theme I found which I felt didn't work was that a couple of the chapters descend into a reference guide.  For example, page 159 onwards basically lists UI components and their properties, and page 87 onwards lists the attributes of ADF BC in pretty much the same way as the online help or developer guide.  I have a personal aversion to any sort of help that says pretty much what the attribute name is, e.g. "Precision Rule: this option is used to set a strict precision rule", or "Property Set: this is the property set that has to be applied to the attribute". Hmmm, I think I could have worked that out myself; what I would want to know in a beginner's guide is what these are for and what I might use them for... and if I don't need them to create an emp/dept example, then maybe it's better to leave them out. All that said, would the book help me? Yes, it would.  It's obvious that Vinod knows ADF and his style is relatively easy going, and the book covers all that it has to, but I think it could have done a better job on the educational side of guiding beginners.

    Read the article

  • Columnstore Case Study #1: MSIT SONAR Aggregations

    - by aspiringgeek
    Preamble: This is the first in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014.  Many of these can be found in this deck along with details such as internals, best practices, caveats, etc.  The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative.

    Why Columnstore? If we’re looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we’re asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc.—SQL Server has never had a good mechanism—until columnstore. Columnstore indexes were introduced in SQL Server 2012; however, they're still largely unknown. Some adoption blockers existed, yet columnstore was nonetheless a game changer for many apps.  In SQL Server 2014, potential blockers have been largely removed & they're going to profoundly change the way we interact with our data.  The purpose of this series is to share the performance benefits of columnstore & to document why columnstore is a compelling reason to upgrade to SQL Server 2014.

    App: MSIT SONAR Aggregations. At MSIT, performance & configuration data is captured by SCOM. We archive much of the data in a partitioned data warehouse table in SQL Server 2012 for reporting via an application called SONAR.  By definition, this is a primary use case for columnstore—report queries requiring aggregation over large numbers of rows.  New data is refreshed each night by an automated table partitioning mechanism—a best-practices scenario for columnstore.

    The Win: Compared to performance using classic indexing, which resulted in the expected query plan selection including partition elimination, the SQL Server 2012 nonclustered columnstore increased query performance significantly.  Logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more.  Other than creating the columnstore index, no special modifications or tweaks to the app or database schema were necessary to achieve the performance improvements.  Existing nonclustered indexes were rendered superfluous & were deleted, thus mitigating maintenance challenges such as defragging as well as conserving disk capacity.

    Details: The table provides the raw data & summarizes the performance deltas:

                                        Logical Reads (8K pages)   CPU (ms)    Duration (ms)
        Columnstore                     160,323                    20,360      9,786
        Conventional Table & Indexes    9,053,423                  549,608     193,903
        Improvement factor (Δ)          x56                        x27         x20

    The charts provide additional perspective on this data.  "Conventional vs. Columnstore Metrics" documents the raw data; note on this linear display the magnitude of the conventional index performance vs. columnstore.  The “Metrics (Δ)” chart expresses these values as a ratio.

    Summary: For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing.  I have documented here, in the first of a series of reports on columnstore implementations, results from an initial implementation at MSIT in which logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more.  Subsequent features in this series document performance enhancements that are even more significant. 
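
    For reference, the only schema change involved was the creation of a nonclustered columnstore index, which in SQL Server 2012 looks along these lines; the table and column names below are placeholders, not the actual SONAR schema:

        -- Hypothetical names; SQL Server 2012 nonclustered columnstore syntax.
        CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_PerfSample
        ON dbo.PerfSample (CounterId, SampleTime, SampleValue);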

    Read the article

  • PECL OCI8 2.0 Production Release Announcement

    - by cj
    The PHP OCI8 2.0.6 extension for Oracle Database is now "production" status. The source code is available on PECL. It can be used immediately to update your OCI8 extension in PHP 5.2 and later versions. The extension compiles with Oracle 10.2 or later client libraries; Oracle's standard cross-version database connectivity applies. OCI8 2.0 and PHP 5.5.5 RPMs for Oracle and Red Hat Linux are available from oss.oracle.com, and Windows DLLs are available on PECL for PHP 5.3, PHP 5.4 and PHP 5.5. OCI8 2.0 source code will also be automatically included in the next major version of PHP.

    New Functionality: Oracle Database 12c Implicit Result Set support. IRSs make it easy to pass query results back from stored PL/SQL procedures or anonymous PL/SQL blocks. Individual IRS statement resources, each corresponding to a single query, can be obtained with the new function oci_get_implicit_resultset(). These 'child' statement resources can be passed to any oci_fetch_* function. See Using PHP and Oracle Database 12c Implicit Result Sets and the PHP Manual: oci_get_implicit_resultset(). Also new: DTrace dynamic trace static probes. The well-respected DTrace tracing framework is available on a number of platforms, including Oracle Linux. PHP OCI8 static user-space probes can be enabled with PHP's --enable-dtrace configuration option. See Using PHP DTrace on Oracle Linux; documentation is also available in the PHP Manual under OCI8 and DTrace Dynamic Tracing.

    Improved Functionality: using oci_execute($s, OCI_NO_AUTO_COMMIT) for a SELECT no longer unnecessarily initiates an internal ROLLBACK during connection close. This can improve overall scalability by reducing "round trips" between PHP and the database.

    Changed Functionality: PHP OCI8 2.0's minimum prerequisites are now PHP 5.2 and Oracle client library 10.2. Later versions of both are usable and, in fact, recommended. Use the older PHP OCI8 1.4.10 extension when using PHP 4.3.9 through PHP 5.1.x, or when only Oracle Database 9.2 client libraries are available. Error handling in the oci_set_*($connection, ...) metadata setting calls is fixed so that oci_error($connection) works for these calls. Note: the old, deprecated function aliases like ocilogon still exist but are not recommended for new applications.

    Phpinfo() Changes: some cosmetic changes were made to the output of php --ri oci8 and the phpinfo() function. The oci8.event and oci8.connection_class values are now shown only when the Oracle client libraries support the respective functionality. Connection statistics are now in a separate phpinfo() table. The Temporary LOB and Collection support status lines in phpinfo() output were removed; these two features have always been enabled since 2007.

    Oci_internal_debug() Changes: the oci_internal_debug() function is now a no-op. Use PHP's --enable-dtrace functionality with DTrace or SystemTap instead.

    References: OCI8 extension source code and Windows DLLs (http://pecl.php.net/package/oci8), Oracle Linux RPMs (oss.oracle.com), the PHP Manual for OCI8, OCI8 and DTrace Dynamic Tracing, and the Oracle OpenWorld conference paper "What's New in Oracle Database 12c for PHP".
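    For anyone wanting to try the production release straight away, here is a hedged sketch of the PECL route on Linux; the php.ini path is an assumption and will vary by distribution.

        # Sketch: picking up OCI8 2.0 from PECL (assumes PHP 5.2+ and an
        # Oracle 10.2+ client already installed; the ini path is illustrative).
        pecl install oci8                            # prompts for the Oracle client location
        echo "extension=oci8.so" | sudo tee -a /etc/php.ini
        php --ri oci8 | head -5                      # confirm the reported OCI8 version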

    Read the article

  • You Probably Already Have a “Private Cloud”

    - by BuckWoody
    I've mentioned before that I'm not a fan of the word "Cloud". It's too marketing-oriented, gimmicky and non-specific. A better definition (in many cases) is "Distributed Computing": some or all of the computing functions are handled somewhere other than under your specific control.

    But there is a current use of the word "Cloud" that does not necessarily mean the computing is done somewhere else. In fact, it's a vector of Cloud Computing that is better termed "Utility Computing". This has to do with the provisioning of a computing resource: the setup, configuration, management, balancing and so on that is needed so that a user – who might actually be a developer – can do some computing work. To that person, the resource is just "there" and works as they expect, like the phone system or any other utility.

    The interesting thing is, you can do this yourself. In fact, you probably already have been, or are now. It's got a cool new trendy term – "Private Cloud" – but the fact is, if you have your setup automated, HA and DR handled, balancing and performance tuning done, and a process wrapped around it all, you can call yourself a "Cloud Provider". A good example here is your e-mail system. Your users – pretty much your whole company – just log into e-mail and expect it to work. To them, you are the "Cloud" provider. On your side, the more you automate and provision the system, the more you act like a Cloud Provider. Another example is a database server. In this case, the "end user" is usually the development team, or perhaps your SharePoint group and so on. The data professionals configure, monitor, tune and balance the system all the time; the more this is automated, the more you're acting like a Cloud Provider. Lots of companies help you do this in your own data centers, from VMware to IBM and many others. Microsoft's offering here is based around System Center – a "cloud in a box" provisioning system that's actually pretty slick.

    The most difficult part of operating a Private Cloud is probably the scale factor. In the case of Windows and SQL Azure, we handle this in multiple ways – and we're happy to share how we do it. It's not magic, and the algorithms for balancing (like the one we started with, called Paxos) are well known. The key is the knowledge, infrastructure and people. Sure, you can do this yourself, and in many cases, such as top-secret or private systems, you probably should. But there are times when you should evaluate using Azure or other vendors, or even multiple vendors to spread your risk. All of this should be based on client need, not on what you already know how to do.

    So congrats on your new role as a "Cloud Provider". If you run an e-mail system or a database platform, you can put that right on your resume.

    Read the article

  • Controlling server configurations with IPS

    - by barts
    I recently received a customer question about how best to control which packages and which versions are used on production Solaris 11 servers. The customer had considered pointing each server at its own software repository - a common initial approach. A simpler method leverages one of the dependency mechanisms we introduced with Solaris 11, but it is not immediately obvious to most people.

    Typically, internal IT departments qualify particular versions for production use. What this customer wanted was to ensure that their operations staff only installed internally qualified versions of Solaris on their servers. The easiest way of doing this is to leverage the 'incorporate' type of dependency in a small package defined for each server type. From the reference "Packaging and Delivering Software With the Image Packaging System in Oracle® Solaris 11.1":

    The incorporate dependency specifies that if the given package is installed, it must be at the given version, to the given version accuracy. For example, if the dependent FMRI has a version of 1.4.3, then no version less than 1.4.3 or greater than or equal to 1.4.4 satisfies the dependency. Version 1.4.3.7 does satisfy this example dependency. The common way to use incorporate dependencies is to put many of them in the same package to define a surface in the package version space that is compatible. Packages that contain such sets of incorporate dependencies are often called incorporations. Incorporations are typically used to define sets of software packages that are built together and are not separately versioned. The incorporate dependency is heavily used in Oracle Solaris to ensure that compatible versions of software are installed together. An example incorporate dependency is:

        depend type=incorporate fmri=pkg:/driver/network/ethernet/[email protected],5.11-0.175.0.0.0.2.1

    So, to make sure only qualified versions are installed on a server, create a package that will be installed on the machines to be controlled. This package will contain an incorporate dependency on the "entire" package, which controls the various components used to build Solaris. Every time a new version of Solaris is qualified for production use, create a new version of this package specifying the new version of "entire" that was qualified. Once this new control package is available in the repositories configured on the production server, the pkg update command will update that system to the specified version. Unless a new version of the control package is made available, pkg update will report that no updates are available, since no version of the control package can be installed that satisfies the incorporate constraint.

    Note that, if desired, the same package can be used to specify which packages must be present on the system by adding either "require" or "group" dependencies; the latter permits removal of some of the packages, the former does not. More details can be found in either the section 5 pkg man page or the previously mentioned reference document.

    This technique of using package dependencies to constrain system configuration leverages the SAT solver at the heart of IPS, and is basic to how we package Solaris itself.
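    To make this concrete, a control-package manifest might look like the hedged sketch below; the package name, publisher, repository URI, and qualified "entire" version are illustrative assumptions, not values from the original question.

        set name=pkg.fmri value=pkg://site/site/qualified-solaris@1.0
        set name=pkg.summary value="Pins servers to the qualified Solaris build"
        depend type=incorporate fmri=pkg:/entire@0.5.11,5.11-0.175.0.0.0.2.1

    Publish the manifest to your internal repository and install the package once on each production server:

        # Repository URI and file name are assumptions.
        pkgsend publish -s http://pkgrepo.example.com/site qualified-solaris.p5m
        pkg install qualified-solaris
        # Thereafter 'pkg update' can only move to an 'entire' version that a
        # newer qualified-solaris package permits.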

    Read the article

  • questions about dual-boot install Ubuntu 10.04 and Windows 7 on same hard drive

    - by Tim
    I'd like to dual-boot install Ubuntu 10.04 on the same hard drive as Windows 7, which has already been installed.

    As to sources on the internet: I found a website, iinet, about dual-boot installation of Ubuntu 10.10 and Windows 7 on the same hard drive, which I think is more specific than the Ubuntu Community page, which doesn't target specific versions of the OSes. Since I am installing Ubuntu 10.04 instead of 10.10, my question is whether their installers are the same or almost the same, and whether I can follow iinet for my dual-boot installation. Or are there better websites with information about dual-boot installation of Ubuntu 10.04 and Windows 7?

    As to shrinking Windows partitions to make free space for Ubuntu partitions: iinet uses the partitioning tool in Ubuntu's installer to shrink the Windows partition. But I have read on many websites that the partitioning tool in Ubuntu's installer cannot guarantee shrinking Windows 7 partitions successfully, so they generally recommend shrinking Windows partitions from within Windows itself using its own tools. For example, the Ubuntu Community page says: "Some people think that the Windows partition must be resized only from within Windows Vista and Windows 7 using the shrink/resize option. ... If you use GParted Partition Editor in the Ubuntu Live CD be careful." So I was wondering which way to go in my situation.

    As to a partition for bootloader files: in iinet, I don't see a partition created and dedicated to boot files (i.e. GRUB files). However, I have seen many websites strongly suggest a separate boot partition for GRUB files, especially to keep them separate from, and protected against, the installed OS files. I was wondering which way I should choose, and why.

    As to installing the bootloader GRUB: in iinet, installing GRUB only requires specifying the hard drive device for the bootloader installation. However, in ubuntuguide (for more than 2 OSes and Ubuntu 9.04), some commands must be run in order to put GRUB configuration files in the MBR and the OS partition for the chain-load process (where to find the files for the next stage). In the Ubuntu Community page there are some related sentences which I don't quite understand how to apply in practice:

    "the only thing in your computer outside of Ubuntu that needs to be changed is a small code in the MBR (Master Boot Record) of the first hard disk. The MBR code is changed to point to the boot loader in Ubuntu. If you have a problem with changing the MBR code, you might prefer to just install the code for pointing to GRUB to the first sector of your Ubuntu partition instead. If you do that during the Ubuntu installation process, then Ubuntu won't boot until you configure some other boot manager to point to Ubuntu's boot sector. Windows Vista no longer utilizes boot.ini, ntdetect.com, and ntldr when booting. Instead, Vista stores all data for its new boot manager in a boot folder. Windows Vista ships with a command line utility called bcdedit.exe, which requires administrator credentials to use. You may want to read http://go.microsoft.com/fwlink/?LinkId=112156 about it. Using a command line utility always has its learning curve, so a more productive and better job can be done with a free utility called EasyBCD, developed and mastered already during the times of the Vista beta. EasyBCD is user friendly and many Vista users highly recommend it."

    In what is quoted above, I was wondering how exactly I should change the MBR code to point to the bootloader in Ubuntu. And if I fail to change the MBR code, are bcdedit.exe and EasyBCD in Windows the other boot managers being suggested? With the three sources above, which one should I follow? Thanks and regards
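    For reference, the change the quoted passage describes (pointing the MBR at Ubuntu's bootloader) is what the installer does when you select the first disk as the GRUB device, and it can be redone later from a running Ubuntu 10.04 system. A hedged sketch, assuming the first disk is /dev/sda:

        # Sketch: (re)install GRUB's first-stage loader into the MBR of the
        # first disk; /dev/sda (the whole disk, not a partition) is an assumption.
        sudo grub-install /dev/sda
        sudo update-grub        # regenerates the menu; os-prober picks up Windows 7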

    Read the article

  • EPM troubleshooting Utilities

    - by THE
    (in via Maurice) "Are you keeping up to date with the latest troubleshooting utilities introduced with EPM 11.1.2.2? These are typically not described in the product documentation, so you might miss references to them. The following five utilities may be run from the command line.

    (1) Deployment Report was introduced with EPM 11.1.2.2 (11 April 2012). It details logical web addresses, web servers, application ports, database connections, user directories, database repositories configured for the EPM system, data directories used by EPM system products, instance directories, FMW homes, deployment history, et cetera. It also helps to keep you honest about whether you made changes to the system and at what times! Download Shared Services patch 13530721 to get the backported functionality in EPM 11.1.2.1. Run it from:
        /Oracle/Middleware/user_projects/epmsystem1/bin/epmsys_registry.sh report deployment (on Unix/Linux)
        \Oracle\Middleware\user_projects\epmsystem1\bin\epmsys_registry.bat report deployment (on Microsoft Windows)
    The output is saved under \Oracle\Middleware\user_projects\epmsystem1\diagnostics\reports\deployment_report.html

    (2) Log Analysis has received more "press". It was released with EPM 11.1.2.3 and helps the user to slice and dice EPM logs. It has many parameters, which are documented when it is run without parameters, when run with the -h parameter, or in the 'Readme' file. It has also been released as a standalone utility for EPM 11.1.2.3 and earlier versions. (Sign in to My Oracle Support, click the 'Patches & Updates' tab, enter the patch number 17425397, and click the Search button. Download the appropriate platform-specific zip file, unzip, and read the 'Readme' file. Note that you must provide a proper value for the JAVA_HOME environment variable [a pointer to the parent directory of the Java /bin subdirectory] in the loganalysis.bat | .sh file, and use the -d parameter when running standalone.) Run it from:
        /Oracle/Middleware/user_projects/epmsystem1/bin/loganalysis.sh -h (on Unix/Linux)
        \Oracle\Middleware\user_projects\epmsystem1\bin\loganalysis.bat -h (on Microsoft Windows)
    The output is saved under the \Oracle\Middleware\user_projects\epmsystem1\diagnostics\reports\ subdirectory.

    (3) The Registry Cleanup command may be used (without fear!) to clean up various corruptions which can affect the Hyperion (database-based) repository. Run it from:
        /Oracle/Middleware/user_projects/epmsystem1/bin/registry-cleanup.sh (on Unix/Linux)
        \Oracle\Middleware\user_projects\epmsystem1\bin\registry-cleanup.bat (on Microsoft Windows)
    The actions are described on the command line.

    (4) The Remove Instance command is only used if there are two or more instances configured on one computer and one of them should be deleted. Run it from:
        /Oracle/Middleware/user_projects/epmsystem1/bin/remove-instance.sh (on Unix/Linux)
        \Oracle\Middleware\user_projects\epmsystem1\bin\remove-instance.bat (on Microsoft Windows)

    (5) The Reset Configuration Tool was introduced with EPM 11.1.2.2. It nullifies Shared Services Hyperion Registry settings so that a service may be reconfigured. You may locate the values to substitute for <product> or <task> by scanning registry.html (generated by running epmsys_registry.bat | .sh). Find productNAME in the INSTANCE_TASKS_CONFIGURATION and SYSTEM_TASKS_CONFIGURATION nodes and identify tasks by property pairs that have values of 'Configured' or 'Pending'. Run it from:
        /Oracle/Middleware/user_projects/epmsystem1/bin/resetConfigTask.sh -product <product> -task <task> (on Unix/Linux)
        \Oracle\Middleware\user_projects\epmsystem1\bin\resetConfigTask.bat -product <product> -task <task> (on Microsoft Windows)"

    Thanks to Maurice for this collection of utilities.
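    Tying the first two utilities together, here is a hedged example of a quick health-check session on Linux; the epmsystem1 instance path comes from the examples above, and the JAVA_HOME value is an assumption needed only for the standalone Log Analysis utility.

        # Sketch: generate a deployment report, then list Log Analysis options.
        # Instance path is from the article; JAVA_HOME location is an assumption.
        export JAVA_HOME=/usr/java/latest
        /Oracle/Middleware/user_projects/epmsystem1/bin/epmsys_registry.sh report deployment
        /Oracle/Middleware/user_projects/epmsystem1/bin/loganalysis.sh -h
        # Reports land under .../epmsystem1/diagnostics/reports/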

    Read the article

  • How to give my user permission to add/edit files on local apache server? [duplicate]

    - by Logan
    Possible Duplicate: How to make Apache run as current user

    I'm setting up my local test server again, and I seem to have forgotten how to successfully set up the LAMP server. I have installed the LAMP server via the tasksel command and I have configured the /var/www directory according to a guide I found:

    After the LAMP server installation you will need write permissions to the /var/www directory. Follow these steps to configure permissions. Add your user to the www-data group:
        sudo usermod -a -G www-data <your user name>
    Now add the /var/www folder to the www-data group:
        sudo chgrp -R www-data /var/www
    Now give write permissions to the www-data group:
        sudo chmod -R g+w /var/www

    So the logan user is now part of the www-data group, and the file/folder permissions look like the output below:

        logan@computer:/var/www$ ls -lart
        total 172
        -rw-r--r--  1 www-data www-data  1997 Oct 23  2010 wp-links-opml.php
        -rw-r--r--  1 www-data www-data  3177 Nov  1  2010 wp-config-sample.php
        -rw-r--r--  1 www-data www-data  3700 Jan  8  2012 wp-trackback.php
        -rw-r--r--  1 www-data www-data   271 Jan  8  2012 wp-blog-header.php
        -rw-r--r--  1 www-data www-data   395 Jan  8  2012 index.php
        -rw-r--r--  1 www-data www-data  3522 Apr 10  2012 wp-comments-post.php
        -rw-r--r--  1 www-data www-data 19929 May  6  2012 license.txt
        -rw-r--r--  1 www-data www-data 18219 Sep 11 08:27 wp-signup.php
        -rw-r--r--  1 www-data www-data  2719 Sep 11 16:11 xmlrpc.php
        -rw-r--r--  1 www-data www-data  2718 Sep 23 12:57 wp-cron.php
        -rw-r--r--  1 www-data www-data  7723 Sep 25 01:26 wp-mail.php
        -rw-r--r--  1 www-data www-data  2408 Oct 26 15:40 wp-load.php
        -rw-r--r--  1 www-data www-data  4663 Nov 17 10:11 wp-activate.php
        -rw-r--r--  1 www-data www-data  9899 Nov 22 04:52 wp-settings.php
        -rw-r--r--  1 www-data www-data  9175 Nov 29 19:57 readme.html
        -rw-r--r--  1 www-data www-data 29310 Nov 30 08:40 wp-login.php
        drwxr-xr-x 14 root     root      4096 Dec 24 17:41 ..
        drwx------  9 www-data www-data  4096 Dec 26 16:11 wp-admin
        drwx------  9 www-data www-data  4096 Dec 26 16:11 wp-includes
        -rw-rw-rw-  1 www-data www-data  3448 Dec 26 16:14 wp-config.php
        drwxrwxr-x  5 www-data www-data  4096 Dec 26 16:14 .
        drwx------  6 www-data www-data  4096 Dec 26 16:19 wp-content

    Things work perfectly at http://localhost; I can view the website fine. The thing is, I will be working on a WordPress plugin and I don't want to deal with separate owners under the www directory when creating or modifying files/folders. When I give my user ownership of /var/www recursively as logan:www-data, I can create and modify files but cannot view http://localhost; I get a Forbidden error. I'm assuming this is because of Apache's configuration?

    Which approach is cleaner or easier, considering this is just a local test website: configuring Apache so that user logan can view the website and running chown -R logan:logan /var/www so that I can create files etc. without any sudo commands, or configuring user groups so that the www-data user acts like my logan user? (I don't know how that's possible; maybe putting the www-data user in the logan group?) Please shed some light on this subject. All I want is to be able to create/modify files under my user, and yet still be able to view http://localhost. I appreciate the help!
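    One common middle ground (a hedged sketch, not the only possible answer) is to keep www-data as the group owner but mark the directories setgid, so files the logan user creates remain group-accessible to Apache without any sudo:

        # Sketch: let 'logan' edit while Apache (www-data) keeps group access.
        # Assumes 'logan' is already in the www-data group (re-login after usermod).
        sudo chown -R logan:www-data /var/www
        sudo chmod -R g+rwX /var/www
        sudo find /var/www -type d -exec chmod g+s {} \;   # new files inherit the group

    With the setgid bit on directories, anything logan creates is group-owned by www-data, so Apache can still serve it.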

    Read the article

  • Convert old AVI files to a modern format

    - by iWerner
    Hi, we have a collection of old home videos that were saved in AVI format a long time ago. I want to convert these files to a more modern format, because the Totem Movie Player that comes with Ubuntu 10.04 seems to be the only program capable of playing them. The files seem to be encoded with an MJPEG codec, and playing them in VLC or Windows Media Player plays only the sound; there is no video.

    Avidemux was able to open the files, but the quality of the video is severely degraded: the video skips frames and is interlaced (it is not interlaced when playing it in Totem). Neither ffmpeg nor mencoder seems to be able to read the video stream. mencoder reports that it is using ffmpeg's codec. Here is a section from its output:

        ==========================================================================
        Opening video decoder: [ffmpeg] FFmpeg's libavcodec codec family
        [mjpeg @ 0x92a7260]mjpeg: using external huffman table
        [mjpeg @ 0x92a7260]mjpeg: error using external huffman table, switching back to internal
        Unsupported PixelFormat -1
        Selected video codec: [ffmjpeg] vfm: ffmpeg (FFmpeg MJPEG)

    while running ffmpeg produces the following:

        $ ffmpeg -i input.avi output.avi
        FFmpeg version SVN-r0.5.1-4:0.5.1-1ubuntu1, Copyright (c) 2000-2009 Fabrice Bellard, et al.
          configuration: --extra-version=4:0.5.1-1ubuntu1 --prefix=/usr --enable-avfilter
            --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libgsm
            --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis
            --enable-pthreads --enable-zlib --disable-stripping --disable-vhook
            --enable-runtime-cpudetect --enable-gpl --enable-postproc --enable-swscale
            --enable-x11grab --enable-libdc1394 --enable-shared --disable-static
          libavutil     49.15. 0 / 49.15. 0
          libavcodec    52.20. 1 / 52.20. 1
          libavformat   52.31. 0 / 52.31. 0
          libavdevice   52. 1. 0 / 52. 1. 0
          libavfilter    0. 4. 0 /  0. 4. 0
          libswscale     0. 7. 1 /  0. 7. 1
          libpostproc   51. 2. 0 / 51. 2. 0
          built on Mar  4 2010 12:35:30, gcc: 4.4.3
        [avi @ 0x87952c0]non-interleaved AVI
        Input #0, avi, from 'input.avi':
          Duration: 00:00:15.24, start: 0.000000, bitrate: 22447 kb/s
            Stream #0.0: Video: mjpeg, yuvj422p, 720x544, 25 tbr, 25 tbn, 25 tbc
            Stream #0.1: Audio: pcm_s16le, 44100 Hz, stereo, s16, 1411 kb/s
        Output #0, avi, to 'output.avi':
            Stream #0.0: Video: mpeg4, yuv420p, 720x544, q=2-31, 200 kb/s, 90k tbn, 25 tbc
            Stream #0.1: Audio: mp2, 44100 Hz, stereo, s16, 64 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1
        Press [q] to stop encoding
        frame=    0 fps=  0 q=0.0 Lsize=     143kB time=15.23 bitrate=  76.9kbits/s
        video:0kB audio:119kB global headers:0kB muxing overhead 20.101777%

    So the problem is that the output does not contain any video, as evidenced by the video:0kB at the end. In all of the above cases the audio comes out fine. So my question is: what can I do to convert these files to a more modern format with more modern codecs?
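    For what it's worth, the ffmpeg build shown above (0.5.1, built in 2010) is quite old, and the "external huffman table" failure in the MJPEG decoder is the kind of thing later builds handle better. Here is a hedged sketch of the re-encode once a newer ffmpeg can decode the stream; the codec and quality settings are illustrative, not a guaranteed fix for these particular files.

        # Sketch: re-encode to H.264/AAC in MP4 with a recent ffmpeg build.
        # Flags are standard for modern ffmpeg; whether the MJPEG stream
        # decodes at all depends on the build, so this is an assumption.
        ffmpeg -i input.avi -c:v libx264 -crf 18 -preset slow -c:a aac -b:a 192k output.mp4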

    Read the article

  • Package Version Numbers, why are they so important

    - by Chris W Beal
    One of the design goals of IPS has been to allow people to easily move forward to a supported "surface" of components. That is to say, when you # pkg update your system, you get the latest set of components which all work together, based on the packages you already have installed. During development, this simply meant you updated to the latest "build" of the components (during development, we build and publish everything every two weeks).

    Now that we've released Solaris 11 using the IPS technologies, things are a bit more complicated. We need to be able to reflect all the types of Solaris release we are doing, for example Solaris development builds, Solaris update builds and "Support Repository Updates" (the replacement for patches), in the version scheme. So simply saying "151" as the build number isn't sufficient to articulate what you are running, or indeed what is available to update to.

    In my previous blog post I talked about creating your own package, and gave an example FMRI of pkg://tools/[email protected],0.5.11-0.0.0. But it's probably more instructive to look at the FMRI of a Solaris package. The package "core-os" contains all the common utilities and daemons you need to use Solaris:

        $ pkg info core-os
                  Name: system/core-os
               Summary: Core Solaris
           Description: Operating system core utilities, daemons, and
                        configuration files.
              Category: System/Core
                 State: Installed
             Publisher: solaris
               Version: 0.5.11
         Build Release: 5.11
                Branch: 0.175.0.0.0.2.1
        Packaging Date: Wed Oct 19 07:04:57 2011
                  Size: 25.14 MB
                  FMRI: pkg://solaris/system/core-os@0.5.11,5.11-0.175.0.0.0.2.1:20111019T070457Z

    The FMRI is what we will concentrate on here. In this package, "solaris" is the publisher. You can use the pkg publisher command to see where the solaris publisher gets its bits from:

        $ pkg publisher
        PUBLISHER        TYPE     STATUS   URI
        solaris          origin   online   http://pkg.oracle.com/solaris/release/

    So we can see we get solaris packages from pkg.oracle.com. The package name is system/core-os; names can be of arbitrary length, to allow you to group similar packages together. Now on to the interesting bit: the versions. Everything after the @ is part of the version, and IPS will only upgrade to a "higher" version.

        core-os@0.5.11,5.11-0.175.0.0.0.2.1:20111019T070457Z
        core-os          = package name
        0.5.11           = component version - in this case saying it's a SunOS 5.11 package
        ,                = separator
        5.11             = built-on version - the OS version the package was built on
        -                = another separator
        0.175.0.0.0.2.1  = branch version
        :                = yet another separator
        20111019T070457Z = timestamp of when the package was published

    From that we can see the branch version seems rather complex. It is necessarily so, to allow us to describe the hierarchy of releases we do. In this example we see the following:

        0.175 = the "trunkid", incremented with each new release of Solaris; during Solaris 11 this should not change
        0     = the update release for Solaris: 0 for FCS, 1 for Update 1, etc.
        0     = the SRU for Solaris: 0 for FCS, 1 for SRU 1, etc.
        0     = reserved for future use
        2     = build number of the SRU
        1     = nightly ID - only important for Solaris developers

    Take a hypothetical example, core-os@0.5.11,5.11-0.175.1.5.0.4.1:<something>. This would be build 4 of SRU 5 of Update 1 of Solaris 11. This is actually documented in MOS article 1378134.1, which you can read if you have a support contract.
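    A quick way to see where your own system sits in this scheme is to query the "entire" incorporation, which pins the overall Solaris level. A hedged one-liner, reading the same fields shown in the pkg info example above:

        # Sketch: read the branch version of the installed Solaris level.
        pkg info entire | grep -E 'FMRI|Branch'
        # A branch of 0.175.1.5.0.4.1 would decode as: Solaris 11, Update 1,
        # SRU 5, build 4 of that SRU.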

    Read the article
