Search Results

Search found 16176 results on 648 pages for 'oracle calendar'.

  • How-to dynamically filter model-driven LOV

    - by Frank Nimphius
    Often developers need to filter a LOV query with information obtained from an ADF Faces form or elsewhere. The sample below shows how to define a launch popup listener, configured on the launchPopupListener property of the af:inputListOfValues component, to filter a list of values.
      <af:inputListOfValues id="departmentIdId"
          value="#{bindings.DepartmentId.inputValue}"
          model="#{bindings.DepartmentId.listOfValuesModel}"
          launchPopupListener="#{PopupLauncher.onPopupLaunch}" … >
          …
      </af:inputListOfValues>
    A list of values is queried using a search binding that gets created in the PageDef file of a view when a list-of-values component is added. The managed bean code below looks up this search binding and then adds a view criteria that filters the query. Note: There is no public API yet available for the FacesCtrlLOVBinding class, which is why I use the internal package class in the example.
      public void onPopupLaunch(LaunchPopupEvent launchPopupEvent) {
        BindingContext bctx = BindingContext.getCurrent();
        BindingContainer bindings = bctx.getCurrentBindingsEntry();
        FacesCtrlLOVBinding lov =
            (FacesCtrlLOVBinding) bindings.get("DepartmentId");
        ViewCriteriaManager vcm =
            lov.getListIterBinding().getViewObject().getViewCriteriaManager();
        //make sure the view criteria is cleared
        vcm.removeViewCriteria(vcm.DFLT_VIEW_CRITERIA_NAME);
        //create a new view criteria
        ViewCriteria vc =
            new ViewCriteria(lov.getListIterBinding().getViewObject());
        //use the default view criteria name "__DefaultViewCriteria__"
        vc.setName(vcm.DFLT_VIEW_CRITERIA_NAME);
        //create a view criteria row for all queryable attributes
        ViewCriteriaRow vcr = new ViewCriteriaRow(vc);
        //for this sample I set the query filter to DepartmentId 60.
        //You may determine it at runtime by reading it from a managed bean
        //or the binding layer
        vcr.setAttribute("DepartmentId", 60);
        //also note that the view criteria row consists of all attributes
        //that belong to the LOV list view object, which means that you can
        //filter on multiple attributes
        vc.addRow(vcr);
        lov.getListIterBinding().getViewObject().applyViewCriteria(vc);
      }
    Note: Instead of using the vcm.DFLT_VIEW_CRITERIA_NAME name you can also define a custom name for the view criteria.

    Read the article

  • PHP oci8 dll not loading on windows 64 bit XP. What am I doing wrong?

    - by user47354
    On Windows x64, I installed Apache, PHP, etc. Everything works fine, except the Oracle part. I can connect to Oracle from SQL Developer, which means my tnsnames.ora file is correct. When Apache starts, there are no errors in the logs. But when I try to connect to Oracle from my PHP pages, the Oracle module php_oci8.dll is not loaded. What am I doing wrong?
    - The oci8.dll line in php.ini is there and it is uncommented
    - There are no errors in the Apache logs
    - extension_dir in php.ini points to the correct location
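    One quick way to narrow this down (the paths below are examples, not taken from the question) is to confirm the extension_dir/extension lines in php.ini and then ask PHP on the command line whether the module actually loaded; if oci8 is missing from the module list, PHP usually prints the load error for php_oci8.dll at that point:
      ; php.ini (example paths - adjust to your installation)
      extension_dir = "C:\php\ext"
      extension = php_oci8.dll

      REM From a command prompt: list loaded modules and look for oci8
      C:\php\php.exe -m | findstr /i oci8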

    Read the article

  • MySQL Cluster 7.2: Over 8x Higher Performance than Cluster 7.1

    - by Mat Keep
    Summary: The scalability enhancements delivered by extensions to multi-threaded data nodes enable MySQL Cluster 7.2 to deliver over 8x higher performance than the previous MySQL Cluster 7.1 release on a recent benchmark.
    What's New in MySQL Cluster 7.2
    MySQL Cluster 7.2 was released as GA (Generally Available) in February 2012, delivering many enhancements to performance on complex queries, a new NoSQL Key/Value API, cross-data center replication and ease-of-use. These enhancements are summarized in the figure below, and detailed in the MySQL Cluster New Features whitepaper.
    Figure 1: Next Generation Web Services, Cross Data Center Replication and Ease-of-Use
    One of the key enhancements delivered in MySQL Cluster 7.2 is the set of extensions made to the multi-threading processes of the data nodes.
    Multi-Threaded Data Node Extensions
    The MySQL Cluster 7.2 data node is now functionally divided into seven thread types:
    1) Local Data Manager threads (ldm). Note – these are sometimes also called LQH threads.
    2) Transaction Coordinator threads (tc)
    3) Asynchronous Replication threads (rep)
    4) Schema Management threads (main)
    5) Network receiver threads (recv)
    6) Network send threads (send)
    7) IO threads
    Each of these thread types is discussed in more detail below.
    MySQL Cluster 7.2 increases the maximum number of LDM threads from 4 to 16. The LDM contains the actual data, which means that when using 16 threads the data is more heavily partitioned (this is automatic in MySQL Cluster). Each LDM thread maintains its own set of data partitions, index partitions and REDO log. The number of LDM partitions per data node is not dynamically configurable, but it is possible to map more than one partition onto each LDM thread, providing flexibility in modifying the number of LDM threads.
    The TC domain stores the state of in-flight transactions. This means that every new transaction can easily be assigned to a new TC thread. Testing has shown that in most cases 1 TC thread per 2 LDM threads is sufficient, and in many cases even 1 TC thread per 4 LDM threads is acceptable. Testing also demonstrated that in some instances, where the workload needed to sustain very high update loads, it is necessary to configure 3 to 4 TC threads per 4 LDM threads. In the previous MySQL Cluster 7.1 release, only one TC thread was available. This limit has been increased to 16 TC threads in MySQL Cluster 7.2. The TC domain also manages the Adaptive Query Localization functionality introduced in MySQL Cluster 7.2, which significantly enhances complex query performance by pushing JOIN operations down to the data nodes.
    Asynchronous Replication was separated into its own thread with the release of MySQL Cluster 7.1, and has not been modified in the latest 7.2 release. To scale the number of TC threads, it was necessary to separate the Schema Management domain from the TC domain.
    The schema management thread has little load, so it is implemented as a single thread. The network receiver domain was bound to 1 thread in MySQL Cluster 7.1. With the increase of threads in MySQL Cluster 7.2 it is also necessary to increase the number of recv threads to 8. This enables each receive thread to service one or more sockets used to communicate with other nodes in the Cluster. The network send thread is a new thread type introduced in MySQL Cluster 7.2. Previously, other threads handled the sending operations themselves, which can provide lower latency. To achieve the highest throughput, however, it has been necessary to create dedicated send threads, of which 8 can be configured. It is still possible to configure MySQL Cluster 7.2 in a legacy mode that does not use any of the send threads – useful for those workloads that are most sensitive to latency. The IO thread is the final thread type, and there have been no changes to this domain in MySQL Cluster 7.2. Multiple IO threads were already available, which could be configured to either one thread per open file, or to a fixed number of IO threads that handle the IO traffic. Except when using compression on disk, the IO threads typically have a very light load.
    Benchmarking the Scalability Enhancements
    The scalability enhancements discussed above have made it possible to scale CPU usage of each data node to more than 5x that possible in MySQL Cluster 7.1. In addition, a number of bottlenecks have been removed, making it possible to scale data node performance by even more than 5x.
    Figure 2: MySQL Cluster 7.2 Delivers 8.4x Higher Performance than 7.1
    The flexAsynch benchmark was used to compare MySQL Cluster 7.2 performance to 7.1 across an 8-node Intel Xeon x5670-based cluster of dual-socket commodity servers (6 cores each). As the results demonstrate, MySQL Cluster 7.2 delivers over 8x higher performance per data node than MySQL Cluster 7.1. More details of this and other benchmarks will be published in a new whitepaper – coming soon, so stay tuned! In a following blog post, I'll provide recommendations on optimum thread configurations for different types of server processor. You can also learn more from the Best Practices Guide to Optimizing Performance of MySQL Cluster.
    Conclusion
    MySQL Cluster has achieved a range of impressive benchmark results, and set in context with the previous 7.1 release, is able to deliver over 8x higher performance per node. As a result, the multi-threaded data node extensions not only serve to increase performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-thread processor designs.
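    To make the thread types above concrete, here is a rough sketch of how a data node's threads might be laid out in config.ini using the ThreadConfig parameter (introduced in later MySQL Cluster 7.2 maintenance releases). The counts below are illustrative only, not tuning recommendations; the follow-up post mentioned above covers recommended configurations per processor type:
      [ndbd default]
      # Illustrative thread layout only - size the counts to your cores and workload
      ThreadConfig=ldm={count=16},tc={count=8},send={count=8},recv={count=8},main={count=1},rep={count=1},io={count=1}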

    Read the article

  • Patching and PCI Compliance

    - by Joel Weise
    One of my friends and master of the security universe, Darren Moffat, pointed me to Dan Anderson's blog the other day.  Dan went to Toorcon, a security conference, where he attended a talk on security patching titled "Stop Patching, for Stronger PCI Compliance".  I realize that oftentimes speakers will use a headline-grabbing title to create interest in their talk, and this one certainly got my attention.  I did not go to the conference and did not see the presentation, so I can only go by what is in the Toorcon agenda summary and on Dan's blog, but the general statement to stop patching for stronger PCI compliance seems a bit misleading to me.  Clearly patching is important to all systems management and should be a part of any organization's security hygiene.  Further, PCI does require the patching of systems to maintain compliance.  So it's important to mention that organizations should not simply stop patching their systems; and I want to believe that was not the speaker's intent.
    So let's look at PCI requirement 6: "Unscrupulous individuals use security vulnerabilities to gain privileged access to systems. Many of these vulnerabilities are fixed by vendor-provided security patches, which must be installed by the entities that manage the systems. All critical systems must have the most recently released, appropriate software patches to protect against exploitation and compromise of cardholder data by malicious individuals and malicious software."
    Notice the word "appropriate" in the requirement.  This is stated to give organizations some latitude to apply patches that make sense in their environment and that target the vulnerabilities in question.  Haven't we all seen a vulnerability scanner throw a false positive and flag some module and point to a recommended patch, only to realize that the module doesn't exist on our system?  Applying such a patch would obviously not be appropriate.  This does not mean an organization can ignore the fact they need to apply security patches.  It's pretty clear they must.  Of course, organizations have other options in terms of compliance when it comes to patching.  For example, they could remove a system from scope and make sure that system does not process or contain cardholder data.  [This may or may not be a significant undertaking.  I just wanted to point out that there are always options available.]
    PCI DSS requirement 6.1 also includes the following note: "Note: An organization may consider applying a risk-based approach to prioritize their patch installations. For example, by prioritizing critical infrastructure (for example, public-facing devices and systems, databases) higher than less-critical internal devices, to ensure high-priority systems and devices are addressed within one month, and addressing less critical devices and systems within three months."
    Notice there is no mention of stopping patching one's systems.  And the note also states organizations may apply a risk-based approach. [A smart approach, but also not mandated.]  Such a risk-based approach is not intended to remove the requirement to patch one's systems.  It is meant, as stated, to allow one to prioritize their patch installations.
    So what does this mean to an organization that must comply with PCI DSS and maintain some sanity around their patch management and overall operational readiness?  I for one like to think that most organizations take a common sense and balanced approach to their business and security posture.
    If patching is becoming an unbearable task, review why that is the case and possibly look for means to improve operational efficiencies; but also recognize that security is important to maintaining the availability and integrity of one's systems.  Likewise, whether we like it or not, the cyber-world we live in is getting more complex and threatening - and I don't think it's going to get better any time soon.

    Read the article

  • FairScheduling Conventions in Hadoop

    - by dan.mcclary
    While scheduling and resource allocation control has been present in Hadoop since 0.20, a lot of people haven't discovered or utilized it in their initial investigations of the Hadoop ecosystem. We could chalk this up to many things:
    - Organizations are still determining what their dataflow and analysis workloads will comprise
    - Small deployments under test aren't likely to show the signs of strain that would send someone looking for resource allocation options
    - The default scheduling options -- the FairScheduler and the CapacityScheduler -- are not placed in the most prominent position within the Hadoop documentation.
    However, for production deployments, it's wise to start with at least the foundations of scheduling in place so that you can tune the cluster as workloads emerge. To do that, we have to ask ourselves something about what the off-the-rack scheduling options are. We have some choices:
    - The FairScheduler, which will work to ensure resource allocations are enforced on a per-job basis.
    - The CapacityScheduler, which will ensure resource allocations are enforced on a per-queue basis.
    - Writing your own implementation of the abstract class org.apache.hadoop.mapred.job.TaskScheduler is an option, but usually overkill.
    If you're going to have several concurrent users and leverage the more interactive aspects of the Hadoop environment (e.g. Pig and Hive scripting), the FairScheduler is definitely the way to go. In particular, we can do user-specific pools so that default users get their fair share, and specific users are given the resources their workloads require.
    To enable fair scheduling, we're going to need to do a couple of things. First, we need to tell the JobTracker that we want to use scheduling and where we're going to be defining our allocations. We do this by adding the following to the mapred-site.xml file in HADOOP_HOME/conf:
      <property>
        <name>mapred.jobtracker.taskScheduler</name>
        <value>org.apache.hadoop.mapred.FairScheduler</value>
      </property>
      <property>
        <name>mapred.fairscheduler.allocation.file</name>
        <value>/path/to/allocations.xml</value>
      </property>
      <property>
        <name>mapred.fairscheduler.poolnameproperty</name>
        <value>pool.name</value>
      </property>
      <property>
        <name>pool.name</name>
        <value>${user.name}</value>
      </property>
    What we've done here is simply tell the JobTracker that we'd like task scheduling to use the FairScheduler class rather than a single FIFO queue. Moreover, we're going to be defining our resource pools and allocations in a file called allocations.xml. For reference, the allocation file is read every 15s or so, which allows for tuning allocations without having to take down the JobTracker. Our allocation file is now going to look a little like this:
      <?xml version="1.0"?>
      <allocations>
        <pool name="dan">
          <minMaps>5</minMaps>
          <minReduces>5</minReduces>
          <maxMaps>25</maxMaps>
          <maxReduces>25</maxReduces>
          <minSharePreemptionTimeout>300</minSharePreemptionTimeout>
        </pool>
        <user name="dan">
          <maxRunningJobs>6</maxRunningJobs>
        </user>
        <userMaxJobsDefault>3</userMaxJobsDefault>
        <fairSharePreemptionTimeout>600</fairSharePreemptionTimeout>
      </allocations>
    In this case, I've explicitly set my username to have upper and lower bounds on the maps and reduces, and allotted myself double the number of running jobs. Now, if I run Hive or Pig jobs from either the console or via the Hue web interface, I'll be treated "fairly" by the JobTracker.
There's a lot more tweaking that can be done to the allocations file, so it's best to dig down into the description and start trying out allocations that might fit your workload.
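    As a small illustration of the pool mapping described above (assuming the mapred-site.xml and allocations.xml shown earlier), a one-off job can also be steered into a specific pool by overriding the pool.name property on the command line; the example jar and HDFS paths here are placeholders:
      # Submit the stock wordcount example into the "dan" pool (paths are placeholders)
      hadoop jar $HADOOP_HOME/hadoop-examples.jar wordcount \
        -D pool.name=dan \
        /user/dan/input /user/dan/output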

    Read the article

  • ArchBeat Link-o-Rama for 2012-03-28

    - by Bob Rhubart
    - Beware the 'Facebook Effect' when service-orienting information technology | Joe McKenrick (www.zdnet.com)
      Experiences seen with Facebook provide a fair warning to shared-service providers in enterprises.
    - Cookbook: SES and UCM setup | George Maggessy (blogs.oracle.com)
      WebCenter A-Team member George Maggessy guides you through setting up the integration between UCM and SES.
    - Using Oracle VM with Amazon EC2 | Marc Fielding (www.pythian.com)
      "If you’re planning on running Oracle VM with Amazon EC2, there are some important limitations you should know about," says Pythian's Marc Fielding.
    - Oracle Enterprise Pack for Eclipse 12.1.1 update on OTN (blogs.oracle.com)
      Oracle Enterprise Pack for Eclipse (OEPE) 12.1.1.0.1 was released to OTN last week with support for new standards and several new features.
    Thought for the Day: "If the mind really is the finest computer, then there are a lot of people out there who need to be rebooted." — Tim Bryce

    Read the article

  • obiee memory usage

    - by user554629
    Heap memory is a frequent customer topic. Here's the quick refresher, oriented towards AIX, but the principles apply to other unix implementations.
    1. 32-bit processes have a maximum addressability of 4GB; usable application heap size of 2-3 GB. On AIX it is controlled by an environment variable:
       export LDR_CNTRL=....=MAXDATA=0x080000000   # 2GB ( The leading zero is deliberate, not required )
    1a. It is possible to get a 3.25GB heap size for a 32-bit process using @DSA (Discontiguous Segment Allocation):
       export LDR_CNTRL=MAXDATA=0xd0000000@DSA  # 3.25 GB 32-bit only
       One side-effect of using AIX segments "c" and "d" is that shared libraries will be loaded privately, and not shared. If you need the additional heap space, this is worth the trade-off. This option is frequently used for 32-bit java.
    1b. 64-bit processes have no need for the @DSA option.
    2. 64-bit processes can double the 32-bit heap size to 4GB using:
       export LDR_CNTRL=....=MAXDATA=0x100000000  # 1 with 8-zeros
    2a. But this setting would place the same memory limitations on obiee as a 32-bit process.
    2b. The major benefit of 64-bit is to break the binds of 32-bit addressing. At a minimum, use 8GB:
       export LDR_CNTRL=....=MAXDATA=0x200000000  # 2 with 8-zeros
    2c. Many large customers are providing extra safety to their servers by using 16GB:
       export LDR_CNTRL=....=MAXDATA=0x400000000  # 4 with 8-zeros
       There is no performance penalty for providing virtual memory allocations larger than required by the application. If the server only uses 2GB of space in 64-bit, specifying 16GB just provides an upper bound cushion. When an unexpected user query causes a sudden memory surge, the extra memory keeps the server running.
    3. The next benefit of 64-bit is that you can provide huge thread stack sizes for strange queries that might otherwise crash the server. nqsserver uses fast recursive algorithms to traverse complicated control structures. This means lots of thread space to hold the stack frames.
    3a. Stack frames mostly contain register values; 64-bit registers are twice as large as 32-bit. At a minimum you should quadruple the size of the server stack threads in NQSConfig.INI when migrating from 32- to 64-bit, to prevent a rogue query from crashing the server. Allocate more than is normally necessary for safety.
    3b. There is no penalty for allocating more stack size than you need ... it is just virtual memory; no real resources are consumed until the extra space is needed.
    3c. Increasing thread stack sizes may require the process heap size (MAXDATA) to be increased. Heap space is used for dynamic memory requests, and for thread stacks. There is no performance penalty to run with large heap and thread stack sizes. In a 32-bit world, this safety would require careful planning to avoid exceeding 2GB usable storage.
    3d. Increasing the number of threads also may require additional heap storage. Most thread stack frames on obiee are allocated when the server is started, and the real memory usage increases as threads run work.
    Does 2.8GB sound like a lot of memory for an AIX application server?
    - I guess it is what you are accustomed to seeing from "grandpa's applications".
    - One of the primary design goals of obiee is to trade memory for services (db, query caches, etc.)
    - 2.8GB is still well under the 4GB heap size allocated with MAXDATA=0x100000000
    - 2.8GB process size is also possible even on 32-bit Windows applications
    - It is not unusual to receive a sudden request for 30MB of contiguous storage on obiee.
    - This is not a memory leak; eventually the nqsserver storage will stabilize, but it may take days to do so.
    vmstat is the tool of choice to observe memory usage. On AIX vmstat will show something that may be startling to some people ... that available free memory (the 2nd column) is always trending toward zero ... no available free memory. Some customers have concluded that "nearly zero memory free" means it is time to upgrade the server with more real memory. After the upgrade, the server again shows very little free memory available. Should you be concerned about this? Many customers are !! Here is what is happening:
    - AIX filesystems are built on a paging model. If you read/write a filesystem block it is paged into memory (no read/write system calls).
    - This filesystem "page" has its own "backing store" on disk, the original filesystem block. When the system needs the real memory page holding the file block, there is no need to "page out". The page can be stolen immediately, because the original is still on disk in the filesystem.
    - The filesystem pages tend to collect ... every filesystem block that was ever seen since system boot is available in memory. If another application needs the file block, it is retrieved with no physical I/O.
    What happens if the system does need the memory ... to satisfy a 30MB heap request by nqsserver, for example?
    - Since the filesystem blocks have their own backing store (not on a paging device) the kernel can just steal any filesystem block ... on a least-recently-used basis ... to satisfy a new real memory request for "computation pages".
    No cause for alarm. vmstat is accurately displaying whether all filesystem blocks have been touched, and now reside in memory.
    Back to nqsserver: when should you be worried about its memory footprint? Answer: Almost never. Stop monitoring it ... stop fussing over it ... stop trying to optimize it. This is a production application, and nqsserver uses the memory it requires to accomplish the job, based on demand.
    C'mon ... never worry? I'm from New York ... worry is what we do best. Ok, here is the metric you should be watching, using vmstat: are you paging? There are several columns of vmstat output:
      bash-2.04$ vmstat 3 3
      System configuration: lcpu=4 mem=4096MB
      kthr    memory              page              faults        cpu
      ----- ------------ ------------------------ ------------ -----------
       r  b    avm   fre  re  pi  po  fr   sr  cy  in   sy  cs us sy id wa
       0  0 208492  2600   0   0   0   0    0   0  13   45  73  0  0 99  0
       0  0 208492  2600   0   0   0   0    0   0   9   12  77  0  0 99  0
       0  0 208492  2600   0   0   0   0    0   0   9   40  86  0  0 99  0
    - avm is the "available free memory" indicator that trends toward zero
    - re is "re-page": the kernel steals a real memory page from one process; it is immediately repaged back to the original process
    - pi is "page in": a process memory page previously paged out, now paged back in because the process needs it
    - po is "page out": a process memory block was paged out, because it was needed by some other process
    Light paging activity (re, pi, po) is not a concern for worry.
    Processes get started, need some memory, go away. Sustained paging activity is cause for concern: obiee users are having a terrible day if these counters are always changing.
    Hang on ... if nqsserver needs that memory and I reduce MAXDATA to keep the process under control, won't the nqsserver process crash when the memory is needed? Yes it will. It means that nqsserver is configured to require too much memory, and there are lots of options to reduce the real memory requirement:
    - number of threads
    - size of query cache
    - size of sort
    But I need nqsserver to keep running. Real memory is over-committed. Many things can cause this:
    - Running all application processes on a single server ... DB server, web servers, WebLogic/WebSphere, sawserver, nqsserver, etc. You could move some of those to another host machine and communicate over the network. The need for real memory doesn't go away, it's just distributed to other host machines.
    - The AIX LPAR is configured with too little memory. The AIX admin needs to provide more real memory to the LPAR running obiee.
    - More memory to this LPAR affects other partitions. Then it's time to visit your friendly IBM rep and buy more memory.

    Read the article

  • ODI 11g - Scripting a Reverse Engineer

    - by David Allan
    A common question relates to how to script the reverse engineer using the ODI SDK. This follows on from some of my posts on scripting in general and accelerated model and topology setup. Check out this viewlet here to see how to define a reverse engineering process using ODI's package. Using the ODI SDK, you can script this up using the OdiPackage and StepOdiCommand classes as follows:
      OdiPackage pkg = new OdiPackage(folder, "Pkg_Rev" + modName);
      StepOdiCommand step1 = new StepOdiCommand(pkg, "step1_cmd_reset");
      step1.setCommandExpression(new Expression("OdiReverseResetTable \"-MODEL=" + mod.getModelId() + "\"", null, Expression.SqlGroupType.NONE));
      StepOdiCommand step2 = new StepOdiCommand(pkg, "step2_cmd_reset");
      step2.setCommandExpression(new Expression("OdiReverseGetMetaData \"-MODEL=" + mod.getModelId() + "\"", null, Expression.SqlGroupType.NONE));
      StepOdiCommand step3 = new StepOdiCommand(pkg, "step3_cmd_reset");
      step3.setCommandExpression(new Expression("OdiReverseSetMetaData \"-MODEL=" + mod.getModelId() + "\"", null, Expression.SqlGroupType.NONE));
      pkg.setFirstStep(step1);
      step1.setNextStepAfterSuccess(step2);
      step2.setNextStepAfterSuccess(step3);
    The biggest leap of faith for users is getting to know which SDK classes have to be used to build the objects in the design; using StepOdiCommand isn't necessarily obvious, but once you see it in action it is very simple to use. The above snippet uses an OdiModel variable named mod; it's a snippet I added to the accelerated model creation script in the post linked above.

    Read the article

  • OWSM Policy Repository in JDeveloper - Tips & Tricks - 11g

    - by Prakash Yamuna
    In this blog post I discussed the OWSM Policy Repository that is embedded in JDeveloper. However, sometimes people may run into issues with the embedded repository. Here is a screen snapshot that shows the error you may run into (click on the image for a larger image). If you run into "java.lang.IllegalArgumentException: WSM-04694 : An invalid directory was provided to connect to a file-base MDS repository.", this is caused by spaces in the folder name. Here is a quick way to work around this issue: run "Jdeveloper.exe - su". Hope people find this useful!

    Read the article

  • Future Tech Duke

    - by Tori Wieldt
    Do you like the new Duke? Have you gotten the new Duke screensaver yet? Follow @java or Like I <3 Java on Facebook and get the latest 3D, animated "Future Tech Duke" screensaver.   If you haven't already, register now to watch the global July 7 Java 7 community celebration and learn more about Java moving forward. We are looking for questions from the community to be asked during the panel Q & A. Enter your questions as a comment here, or tweet it with #java7. There's lots of great content being created for Java 7: technical articles, videos, updated web pages (can you say "layer cake?"), T-shirts, presentations, and there will be lots of Java 7 content in the new Java Magazine. See you at the Java 7 celebration event! Duke will be there!

    Read the article

  • Gilda Garretón, a Java Developer and Parallelism Computing Researcher

    - by Yolande
    In a new interview titled “Gilda Garretón, a Java Developer and Parallelism Computing Researcher,” Garretón shares her first-hand experience developing with Java and Java 7 for very large-scale integration (VLSI) computer-aided design (CAD). Garretón gives an insightful overview of how Java is contributing to parallelism development and to the Electric VLSI Design System, an open source VLSI CAD application used as a research platform for new CAD algorithms as well as the research flow for hardware test chips. Garretón considers that parallel programming is hard and complex, yet important developments are taking place. "With the addition of the concurrent package in Java SE 6 and the Fork/Join feature in Java SE 7, developers have a chance to rely more on existing frameworks and dedicate more time to the essence of their parallel algorithms." Read the full article here.
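    For readers who have not yet tried the Fork/Join feature mentioned above, here is a minimal, self-contained sketch (not taken from the interview) of a Java SE 7 RecursiveTask that sums an array in parallel:
      import java.util.concurrent.ForkJoinPool;
      import java.util.concurrent.RecursiveTask;

      // Minimal Fork/Join sketch: sums a long[] by splitting the range until chunks are small.
      public class ParallelSum extends RecursiveTask<Long> {
          private static final int THRESHOLD = 10_000;
          private final long[] data;
          private final int from, to;

          ParallelSum(long[] data, int from, int to) {
              this.data = data; this.from = from; this.to = to;
          }

          @Override
          protected Long compute() {
              if (to - from <= THRESHOLD) {              // small enough: sum sequentially
                  long sum = 0;
                  for (int i = from; i < to; i++) sum += data[i];
                  return sum;
              }
              int mid = (from + to) >>> 1;
              ParallelSum left = new ParallelSum(data, from, mid);
              ParallelSum right = new ParallelSum(data, mid, to);
              left.fork();                               // run the left half asynchronously
              return right.compute() + left.join();      // compute the right half, then join the left
          }

          public static void main(String[] args) {
              long[] data = new long[1_000_000];
              for (int i = 0; i < data.length; i++) data[i] = i;
              long total = new ForkJoinPool().invoke(new ParallelSum(data, 0, data.length));
              System.out.println("sum = " + total);
          }
      }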

    Read the article

  • Database Web Service using Toplink DB Provider

    - by Vishal Jain
    With JDeveloper 11gR2 you can now create database-based web services using the JAX-WS Provider. The key differences between this and the already existing PL/SQL Web Services support are:
    - Based on JAX-WS Provider
    - Supports SQL queries for creating web services
    - Supports table CRUD operations
    This is present as a new option in the New Gallery under 'Web Services'. When you invoke the New Gallery option, it presents you with three options to choose from. In this entry I will explain the options of creating a service based on SQL queries and table CRUD operations.
    SQL Query based Service
    When you select this option, the 'Next' page asks you for the DB connection details. You can also choose if you want SOAP 1.1 or 1.2 format. For this example, I will proceed with SOAP 1.1, the default option. On the next page, you can give the SQL query. The wizard supports bind variables, so you can parametrize your queries. Give "?" as an input parameter you want to supply at runtime, and the "Bind Variables" button will be enabled. Here you can specify the name and type of the variable. Finish the wizard. Now you can test your service in the Analyzer. See that the bind variable specified comes as an input parameter in the Analyzer input form.
    CRUD Operations
    For this, at Step 2 of the wizard, select the radio button "Generate Table CRUD Service Provider". At the next step, select the DB connection and the table for which you want to generate the default set of operations. Finish the wizard. Now, run the service in the Analyzer for a quick check. See that all the basic operations are exposed.
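    As an illustration of the bind-variable support described above, a query like the following could be entered in the wizard (the table and columns are from the familiar HR sample schema and are just an example); the "?" placeholder is then mapped to a named, typed bind variable that surfaces as an input parameter of the generated service:
      SELECT employee_id, first_name, last_name, salary
      FROM   employees
      WHERE  department_id = ?   -- bound at runtime through the web service input parameter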

    Read the article

  • “Big Data” Is A Small Concept Unless You Can Apply It To The Customer Experience

    - by Michael Hylton
    There’s been a lot of recent talk in the industry about “big data”.  Much can be said about the importance of big data and the results from it, but you need to always consider the customer experience when analyzing and applying customer data. Personalization and merchandising drive the user experience.  Big data should enable you to gain valuable insight into each of your customers and apply that insight at the moment they are on your Web site, talking to one of your call center agents, or any other touchpoint.  While past customer experience is important, you need to combine that with what your customer is doing on your Web site now as well what they are doing and saying on social networking sites.  It’s key to have a 360 degree view of your customer across all of your touchpoints in order to provide that relevant and consistent experience that they come to expect when interacting with your brand. Big data can enable you to effectively market, merchandize, and recommend the right products to the right customers at the right time.  By taking customer data and applying it to product recommendations, you have an opportunity to gain a greater share of wallet through the cross-selling and up-selling of additional products and services.  You can also build sustaining loyalty programs to continue to engage with your customers throughout their long-term relationship with your brand.

    Read the article

  • Portal 11.1.1.4 Certified with E-Business Suite

    - by Steven Chan
    Oracle Portal 11g allows you to build, deploy, and manage enterprise portals running on Oracle WebLogic Server.  Oracle Portal 11g includes integration with Oracle WebCenter Services 11g and BPEL, and support for the open portlet standards JSR 168, WSRP 2.0, and JSR 301.
    Portal 11g (11.1.1.4) is now certified with Oracle E-Business Suite.  Portal 11.1.1.4 is part of Oracle Fusion Middleware 11g Release 1 Version 11.1.1.4.0, also known as FMW 11g Patchset 3.  Certified E-Business Suite releases are:
    - EBS Release 11i 11.5.10.2 + ATG RUP 7 and higher
    - EBS Release 12.0.6 and higher
    - EBS Release 12.1.1 and higher
    If you're running a previous version of Portal, there are a number of certified and supported upgrade paths to Portal 11g (11.1.1.4).

    Read the article

  • Podcast Show Notes: William Ulrich and Neal McWhorter on Business Architecture

    - by Bob Rhubart
    The latest ArchBeat podcast program features a four-part conversation with William Ulrich and Neal McWhorter, the authors of Business Architecture: The Art and Practice of Business Transformation, available from Meghan-Kiffer Press.
    - Listen to Part 1: Bill and Neal cover the basics and discuss the effects of the lack of business architecture on organizations.
    - Listen to Part 2 (Jan 19): What really happens to the billions of dollars annually invested in IT.
    - Listen to Part 3 (Jan 26): Why the IT and business sides of many organizations can’t play nice.
    - Listen to Part 4 (Feb 2): How IT architects and business architects can work together to get the ship back on course and keep it there.
    Connect: William Ulrich – Website | LinkedIn | Business Architecture Guild; Neal McWhorter – Website | LinkedIn | Business Architecture Group on OMG
    Coming Soon: Bob Hensle, Director, Oracle Enterprise Architecture Group, discusses the recently launched IT Solutions from Oracle (ITSO) library of documents. Excerpts from a recent OTN Architect Community Virtual Meet-up.

    Read the article

  • MySQL Connect Conference: My Experience

    - by Hema Sridharan
    It was a great experience to attend the MySQL Connect Conference for the first time ever. Personally I was very much enthralled to present about "How to make MySQL Backups", besides attending different sessions to absorb more knowledge about the technical prospects of MySQL. One of the agenda items in my presentation was "MySQL Enterprise Backup" functionality and features. There were a total of 40 attendees in the session, who were very much interested in the MySQL Enterprise Backup product and gave positive feedback as well as areas of improvement for our product. Some of our features brought a lot of excitement and smiles amongst our customers, including:
    1. Performance improvements in MEB 3.8.0
    2. The incremental-base option from MEB 3.7.1, where there is no need to specify the directory name of the previous backup to fetch the lsn values; instead they can be fetched directly from the backup_history table using --incremental-base=history:last_backup
    3. The only-innodb-with-frm option introduced in MEB 3.7: a true online hot backup of InnoDB tables.
    I also attended a session on a similar topic, "MEB Best Practices", conducted by Sanjay Manwani, where he double-clicked on all the features and best strategies of backup & restore. I also got an opportunity to attend other sessions, including:
    * Enabling the new generation of web and cloud services with MySQL 5.6 replication
    * Getting the most out of MySQL with MySQL Workbench
    * InnoDB compression for OLTP
    * Scaling for the Web and Cloud with MySQL replication
    Above all, I had some special moments at the conference, including meeting some of the executives / colleagues for the first time f2f. On the whole, the first MySQL Connect conference was a great success in terms of manifesting the features of our products, direct feedback from customers, and team building.  We also had some applauding yahoo moments when Tomas Ulin announced different releases including MySQL 5.6 RC, the Connector Python 1.0 and ODBC 5.2 releases, MySQL Cluster 7.3, additions to MySQL Enterprise Edition, etc.
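    To make the incremental-base feature mentioned above concrete, here is a sketch of how such a backup might be taken with the mysqlbackup command; the directories and user are placeholders, not values from the session:
      # Full backup first (illustrative paths and credentials)
      mysqlbackup --user=backupadmin --password --backup-dir=/backups/full backup

      # Subsequent incrementals: with --incremental-base=history:last_backup,
      # MEB looks up the starting LSN from the backup_history table itself,
      # so there is no need to point at the previous backup directory.
      mysqlbackup --user=backupadmin --password \
        --incremental --incremental-base=history:last_backup \
        --incremental-backup-dir=/backups/incr1 backup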

    Read the article

  • SaaS Customer Service Matters

    - by charles.knapp
    You probably know that Oracle CRM On Demand goes beyond contact and transaction tracking by providing valuable real-time insights. Do you know that Oracle CRM On Demand also delivers valuable service to our customers? Don't take my word for it. "Prior to Oracle CRM On Demand, we were too busy looking in the rear view mirror on our sales activities and needed a forward-looking tool to maximize sales and coaching opportunities," said Christian Doelle, Vice President Sales & Marketing, MonierLifetile. "After evaluating other organization's solutions, we found Oracle as the most proven with the real-time reporting and detailed reviews of sales opportunities that helped us to address our blind spots. Additionally, we have found throughout our implementation phase that Oracle's commitment to customer attention and service is incomparable." Learn more here about MonierLifetile's experience with Oracle CRM On Demand.

    Read the article

  • Webcast Replay Available: E-Business Suite Release 12.1 Upgrade Best Practices - Technical Insight

    - by BillSawyer
    I am pleased to release the replay and presentation for the latest ATG Live Webcast: E-Business Suite Release 12.1 Upgrade Best Practices - Technical Insight (Presentation).
    Udayan Parvate, Director, E-Business Suite Release Engineering, and Uday Moogala, Senior Principal Engineer, Applications Performance, discussed the best practices that you can apply when upgrading your E-Business Suite instance to Release 12.1 and beyond. They discussed upgrade paths, resources, and practices to minimize downtime during the upgrade. (April 2012)
    Finding other recorded ATG webcasts
    The catalog of ATG Live Webcast replays, presentations, and all ATG training materials is available in this blog's Webcasts and Training section.

    Read the article

  • Dynamic Bursting ... no really!

    - by Tim Dexter
    If any of you have seen me or my colleagues present BI Publisher to you then we have hopefully mentioned 'bursting.' You may have even seen a demo where we talk about being able to take a batch of data, say invoices; split them by some criteria, say customer id; format them with a template; generate the output and then deliver the documents to the recipients with a click. We, and especially I, always say this can be completely dynamic! By this I mean that you could store customer preferences in a database: what layout each customer would like, what output format they would like, and how they would like the document delivered. We (I) talk a good talk, but typically don't do the walk in a demo. We hard code everything in the bursting query or bursting control file to get the concept across. But no more, peeps! I have finally put together a dynamic bursting demo! It's been minutes in the making but it's been tough to find those minutes! Read on ...
    It's nothing amazing in terms of making the burst dynamic. I created a CUSTOMER_PREFS table with some simple UI in an APEX application so that I can maintain the customers' requirements. In EBS you have descriptive flexfields that could do the same thing, or probably even 'contact' fields to store most of the info. Here's my table structure:
      Name                           Type
      ------------------------------ --------
      CUSTOMER_ID                    NUMBER(6)
      TEMPLATE_TYPE                  VARCHAR2(20)
      TEMPLATE_NAME                  VARCHAR2(120)
      OUTPUT_FORMAT                  VARCHAR2(20)
      DELIVERY_CHANNEL               VARCHAR2(50)
      EMAIL                          VARCHAR2(255)
      FAX                            VARCHAR2(20)
      ATTACH                         VARCHAR2(20)
      FILE_LOC                       VARCHAR2(255)
    Simple enough, right? We just need CUSTOMER_ID as the key for the bursting engine to join it to the customer data at burst time. I have not covered the full delivery options, just email, fax and file location. Remember, it's a demo, people :0) However, the principle is exactly the same for each delivery type. They each have a set of attributes that need to be provided and you will need to handle that in your bursting query. On a side note, in EBS you use a bursting control file; you can apply the same principles that I'm laying out here, you just need to get the customer bursting info into the XML data stream so that you can refer to it in the control file using XPATH expressions. Next, we need to look up what attributes or parameters are required for each delivery method. That can be found in the documentation here.
    Now we know the combinations of parameters and delivery methods, we can construct the query using a series of decode statements:
      select distinct
         cp.customer_id "KEY",
         cp.template_name TEMPLATE,
         cp.template_type TEMPLATE_FORMAT,
         'en-US' LOCALE,
         cp.output_format OUTPUT_FORMAT,
         'false' SAVE_FORMAT,
         cp.delivery_channel DEL_CHANNEL,
         decode(cp.delivery_channel, 'FILE', cp.file_loc, 'EMAIL', cp.email, 'FAX', cp.fax) PARAMETER1,
         decode(cp.delivery_channel, 'FILE', c.cust_last_name||'_orders.pdf', 'EMAIL', '[email protected]', 'FAX', 'faxserver.com') PARAMETER2,
         decode(cp.delivery_channel, 'FILE', NULL, 'EMAIL', '[email protected]', 'FAX', NULL) PARAMETER3,
         decode(cp.delivery_channel, 'FILE', NULL, 'EMAIL', 'Your current orders', 'FAX', NULL) PARAMETER4,
         decode(cp.delivery_channel, 'FILE', NULL, 'EMAIL', 'Please find attached a copy of your current orders with BI Publisher, Inc', 'FAX', NULL) PARAMETER5,
         decode(cp.delivery_channel, 'FILE', NULL, 'EMAIL', 'false', 'FAX', NULL) PARAMETER6,
         decode(cp.delivery_channel, 'FILE', NULL, 'EMAIL', '[email protected]', 'FAX', NULL) PARAMETER7
      from cust_prefs cp, customers c, orders_view ov
      where cp.customer_id = c.customer_id
        and cp.customer_id = ov.customer_id
      order by cp.customer_id
    Pretty straightforward; you just need to test, test, test the query and ensure it's bringing back the correct data based on each customer's preferences. Notice the NULL values for parameters that are not relevant for a given delivery channel. You should end up with bursting control data that the bursting engine can use.
    Now your users can run the burst and documents will be formatted, generated and delivered based on the customer prefs.
    If you're interested in the example, I have used the sample OE schema data for the base report. The report files and CUST_PREFS table are zipped up here. The zip contains the data model (.xdmz), the report and templates (.xdoz) and the sql scripts to create and load data to the CUST_PREFS table.  Once you load the report into the catalog, you'll need to create the OE data connection and point the data model at it. You'll probably need to re-point the report to the data model too. Happy Bursting!

    Read the article

  • InfiniBand Enabled Diskless PXE Boot

    - by Neeraj Gupta
    When you want to bring up a compute server in your environment and need InfiniBand connectivity, usually you go through various installation steps. This could involve operating systems like Linux, followed by a compatible InfiniBand software distribution, associated dependencies and configurations. What if you just want to run some InfiniBand diagnostics or troubleshooting tools from a test machine? What if something happened to your primary machine and, while recovering in rescue mode, you also need access to your InfiniBand network? Often we use opensource, community-supported small Linux distributions, but they don't come with the required InfiniBand support and tools. In this weblog, I am going to provide instructions on how to add InfiniBand support to a specific Linux image - Parted Magic. This is a free-to-use opensource Linux distro often used to recover or rescue machines. The distribution itself will not be changed at all. Yes, you heard it right! I have built an InfiniBand add-on package that will be passed to the default kernel and initrd to get this all working.
    Prerequisites
    You will need to have a PXE server ready on your ethernet based network. The compute server you are trying to PXE boot should have a compatible IB HCA and must be connected to an active IB network.
    Required Downloads
    - Download the Parted Magic small distribution for PXE from the Parted Magic website.
    - Download the InfiniBand PXE Add On package. Right click and download from here. Do not extract the contents of this file. You need to use it as is.
    Prepare PXE Server
    Extract the contents of the downloaded pmagic distribution into a temporary directory. Inside the directory structure, you will see a pmagic directory containing two files - bzImage and initrd.img. Copy this directory into your TFTP server's root directory. This is usually /tftpboot unless you have a different setup. For example:
      cp pmagic_pxe_2012_2_27_x86_64.zip /tmp
      cd /tmp
      unzip pmagic_pxe_2012_2_27_x86_64.zip
      cd pmagic_pxe_2012_2_27_x86_64
      # ls -l
      total 12
      drwxr-xr-x  3 root root 4096 Feb 27 15:48 boot
      drwxr-xr-x  2 root root 4096 Mar 17 22:19 pmagic
      cp -r pmagic /tftpboot
    As I mentioned earlier, we don't change anything in the default pmagic distro. Simply provide the add-on package via PXE append options. If you are using a menu based PXE server, then add an entry to your menu. For example, /tftpboot/pxelinux.cfg/default can be appended with the following section:
      LABEL Diskless Boot With InfiniBand Support
      MENU LABEL Diskless Boot With InfiniBand Support
      KERNEL pmagic/bzImage
      APPEND initrd=pmagic/initrd.img,pmagic/ib-pxe-addon.cgz edd=off load_ramdisk=1 prompt_ramdisk=0 rw vga=normal loglevel=9 max_loop=256
      TEXT HELP
      * A Linux Image which can be used to PXE Boot w/ IB tools
      ENDTEXT
    Note: Keep the line starting with "APPEND" as a single line. If you use host specific files in pxelinux.cfg, then you can use that specific file to add the above mentioned entry.
    Boot Computer over PXE
    Now boot your desired compute machine over PXE. This does not have to be over InfiniBand. Just use your standard ethernet interface and network. If using menus, then pick the new entry that you created in the previous section.
    Enable IPoIB
    After a few minutes, you will be booted into the Parted Magic environment. Open a terminal session and see if InfiniBand is enabled. You can use commands like:
      ifconfig -a
      ibstat
      ibv_devices
      ibv_devinfo
    If you are connected to an InfiniBand network with an active Subnet Manager, then your IB interfaces must have come online by now.
    You can proceed and assign IP addresses to them. This will enable you at the IPoIB layer.
    Example InfiniBand Diagnostic Tools
    I have added several InfiniBand diagnostic tools in this add-on. You can use any from the following list:
      ibstat, ibstatus, ibv_devinfo, ibv_devices
      perfquery, smpquery
      ibnetdiscover, iblinkinfo.pl
      ibhosts, ibswitches, ibnodes
    Wrap Up
    This concludes this weblog. Here we saw how to bring up a computer with IPoIB and InfiniBand diagnostic tools without installing anything on it. It's almost like running diskless!
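    As a minimal sketch of the IPoIB step above (the interface name ib0 and the addresses are illustrative; use values from your own IB subnet):
      # Bring up the first IPoIB interface and verify basic connectivity
      ifconfig ib0 192.168.100.10 netmask 255.255.255.0 up
      ifconfig ib0                  # confirm the address and that the interface is UP
      ping -c 3 192.168.100.11      # another host on the same IPoIB subnet (example address)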

    Read the article
