Search Results


  • "ODM" - One of the Support team's most valued acronyms

    - by graham.mckendry(at)oracle.com
    If you submit technical service requests (SRs) through the My Oracle Support portal, you may often see the term "ODM" used in updates from our Support team. ODM is an acronym for "Oracle Diagnostic Methodology", which defines a standard problem-solving approach that all of Oracle Support uses for every technical SR. ODM provides a number of benefits to SRs - both for the Support organization and for the customer - including a consistent approach, higher quality, justified solutions, and ultimately faster resolution.

    (Screenshot: Example of an ODM "Issue Clarification" activity in a service request)

    The Oracle Diagnostic Methodology applies to both categories of technical SRs: Consultative (question-answer topics) and Problem-Solution. There are a few KM Notes that describe the steps of ODM; however, to keep things simple (and since those KM Notes appear to be a bit outdated), I'll summarize the ODM stages here as follows:

    Consultative ODM - three mandatory stages:
    - ODM Question: Clarification of the customer's exact question.
    - ODM Answer: Thorough answer to the customer's question.
    - ODM Knowledge Content: Reference to new or existing knowledge base content, or explanation of why the particular SR does not necessarily require knowledge content.

    Problem-Solution ODM - eight mandatory stages:
    - ODM Issue Clarification: Clarification of the reported issue, including the symptoms, the steps to reproduce, and an outline of the business impact.
    - ODM Issue Verification: Confirmation that the issue has been verified, based on proof provided by the customer, such as screenshots, log files, or reproducing the issue during an Oracle Web Conference.
    - ODM Cause Determination: Succinct outline of the root cause of the issue.
    - ODM Cause Justification: Explanation as to why the root cause applies to this particular situation.
    - ODM Proposed Solution(s): Succinct outline of the potential solution(s) to resolve the issue.
    - ODM Proposed Solution(s) Justification: Explanation of why the proposed solution(s) will in fact resolve the issue.
    - ODM Solution Action Plan: Detailed, numbered instructions on how to execute the proposed solutions.
    - ODM Knowledge Content: Reference to new or existing knowledge base content, or explanation of why the particular SR does not necessarily require knowledge content.

    During these stages, you may see other optional ODM-related activities such as "ODM Data Collection", "ODM Action Plan", "ODM Research", and "ODM Test Case". Again, these structured tags help ensure a uniform methodology across your SRs. With this knowledge you should be able to better predict what's coming next in your SRs, as well as what you can do to help expedite the resolution process.

    Read the article

  • Missed the AutoVue 20.0 Webcast? Watch the Replay!

    - by [email protected]
    With today's busy schedules, it's oftentimes hard to get to everything we intended on a given day. Unfortunately, that sometimes means missing live webcasts of our favourite topics. Well, we've got good news for those who missed last month's webcast featuring the latest release of AutoVue 20.0: the webcast recording is now available for you to watch on demand and at your convenience, so click here to watch the replay. You'll learn about all that is new and compelling in release 20.0, as well as see a demo highlighting some of the key new capabilities.

    Read the article

  • Part 5: Choose the right tool - or - why

    - by volker.eckardt(at)oracle.com
    Consider the following client request: "Please create a report for us to list expenses." Which Oracle EBS tool would you choose? There are plenty of options available: Oracle Reports, BI Publisher with a PDF or Excel layout, Discoverer, BI Publisher Stand Alone, PDF online generation, Oracle WebADI, plain SQL*Plus as a Concurrent Program, an online review option ... Assuming you, as development lead, have to decide, you may decide based on the skill set available in your development team. However, is this a good decision?

    An important question to influence the decision is the "why" question: why do you need this report, what process is behind it, and what exactly do you want to achieve? We often see data created or printed although it would be much better to get the data into Excel and upload changes via WebADI directly.

    There are more points that should drive your decision:
    - How many requirements of this kind have you got?
    - Has this technique been used in the project already?
    - Are there related reusable components you may benefit from?
    - How difficult is it to maintain your solution?
    - Can you merge this report with another one, to reduce test and maintenance work?

    In addition, your own development standards should guide you towards a good decision. In one of my own projects, we discussed such topics in our weekly team meeting. By utilizing the team's knowledge best, you may come to a better decision, and additionally, your team supports your decision. Unfortunately, I have rarely seen dedicated team trainings or planned knowledge transfer to support such processes. Often the pressure to deliver on time is too high to leave time for discussion and decision-making. But exactly this can help keep maintenance costs low, by limiting the number of alternative solutions for similar requirements. Lastly, design decisions should be documented so that another person can take them over easily. Decisions should be reviewed and updated regularly, to reflect changes in related procedures and Oracle products or product versions.

    Summary: Oracle EBS offers plenty of alternatives to implement customizations. Create and maintain a decision tree to support the design process. Do not leave the decision to the developer side alone. Limit the number of alternative solutions as much as possible; choose the one that is most appropriate, also from a future maintenance perspective.

    Read the article

  • Oracle OpenWorld & JavaOne + Develop 2010

    - by [email protected]
    (Russian-language announcement; the original text was garbled in encoding. The recoverable details:) An invitation to attend Oracle OpenWorld 2010, September 19-23, 2010, Moscone Center, San Francisco, CA. The conference covers all of Oracle's key product directions - Applications, Database, Middleware - as well as Oracle's server and storage systems and partner content.

    Read the article

  • iSeminar: WebCenter JDEdwards & Siebel Application Integration

    - by kellsey.ruppel(at)oracle.com
    View this iSeminar to see a demonstration of the Oracle WebCenter Suite Integration with JD Edwards and Siebel Enterprise Applications.

    Read the article

  • VirtualBox Port Forward

    - by john.graves(at)oracle.com
    A great new feature in VirtualBox 4.0 is the ability to use NAT networking and forward ports without needing to use ssh -L/-R tricks. This is great for booting multiple VM domains simultaneously. It is possible to have several instances which map back to the host machine, with different ports on localhost:* automatically forwarding to the correct VM. This avoids the hassle of setting up DNS entries or static IP addresses.

    In this example, I'm mapping the host ports 3xxxx to the VM's well-known server ports.

    Note: It is important to set up the frontend HTTP host/port to avoid incorrect URL rewriting. You may also need to set up an HTTP channel to deal with local traffic, which uses the network address 10.0.2.15.

    Happy VMing.
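    The post's screenshots aren't reproduced here, but the same rules can be created from the command line. A minimal sketch, assuming a NAT-attached VM named "wls-vm" and the 3xxxx convention above (the VM name and port numbers are illustrative):

        # Forward host port 37001 on localhost to the guest's WebLogic port 7001
        VBoxManage modifyvm "wls-vm" --natpf1 "admin,tcp,127.0.0.1,37001,,7001"
        # Forward host port 30022 to the guest's SSH port 22
        VBoxManage modifyvm "wls-vm" --natpf1 "ssh,tcp,127.0.0.1,30022,,22"

    Each VM gets its own host-port range, so several guests can expose the same well-known server ports side by side.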

    Read the article

  • AndEngine GLES2 - getting Black screen on emulator 4.1

    - by dizworld.com
    I'm new to AndEngine. I created the following code:

        public class MainActivity extends BaseGameActivity {
            static final int CAMERA_WIDTH = 800;
            static final int CAMERA_HEIGHT = 480;
            public Font mFont;
            public Camera mCamera;
            // A reference to the current scene
            public Scene mCurrentScene;
            public static BaseActivity instance;

            public EngineOptions onCreateEngineOptions() {
                instance = this;
                mCamera = new Camera(0, 0, CAMERA_WIDTH, CAMERA_HEIGHT);
                return new EngineOptions(true, ScreenOrientation.LANDSCAPE_SENSOR,
                        new RatioResolutionPolicy(CAMERA_WIDTH, CAMERA_HEIGHT), mCamera);
            }

            @Override
            public void onCreateResources(OnCreateResourcesCallback arg0) throws Exception {
                mFont = FontFactory.create(this.getFontManager(), this.getTextureManager(),
                        256, 256, Typeface.create(Typeface.DEFAULT, Typeface.BOLD), 32);
                mFont.load();
            }

            @Override
            public void onCreateScene(OnCreateSceneCallback arg0) throws Exception {
                mEngine.registerUpdateHandler(new FPSLogger());
                mCurrentScene = new Scene();
                Log.v("Scene", "enter");
                mCurrentScene.setBackground(new Background(0.09804f, 0.7274f, 0.8f));
                // return mCurrentScene;
            }

            @Override
            public void onPopulateScene(Scene arg0, OnPopulateSceneCallback arg1) throws Exception {
                // TODO Auto-generated method stub
            }
        }

    The example code I found on other sites returns the scene, but in AndEngine GLES2 the onCreateScene() method does not return a scene, so my first run shows only a black screen. Any suggestions? :)
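    A likely cause, sketched under the assumption that this is the callback-based GLES2 branch of AndEngine: each lifecycle method must hand its result back through the callback parameter it receives; if the callbacks are never invoked, the engine never advances past a black screen.

        @Override
        public void onCreateScene(OnCreateSceneCallback pCallback) throws Exception {
            mEngine.registerUpdateHandler(new FPSLogger());
            mCurrentScene = new Scene();
            mCurrentScene.setBackground(new Background(0.09804f, 0.7274f, 0.8f));
            // Instead of returning the scene, pass it to the callback.
            pCallback.onCreateSceneFinished(mCurrentScene);
        }

    onCreateResources() and onPopulateScene() would likewise end with onCreateResourcesFinished() and onPopulateSceneFinished(pScene), respectively.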

    Read the article

  • [WebLogic, Java] WebLogic Developer/Production Web Profile, Full Java EE 6 Platform - Chat Transcript and Slides from OTN Virtual Developer Day

    - by yosuke.arai(at)oracle.com
    (Japanese-language post; the original text was garbled in encoding. The recoverable details:) The chat transcript and slides from the OTN Virtual Developer Day session on WebLogic Server and Java EE 6 are available, including answers to the Q&A. Among the recoverable question topics: Q1) WLS 10.3.4 and Java EE 6; Q4) Java EE 6 and IDE support; Q12) JDeveloper and Java EE 6; Q26) managed beans and EJB; Q29) XML configuration.

    Read the article

  • Unity: Render 2D textures on a 3D object's face

    - by www.Sillitoy.com
    I am not familiar with 3D graphics, and I'd like to know the right way to render some 2D figures on different points of a wider face of a 3D object. My 3D object is just a cube representing a poker table. I have a 2D PNG for players' placeholders, and I'd like to render these figures on the 3D object where needed. An alternative solution would be to render the whole face with a big picture containing all the placeholder figures. However, that would be a waste of memory and thus less efficient. What do you suggest?

    Read the article

  • Upcoming event - Oracle Solaris 11: What's New Since the Launch

    - by nospam(at)example.com (Joerg Moellenkamp)
    On April 25th a web-based event about Solaris 11 takes place, named "Oracle Solaris 11: What's New Since the Launch".

    Agenda:
    - 9:00 a.m. PDT - Keynote: Oracle Solaris - Strategy and Update. Markus Flierl, Vice President, Oracle Solaris Engineering
    - 9:40 a.m. PDT - Oracle Solaris 11: Extreme Engineering - A Technical Update. Dan Price, Senior Principal Product Engineer, Oracle Solaris Engineering; Bart Smaalders, Senior Principal Product Engineer, Oracle Solaris Engineering
    - 10:20 a.m. PDT - Customers and Partners: Why We Moved to Oracle Solaris 11. A discussion of the reasons why businesses and commercial software developers have adopted Oracle Solaris 11, from the people responsible for these decisions
    - 11:00 a.m. PDT - Oracle Solaris: Core to the Oracle Systems Strategy. John Fowler, Executive Vice President of Systems, Oracle

    9:00 a.m. PDT is 18:00 in Berlin, 17:00 in London, and, I assume, much too late in Tokyo at 1:00 a.m. the next day ...

    Read the article

  • Hardware issues on Samsung NF208 (NF210)

    - by KristoferA - Huagati.com
    I'm trying to get Ubuntu 10.10 running on my wife's Samsung NF208 (same spec as NF210, but shipped without OS), and I have run into a pile of problems: At first, there were problems with audio, display brightness, and WiFi, so I reinstalled Ubuntu from scratch. After reinstalling, the audio has started working but the WiFi loses network access all the time and then it takes 5-10 minutes for it to reconnect to the network. Also, display brightness is at its lowest. I have tried to use the brightness command but it won't run. Is this system utterly incompatible with Ubuntu, or are there working WiFi and display drivers for it somewhere? I have googled for days but haven't found anything useful. Help me. Update: I never got it working properly. I came across lots of useful tips and tricks over at the forum linked to in the accepted answer but I just wasn't able to get it working and stable enough for the intended use. Hopefully a future version of Ubuntu and/or the samsung tools will solve that. Related thread over at the other forum: http://www.voria.org/forum/viewtopic.php?f=3&t=682

    Read the article

  • it started with a broken package

    - by Inteloidl.com
    It all started when I tried to update to the latest LTS, but the problem is that the update was interrupted and some broken packages appeared. I used Synaptic to repair them, but it won't repair; I've tried some console commands from a forum topic ... and in the end, when I try to open Synaptic or Update Manager, this is what appears. Any ideas?

        E: Dynamic MMap ran out of room. Please increase the size of APT::Cache-Limit. Current value: 25165824. (man 5 apt.conf)
        E: Error occurred while processing mantis (NewVersion1)
        E: Problem with MergeList /var/lib/apt/lists/c.archive.ubuntu.com_ubuntu_dists_lucid_universe_binary-i386_Packages
        E: The package lists or status file could not be parsed or opened.
        E: _cache-open() failed, please report.
        W: Unable to munmap
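    The first error message itself points at the usual workaround. A sketch, assuming a stock Lucid apt setup (the file name and limit value are illustrative):

        # /etc/apt/apt.conf.d/70cache-limit - raise the apt cache limit
        APT::Cache-Limit "100000000";

    Then clear the package lists that trigger the MergeList error and rebuild them:

        sudo rm -rf /var/lib/apt/lists/*
        sudo apt-get update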

    Read the article

  • Display monitor info via command line

    - by icyrock.com
    Is there a way to query monitor information from the command line? For example, getting the monitor model, similar to what lspci does for graphics card info, or whether it's currently on or off, things like that. If possible, what kinds of basic information such as the above can be easily gathered? For example, is it possible to determine whether the monitor is in portrait or landscape position? Or whether it has built-in speakers? The command line is my preference, but if there's a GUI method, I'd like to hear about it, too.

    Read the article

  • Dental adventures ...

    - by nospam(at)example.com (Joerg Moellenkamp)
    Today my dentist pulled out all the stops. Really. Just to explain ... half a year ago or so, I got the base part of a dental implant in my jaw. Today I got the teeth ... besides other things. At first I made the acquaintance of a really weird instrument for removing a temporary tooth crown ... it's called a "Hirtenstab" in German ... or "crown remover" in English ... I would call it a tool of torture. After the fourth time using it, the temporary flew through the room, and the final crowns were set on the two teeth left and right of the implant. But the strangest instrument I saw was the one used to fix the abutment (the thing between the crown and the screw thread implanted in the jaw). They really use a torque wrench (albeit a really small one) to screw the abutment into the jaw. Well ... at least I have a zirconium dioxide smile now ...

    Read the article

  • Performance impact of Zones.

    - by nospam(at)example.com (Joerg Moellenkamp)
    I was really astonished when I saw this question, because this question was an old acquaintance from years ago that I hadn't heard for a long time. However, there it was again. The question: "What's the overhead of Zones?". Sun wasn't and Oracle isn't saying "zero"; we say "minimal". However, during all the performance analysis gigs on customer systems I have done since the introduction of Zones, I have failed to measure any overhead caused by zones themselves. What I did see was additional load introduced by processes that wouldn't be there if you used only one zone: additional monitoring daemons, or additional daemons with a controlling or supervising job for the application, which resulted in slightly longer process runtimes because those daemons wanted some cycles on the CPU as well. So when someone tells me that he or she measured a slight slowdown, I ask whether they really measured the impact of the virtualization layer, or of a side effect described above. It may seem a little hard to believe that a virtualization technology has no overhead, but keep in mind that there is no hypervisor; just one kernel is running that looks and behaves like many operating system instances to apps and users. While this imposes some limits on the technology (because there is just one kernel running, you can't have zones with different kernel versions ... obvious even to the cursory observer), it is key to its light weight and thus to the low overhead. Continue reading "Performance impact of Zones."
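    A quick way to check where such extra load actually comes from is per-zone accounting with standard Solaris tooling; a sketch (the interval is in seconds, and zone names are whatever you configured):

        # Summarize CPU and memory usage per zone, refreshing every 5 seconds
        prstat -Z 5
        # On Solaris 11, report per-zone utilization over 5-second intervals
        zonestat 5

    If the slowdown shows up as CPU time in helper daemons rather than in the workload's own processes, it is a side effect of the deployment, not of the zones layer itself.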

    Read the article

  • Controlling the order in which files get processed

    - by [email protected]
    The File/Ftp Adapter allows you to control the order in which files get processed. For example, you might want the files to be processed in order of their modified times, file sizes, etc. Luckily, the File/Ftp Adapters allow you to achieve this via a "FileSorter" attribute that you can define in the JCA file for your inbound File/Ftp Adapter service.

    The File/Ftp Adapters ship with two predefined sorters that use the last modified times (illustrated with a screenshot in the original post). However, there are times when you would like to define the order yourself. In situations like this, you can implement a Java Comparator and register the comparator with the File Adapter as described below:

    1) Write a comparator. For example, a FileSizeSorter comparator sorts the files in descending order of their sizes (also shown as a screenshot in the original post).

    2) In order to compile this class, though, you will need fileAdapter.jar in the classpath.
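    Since the post's comparator only survives as a screenshot, here is a minimal sketch of what such a class could look like, assuming the adapter hands the comparator java.io.File objects (the class name follows the post; the element type is my assumption):

        import java.io.File;
        import java.util.Comparator;

        // Sorts files in descending order of size, so the largest file is picked up first.
        public class FileSizeSorter implements Comparator<File> {
            @Override
            public int compare(File a, File b) {
                // Comparing b against a (rather than a against b) reverses the
                // natural ascending order, yielding largest-first.
                return Long.valueOf(b.length()).compareTo(Long.valueOf(a.length()));
            }
        }

    The class would then be registered through the FileSorter attribute in the inbound adapter's JCA file, as the post describes.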

    Read the article

  • Cloud Computing in words of one syllable

    - by harry.foxwell(at)oracle.com
    A colleague of mine challenged me to describe Cloud Computing in words of one syllable so that even his 80-year-old mother-in-law could understand the concept. Hmmmm...

    The Cloud lets you do all your work on the Web or on your own net. It lets you set up your own work; no one has to set it up for you. When you need more disk space, the cloud makes it for you. When you need more speed, the cloud adds more gear to make your jobs go fast. You share the cloud with more than just your own work, and you just pay for what you use. The cloud is not new; this type of work has been done for years; just the word is new. Now you know what the cloud is. Or not.

    Read the article

  • Working with Google Webmaster Tools

    - by com
    My first question is about Crawl errors in Google Webmaster Tools. Crawl errors is divided into a few sections, one of which is HTTP. I assume that all the broken links under HTTP were somehow found by the crawler, and that these are not the links from the sitemap. If they were found by scanning all sitemap pages for links, why doesn't it mention the source page, like the Linked From column in the Sitemap section does? And what is the meaning of Linked From there? I thought that if the name of the section is Sitemap, all URLs should be taken from the sitemap, so why is there a Linked From column?

    The second question: what is the best way to treat search on the site? How come the search result pages are getting indexed? Because all the search result pages are getting indexed, I have too many pages in Linked From. What's the right practice?

    Question three: in order to improve response time in WMT, can I redirect all crawler requests to a designated free web server? Is this good practice?

    Question four: how should I treat the Google Analytics code (with the parameters PageView and PageLoadTime) in case a user requests a non-existing page: should I render the Google code or not? Right now I use the Google Analytics code on the common template page, such that every page, including non-existing pages with an error message, contains the Google Analytics code; it seems like this has an influence on WMT.

    Read the article

  • Deduping your redundancies

    - by nospam(at)example.com (Joerg Moellenkamp)
    Robin Harris of StorageMojo pointed to an interesting article on ACM Queue about deduplication and its impact on the resiliency of your data against data corruption.

    The problem in short: a considerable number of filesystems store important metadata at multiple locations. For example, the ZFS rootblock is copied to three locations; other filesystems have similar provisions to protect their metadata. However, you can easily prove that the rootblock pointer in the uberblock of ZFS, for example, points to blocks with absolutely equal content in all three locations (with zdb -uu and zdb -r). It has to be that way, because they are protected by the same checksum. A number of devices offer block-level dedup, either as an option or as part of their inner workings. When you store three identical blocks on such a device and it does block-level dedup internally, it may deduplicate your redundant metadata down to a single block on the non-volatile storage. When that block is corrupted, you essentially have three corrupted copies. Three hits with one bullet.

    This is indeed an interesting problem: a device doing deduplication doesn't know whether a block is important metadata or just a data block. This is the reason why I like deduplication the way it's done in ZFS: it's an integrated part of the filesystem, so important parts don't get deduplicated away. A disk accessed through a block-level interface doesn't know anything about the importance of a block. A metadata block is, to its inner mechanism, nothing different from a normal data block; there is no way to tell that this one is important and that its redundancies aren't allowed to fall prey to some clever deduplication mechanism.

    Robin talks about this in regard to the SandForce disk controllers, which use a kind of dedup to reduce some of the nasty effects of writing data to flash, but the problem is much broader: it is relevant whenever you are using a device with block-level deduplication. For most implementations you have to activate it explicitly by command, whereas certain devices do it by default or by design without you knowing about it. I'm not perfectly sure about that, though; given that storage administration and server administration are often different groups with different business objectives, I would ask your storage guys whether they have activated dedup on their boxes without telling anybody else, in order to speak less often with the storage sales rep.

    The problem is even more interesting with ZFS. You may use ditto blocks to store multiple copies of important data in the pool to increase redundancy, even when your pool consists of just one disk or a striped set of disks. However, when your device does dedup internally, it may remove your redundancy before the data hits the non-volatile storage. You've won nothing; you've just spent your disk quota on the LUNs in the SAN and made your disk admin happy with a good dedup ratio. Note that you can only fall into this specific "deduped ditto block" trap when your pool consists of a single device, because ZFS writes ditto blocks to different disks when there is more than one disk. Yet another reason to spend some extra thought before putting your zpool on a single LUN, especially when the LUN is sliced and diced out of a large heap of storage devices by a storage controller.

    However, I have one problem with the article's specific mention of ZFS: you can only be hit by this problem when you are using the deduplicating device for the pool itself, and in the specifically mentioned case of SSDs this isn't the use case. Most deployments of SSDs in conjunction with ZFS are hybrid storage pools, where rotating rust is used for the pool and SSDs are used as L2ARC/sZIL. And there it simply doesn't matter: when you really have to resort to the sZIL (your system went down), it doesn't matter whether one block or several blocks are corrupt; you have to fall back to the last known good transaction group on the device. On the other side, when a block in the L2ARC is corrupt, you simply read it from the pool, and in HSP implementations that is the already mentioned rotating rust. In conjunction with ZFS this is more interesting when using a storage array that is capable of dedup and where you use LUNs for your pool; however, as mentioned before, on those devices it's a user-made decision to enable it, so it's less probable that you are deduplicating your redundancies unknowingly. Other filesystems, lacking a capability similar to hybrid storage pools, are more "haunted" by this problem of SSDs using dedup-like mechanisms internally, because those filesystems really store the data on the SSD instead of using it just as an accelerating device.

    At the end, however, Robin is correct: it's yet another reason why protecting your data by creating redundancies, dispersing it over several disks (by mirror or parity RAIDs), is really important. No dedup mechanism inside a device can dedup away your redundancy when you write it to a totally different and independent device.
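    For the ditto-block point, a short sketch of the commands involved (the pool and dataset names are illustrative; copies is a standard ZFS property):

        # Keep two copies of every block in this dataset - only a real
        # protection when the pool spans more than one independent device
        zfs set copies=2 tank/important
        # Inspect the uberblock, whose rootblock pointer references the
        # three equal-content copies mentioned above
        zdb -uu tank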

    Read the article

  • EOFs in Solaris 11

    - by nospam(at)example.com (Joerg Moellenkamp)
    Well, from comments here and elsewhere, the two worst things seemed to be the removal of 32-bit support and the removal of support for certain components. Just to set things into perspective: Solaris 10 was released in 2005; the newest class of machines not supported by it, the Ultra 1, was released in 1995. The UltraSPARC systems not able to run Solaris 11 were released in 2001. Well, we have 2011 now ...

    Regarding 32-bit support: I don't think "playing around with Solaris on old gear" is the problem. For a start, most people play around with virtual machines. But there is something different: 64-bit computing was introduced for x86 in 2003 (yes, it's really that old). I think this move hurts more the people using boards with the first-gen Intel Atom "Silverthorne" as small file servers. And then, Solaris 10 won't disappear with Solaris 11.

    Read the article

  • Solaris 11 features: nscfg

    - by nospam(at)example.com (Joerg Moellenkamp)
    As you may have noticed, many configuration tasks around name services have moved into the SMF in Solaris 11. However, you don't have to use the svccfg command to configure them; you can still use the old files. You just can't simply edit them: you have to import the data into the SMF repository. There are many reasons for this, but the ultimate one lies in the start method; I will explain that later. In this article I want to explain how nscfg can help you with the naming service configuration of your system. Continue reading "Solaris 11 features: nscfg"
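    A sketch of the typical round trip, assuming the service FMRIs Solaris 11 uses for the name services (check the exact FMRI on your system):

        # Edit the legacy file as usual ...
        vi /etc/nsswitch.conf
        # ... then push its contents into the SMF repository
        nscfg import -f svc:/system/name-service/switch:default
        svcadm refresh svc:/system/name-service/switch:default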

    Read the article

  • Performance impact of the new mtmalloc memory allocator

    - by nospam(at)example.com (Joerg Moellenkamp)
    I have written on a number of occasions (here or here) that it can be really beneficial to use a different memory allocator for highly threaded workloads, as the standard allocator is, well ... the standard, but not very effective as soon as many threads come into play. I didn't write about this at the time, as it fell into my phase of silence, but there has been a change in the allocator area: Solaris 10 got a revamped mtmalloc allocator in Solaris 10 8/11 (as described in "libmtmalloc Improvements"). The new memory allocator was introduced to Solaris development by PSARC case 2010/212. But what's the effect of this new allocator, and how does it work? Rickey C. Weisner wrote a nice article, "How Memory Allocation Affects Performance in Multithreaded Programs", explaining the inner mechanisms of various allocators, and he also publishes test results comparing Hoard, mtmalloc, umem, the new mtmalloc, and the libc malloc. A really interesting read, and a must for people running applications on servers with a high number of threads.
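    Trying an alternate allocator usually needs no code change; a sketch, assuming a Solaris system with the revamped library installed (binary and source names are illustrative):

        # Run an unmodified binary against mtmalloc instead of the libc allocator
        LD_PRELOAD=libmtmalloc.so.1 ./my_threaded_app
        # Or link the allocator in at build time
        cc -o my_threaded_app app.c -lmtmalloc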

    Read the article

  • A new number one

    - by nospam(at)example.com (Joerg Moellenkamp)
    The Top500 supercomputer list has a new number one: the K Computer, built by Fujitsu, currently combines 68,544 SPARC64 VIIIfx CPUs, each with eight cores, for a total of 548,352 cores, almost twice as many as any other system in the TOP500. The K Computer is also more powerful than the next five systems on the list combined. Interestingly, this system runs under Linux. And it uses Tofu as its interconnect.

    Read the article
