Search Results

Search found 1441 results on 58 pages for 'troubleshooting'.


  • Option Trading: Getting the most out of the event session options

    - by extended_events
    You can control different aspects of how an event session behaves by setting the event session options as part of the CREATE EVENT SESSION DDL. The default settings for the event session options are designed to handle most of the common event collection situations, so I generally recommend that you just use the defaults. Like everything in the real world though, there are going to be a handful of “special cases” that require something different. This post focuses on identifying the special cases and the correct use of the options to accommodate those cases.

    There is a reason it’s called Default
    The default session options specify a total event buffer size of 4 MB with a 30 second latency. Translating this into human terms, this means that our default behavior is that the system will start processing events from the event buffer when we reach about 1.3 MB of events or after 30 seconds, whichever comes first.

    Aside: What’s up with the 1.3 MB? I thought you said the buffer was 4 MB? The Extended Events engine takes the total buffer size specified by MAX_MEMORY (4 MB by default) and divides it into 3 equally sized buffers. This is done so that a session can be publishing events to one buffer while other buffers are being processed. There are always at least three buffers; how to get more than three is covered later.

    Using this configuration, the Extended Events engine can “keep up” with most event sessions on standard workloads. Why is this? The fact is that most events are small, really small; on the order of a couple hundred bytes. Even when you start considering events that carry dynamically sized data (e.g. binary, text, etc.) or adding actions that collect additional data, the total size of the event is still likely to be pretty small. This means that each buffer can likely hold thousands of events before it has to be processed. When the event buffers are finally processed there is an economy of scale achieved, since most targets support bulk processing of the events, so they are processed at the buffer level rather than the individual event level. When all this is working together it’s more likely that a full buffer will be processed and put back into the ready queue before the remaining buffers (remember, there are at least three) are full.

    I know what you’re going to say: “My server is exceptional! My workload is so massive it defies categorization!” OK, maybe you weren’t going to say that exactly, but you were probably thinking it. The point is that there are situations that won’t be covered by the Default, but that’s a good place to start, and this post assumes you’ve started there so that you have something to look at in order to determine if you do have a special case that needs different settings. So let’s get to the special cases…

    What event just fired?! How about now?! Now?!
    If you believe the commercial adage from Heinz Ketchup (Heinz Slow Good Ketchup ad on YouTube), some things are worth the wait. This is not a belief held by most DBAs, particularly DBAs who are looking for an answer to a troubleshooting question fast. If you’re one of these anxious DBAs, or maybe just a Program Manager doing a demo, then 30 seconds might be longer than you’re comfortable waiting. If you find yourself in this situation then consider changing the MAX_DISPATCH_LATENCY option for your event session. This option will force the event buffers to be processed based on your time schedule.
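    By way of illustration only (the session name, event, predicate, and target here are made up for this sketch, not taken from the post), a session that lowers the dispatch latency to 5 seconds could look like this:

        CREATE EVENT SESSION [demo_low_latency] ON SERVER
        ADD EVENT sqlserver.sql_statement_completed (
            ACTION (sqlserver.sql_text)
            WHERE duration > 1000)          -- illustrative predicate: only "long" statements
        ADD TARGET package0.ring_buffer     -- ring_buffer is an asynchronous target
        WITH (MAX_MEMORY = 4 MB,            -- the default total buffer size
              MAX_DISPATCH_LATENCY = 5 SECONDS);  -- process buffers at least every 5 seconds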
    This option only makes sense for the asynchronous targets since those are the ones where we allow events to build up in the event buffer – if you’re using one of the synchronous targets this option isn’t relevant.

    Avoid forgotten events by increasing your memory
    Have you ever had one of those days where you keep forgetting things? That can happen in Extended Events too; we call it dropped events. In order to optimize for server performance and help ensure that Extended Events doesn’t block the server, events that can’t be published to a buffer because the buffer is full are dropped. You can determine if events are being dropped from a session by querying the dm_xe_sessions DMV and looking at the dropped_event_count field.

    Aside: Should you care if you’re dropping events? Maybe not – think about why you’re collecting data in the first place and whether you’re really going to miss a few dropped events. For example, if you’re collecting query duration stats over thousands of executions of a query it won’t make a huge difference to miss a couple executions. Use your best judgment.

    If you find that your session is dropping events it means that the event buffer is not large enough to handle the volume of events that are being published. There are two ways to address this problem. First, you could collect fewer events – examine your session to see if you are over collecting. Do you need all the actions you’ve specified? Could you apply a predicate to be more specific about when you fire the event? Assuming the session is defined correctly, the next option is to change the MAX_MEMORY option to a larger number. Picking the right event buffer size might take some trial and error, but a good place to start is with the number of dropped events compared to the number you’ve collected.

    Aside: There are three different behaviors for dropping events that you specify using the EVENT_RETENTION_MODE option. The default is to allow single event loss and you should stick with this setting since it is the best choice for keeping the impact on server performance low. You’ll be tempted to use the setting to not lose any events (NO_EVENT_LOSS) – resist this urge since it can result in blocking on the server. If you’re worried that you’re losing events you should be increasing your event buffer memory as described in this section.

    Some events are too big to fail
    A less common reason for dropping an event is when an event is so large that it can’t fit into the event buffer. Even though most events are going to be small, you might find a condition that occasionally generates a very large event. You can determine if your session is dropping large events by looking at the dm_xe_sessions DMV once again, this time checking the largest_event_dropped_size. If this value is larger than the size of your event buffer [remember, the size of your event buffer, by default, is max_memory / 3] then you need a large event buffer. To specify a large event buffer you set the MAX_EVENT_SIZE option to a value large enough to fit the largest event dropped, based on data from the DMV. When you set this option the Extended Events engine will create two buffers of this size to accommodate these large events. As an added bonus (no extra charge) the large event buffer will also be used to store normal events in the cases where the normal event buffers are all full and waiting to be processed. (Note: This is just a side-effect, not the intended use. If you’re dropping many normal events then you should increase your normal event buffer size.)
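    A quick way to check both of the conditions described here is to query the session DMV directly; a minimal sketch (the session name filter is the hypothetical one from the earlier sketch):

        SELECT s.name,
               s.dropped_event_count,          -- events lost because all buffers were full
               s.largest_event_dropped_size,   -- size of the largest event that would not fit in a buffer
               s.total_buffer_size             -- total memory across this session's buffers
        FROM sys.dm_xe_sessions AS s
        WHERE s.name = N'demo_low_latency';    -- hypothetical session name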
    Partitioning: moving your events to a sub-division
    Earlier I alluded to the fact that you can configure your event session to use more than the standard three event buffers – this is called partitioning and is controlled by the MEMORY_PARTITION_MODE option. The result of setting this option is fairly easy to explain, but knowing when to use it is a bit more art than science. First the science…

    You can configure partitioning in three ways: None, Per NUMA Node & Per CPU. This specifies the location where sets of event buffers are created, with fairly obvious implications. There are rules we follow for sub-dividing the total memory (specified by MAX_MEMORY) between all the event buffers that are specific to the mode used:
    None: 3 buffers (fixed)
    Node: 3 * number_of_nodes
    CPU: 2.5 * number_of_cpus

    Here are some examples of what this means for different Node/CPU counts:

    Configuration        None        Node         CPU
    2 CPUs, 1 Node       3 buffers   3 buffers    5 buffers
    6 CPUs, 2 Nodes      3 buffers   6 buffers    15 buffers
    40 CPUs, 5 Nodes     3 buffers   15 buffers   100 buffers

    Aside: Buffer size on multi-processor computers. As the number of Nodes or CPUs increases, the size of the event buffer gets smaller because the total memory is sub-divided into more pieces. The defaults will hold up to this for a while since each buffer set is holding events only from the Node or CPU that it is associated with, but at some point the buffers will get too small and you’ll either see events being dropped or you’ll get an error when you create your session because you’re below the minimum buffer size. Increase the MAX_MEMORY setting to an appropriate number for the configuration.

    The most likely reason to start partitioning is going to be related to performance. If you notice that running an event session is impacting the performance of your server beyond a reasonably expected level [Yes, there is a reasonably expected level of work required to collect events.] then partitioning might be an answer. Before you partition you might want to check a few other things:
    Is your event retention set to NO_EVENT_LOSS and causing blocking? (I told you not to do this.) Consider changing your event loss mode or increasing memory.
    Are you over collecting and causing more work than necessary? Consider adding predicates to events or removing unnecessary events and actions from your session.
    Are you writing the file target to the same slow disk that you use for TempDB and your other high activity databases? <kidding> <not really> It’s always worth considering the end to end picture – if you’re writing events to a file you can be impacted by I/O, network; all the usual stuff.

    Assuming you’ve ruled out the obvious (and not so obvious) issues, there are performance conditions that will be addressed by partitioning. For example, it’s possible to have a successful event session (e.g. no dropped events) but still see a performance impact because you have many CPUs all attempting to write to the same free buffer and having to wait in line to finish their work. This is a case where partitioning would relieve the contention between the different CPUs and likely reduce the performance impact caused by the event session. There is no DMV you can check to find these conditions – sorry – that’s where the art comes in. This is largely a matter of experimentation. On the bright side you probably won’t need to worry about this level of detail all that often. The performance impact of Extended Events is significantly lower than what you may be used to with SQL Trace.
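    Putting those partitioning pieces together, here is a sketch of a per-CPU partitioned session; the memory number and session definition are illustrative only, not a recommendation from the post:

        CREATE EVENT SESSION [demo_partitioned] ON SERVER
        ADD EVENT sqlserver.sql_statement_completed
        ADD TARGET package0.ring_buffer
        WITH (MAX_MEMORY = 32 MB,              -- raised so each per-CPU buffer stays a useful size
              MEMORY_PARTITION_MODE = PER_CPU); -- one buffer set per CPU (NONE and PER_NODE are the other modes)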
    You will likely only care about the impact if you are trying to set up a long running event session that will be part of your everyday workload – sessions used for short term troubleshooting will likely fall into the “reasonably expected impact” category.

    Hey buddy – I think you forgot something
    OK, there are two options I didn’t cover: STARTUP_STATE & TRACK_CAUSALITY. If you want your event sessions to start automatically when the server starts, set the STARTUP_STATE option to ON. (Now there is only one option I didn’t cover.) I’m going to leave causality for another post since it’s not really related to session behavior, it’s more about event analysis. - Mike
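    As a quick footnote on STARTUP_STATE, a one-line sketch (the session name is hypothetical, and note that some session options can only be altered while the session is stopped):

        ALTER EVENT SESSION [demo_low_latency] ON SERVER
        WITH (STARTUP_STATE = ON);   -- start this session automatically when SQL Server starts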

    Read the article

  • When does a product run out of support?

    That is a question I get regularly from customers. Microsoft has a great site where you can find that information. Unfortunately this site is not easy to find, and a lot of people are not aware of it. A good reason to promote it a little. So if you ever get a question on this topic, go to http://support.microsoft.com/lifecycle/search/Default.aspx. At that site, you can also find the details of the policy.

    Microsoft Support Lifecycle Policy
    The Microsoft Support Lifecycle policy took effect in October 2002, and applies to most products currently available through retail purchase or volume licensing and most future release products. Through the policy, Microsoft will offer a minimum of:
    10 years of support (5 years Mainstream Support and 5 years Extended Support) at the supported service pack level for Business and Developer products
    5 years Mainstream Support at the supported service pack level for Consumer/Hardware/Multimedia products
    3 years of Mainstream Support for products that are annually released (for example, Money, Encarta, Picture It!, and Streets & Trips)

    Phases of the Support Lifecycle

    Mainstream Support
    Mainstream Support is the first phase of the product support lifecycle. At the supported service pack level, Mainstream Support includes:
    Incident support (no-charge incident support, paid incident support, support charged on an hourly basis, support for warranty claims)
    Security update support
    The ability to request non-security hotfixes
    Please note: Enrollment in a maintenance program may be required to receive these benefits for certain products.

    Extended Support
    The Extended Support phase follows Mainstream Support for Business and Developer products. At the supported service pack level, Extended Support includes:
    Paid support
    Security update support at no additional cost
    Non-security related hotfix support requires a separate Extended Hotfix Support Agreement to be purchased (per-fix fees also apply)
    Please note: Microsoft will not accept requests for warranty support, design changes, or new features during the Extended Support phase. Extended Support is not available for Consumer, Hardware, or Multimedia products. Enrollment in a maintenance program may be required to receive these benefits for certain products.

    Self-Help Online Support
    Self-Help Online Support is available throughout a product's lifecycle and for a minimum of 12 months after the product reaches the end of its support. Microsoft online Knowledge Base articles, FAQs, troubleshooting tools, and other resources are provided to help customers resolve common issues.
    Please note: Enrollment in a maintenance program may be required to receive these benefits for certain products.
    (source: http://support.microsoft.com/lifecycle/#tab1)

    Read the article

  • Cash Application Work Queue in Oracle Receivables Release 12.1.1

    - by Robert Story
    Upcoming Webcast
    Title: Cash Application Work Queue in Oracle Receivables Release 12.1.1
    Date: March 24, 2010
    Time: 10:00 am EDT, 7:00 am PDT, 14:00 GMT
    Product Family: E-Business Suite Receivables 12.1.1 Receipts

    Summary: Understand the setups and processes for the Cash Application Work Queue in Release 12.1.1 and learn how to diagnose basic functional issues. This one-hour session is recommended for technical and functional users. We will be covering topics related to processing receipts efficiently, managing the work load of cash application owners and diagnosing issues. Topics will include:
    Description of Cash Application Work Queue
    Setup and Work Queue Process
    Dependencies and Interactions
    Basic Troubleshooting Steps
    A short, live demonstration (only if applicable) and question and answer period will be included. Click here to register for this session.

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • July, the 31 Days of SQL Server DMO’s - Intro

    - by Tamarick Hill
    DMO’s burst onto the SQL Server scene in 2005 and when they did they unlocked a wealth of information. I’ve become a major fan of DMO’s as they tend to simplify my troubleshooting as well as provide me with valuable information about what is going on within the SQL Server engine. I would recommend that those of you who are not familiar with DMO’s take the time to really learn more about them. For those of you who may not be familiar with DMO’s, for the month of July I will be writing about one DMO per day. Don’t get me wrong, I’m no DMO expert or anything like that, but I’ve worked with them enough to feel that I can give you some good information about DMO’s to help you get started with using them. During these blog sessions, I will not be providing you with any complicated queries to solve all of your SQL Server problems that you may or may not have. I will simply be introducing you to various DMO’s and illustrating what type of information they provide. After you learn more about these individually, then you will be able to join whatever DMO’s you need to pull back the information you are seeking. I hope that you all benefit in some form or fashion from my next 31 DMO postings!!! Enjoy!
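    If you want a preview of what is out there before the daily posts start, one simple approach (a query of my own, not something from this series) is to list the DMO's straight from the system catalog:

        -- List every dynamic management view and function available on this instance
        SELECT name, type_desc
        FROM sys.system_objects
        WHERE name LIKE 'dm[_]%'   -- DMO names all start with dm_
        ORDER BY name;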

    Read the article

  • PO Communication in PDF

    - by Robert Story
    Upcoming Webcasts
    Date: March 29, 2010  Time: 2 pm London, 9:00 am EDT, 6:00 am PDT, 13:00 GMT  Click here to register for this session
    Date: March 29, 2010  Time: 9 am London, 4:00 am EDT, 1:00 am PDT, 8:00 GMT  Click here to register for this session
    Product Family: Procurement

    Summary: This one-hour session is recommended for technical and functional users who would like to know about the PO Communication functionality in procurement. Topics will include:
    Introduction to PO PDF communication - 11.5.10
    Key Concepts
    Prerequisites, Scope
    Overview of PDF document generation
    PDF solution overview
    Technical Overview of PDF generation
    Setup steps
    Triggering Points of PDF generation
    PO Output for communication - Concurrent program
    Enter PO form: View Doc
    iSupplier portal/Contracts preview
    Enhancements
    PDF Generation in Custom Layouts
    Attachments in fax communication
    R12 Communication Nontext Attachments through Email
    Customizing templates
    Advantages of PDF communication
    Troubleshooting (Tips)
    A short, live demonstration (only if applicable) and question and answer period will be included.

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 20 (sys.dm_tran_locks)

    - by Tamarick Hill
    The sys.dm_tran_locks DMV is used to return active lock resources on your server. Locking is a mechanism used by SQL Server to protect the integrity of data when you have multiple users that may potentially access the same data at the same time. Let’s run a query against this DMV so we can analyze the results.

    SELECT * FROM sys.dm_tran_locks

    As we can see, there is a lot of lock information returned from this DMV. I will not go into detail about each of the columns returned, but I will touch on the ones that I feel are the most important. The first column in the output is the resource_type column, which tells you the type of lock a particular row represents. It could be a PAGE lock, RID, OBJECT, DATABASE, or several other lock types. The resource_database_id represents the id of the database for a particular lock resource. The resource_lock_partition column represents the ID of a lock partition. When you have a table that is partitioned, locks can be escalated to the partition level before going to a table level lock. The request_mode column gives us information about the type of lock that is being requested. From the screenshots above we see RangeS-S locks, which represent shared range locks, and IS locks, which represent Intent Shared locks. The request_status column displays whether the lock has been granted or whether the lock is waiting to be acquired. The request_session_id shows the session_id that is requesting the lock. This DMV is the best place to go when you need to identify the exact locks that are being held or pending for individual requests. You might need this information when you are troubleshooting severe blocking or deadlocking problems on your server. For more information on this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms190345.aspx Follow me on Twitter @PrimeTimeDBA
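    As a hedged example (the filter and column list are mine, not from the post), you can narrow the output down to just the lock requests that are still waiting:

        -- Show only lock requests that have not been granted yet
        SELECT resource_type,
               resource_database_id,
               resource_associated_entity_id,  -- object, HoBt, or allocation unit the lock protects
               request_mode,                   -- e.g. S, X, IS, IX, RangeS-S
               request_status,                 -- GRANT, CONVERT, or WAIT
               request_session_id
        FROM sys.dm_tran_locks
        WHERE request_status <> 'GRANT'
        ORDER BY resource_database_id, resource_associated_entity_id;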

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-19

    - by Bob Rhubart
    Discussion: Public, Private, and Hybrid Clouds A conversation about the similarities and differences between public, private, and hybrid clouds; the connection between cows, condos, and cloud computing; and what architects need to know in order to take advantage of cloud computing. (OTN ArchBeat Podcast transcript) InfoQ: Current Trends in Enterprise Mobility Interesting infographics that show current developments and major trends in enterprise mobility. Recap: EMEA User Group Leaders Meeting Latvia May 2012 Tom Scheirsen recaps the recent IOUC event in Riga. Oracle Fusion Middleware Summer Camps in Lisbon: Includes Advanced ADF Training by Oracle Product Management This is how IT people deal with the Summertime Blues. Enterprise 2.0 Conference: Building Social Business | Oracle WebCenter Blog Kellsey Ruppel shares a list of E2.0 conference sessions being presented by members of the Oracle community. Linux 6 Transparent Huge Pages and Hadoop Workloads | Structured Data Greg Rahn documents a problem. BPM Standard Edition to start your BPM project "BPM Standard Edition is an entry level BPM offering designed to help organisations implement their first few processes in order to prove the value of BPM within their own organisation." Troubleshooting ADF Security 11g Login Page Failure | Andrejus Baranovskis Oracle ACE Director Andrejus Baranovskis takes a deep dive into one of the most common ADF 11g Security issues. It's Alive! - The Oracle OpenWorld Content Catalog It's what you’ve been waiting for—the central repository for information on sessions, demos, labs, user groups, exhibitors, and more. 5 minutes or less: Indexing Attributes in OID | Andre Correa Fusion Middleware A-Team blogger Andre Correa offers help for those who encounter issues when running searches with LDAP filters against OID (Oracle Internet Directory). Condos and Clouds: Thinking about Cloud Computng by Looking at Condominiums | Pat Helland In part two of the OTN ArchBeat Podcast Public, Private, and Hybrid Clouds, Oracle Cloud chief architect Mark Nelson mentions an analogy by Pat Helland that compares condos to cloud computing. After some digging I found the October 2011 presentation in which Helland explains that analogy. Thought for the Day "I have always found that plans are useless, but planning is indispensable." — Dwight Eisenhower (October 14, 1890 – March 28, 1969) Source: Quotes for Software Engineers

    Read the article

  • Using Oracle Receivables Diagnostics: How To Run, Read & Use To Troubleshoot

    - by Robert Story
    Upcoming Webcast
    Title: Using Oracle Receivables Diagnostics: How To Run, Read & Use To Troubleshoot
    Date: March 31, 2010
    Time: 10:00 am EDT
    Product Family: Receivables Community

    Summary: This one-hour session is recommended for functional users who want to take a more active role in the generation of Diagnostics in Oracle Receivables. This session will provide an overview of how diagnostics are structured and give some tips on how to read/analyze the output as well as some simple troubleshooting tips. Topics will include:
    Review of Diagnostic Catalogs in Release 11i, 12.0.x and 12.1.1
    How to run some of the more popular Receivables Diagnostics
    How to read and analyze diagnostic data
    Examine the log viewer
    A short, live demonstration (only if applicable) and question and answer period will be included. Click here to register for this session.

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • How do I install Apache Tomcat 7?

    - by Anupesh
    The following steps should get Apache Tomcat 7 working. After downloading Apache Tomcat 7, open a terminal and go to the folder that contains the downloaded tar.gz file.

    1. Untar the archive:
       tar -xzvf filename.tar.gz

    2. Move the extracted directory (e.g. apache-Tomcat 7.0.32) into place:
       sudo mv Directory_Name /usr/local/Tomcat7

    3. Set the environment variables for Tomcat 7:
       sudo nano /usr/local/Tomcat7/bin/setenv.sh
       The nano editor will open; add:
       JAVA_HOME=/usr/lib/jvm/java-6-sun
       export CATALINA_OPTS="$CATALINA_OPTS -Xms128m -Xmx1024m -XX:MaxPermSize=256m"
       Then hit Ctrl+X to save the file.

    4. Set the roles, username and password:
       sudo nano /usr/local/Tomcat7/conf/tomcat-users.xml
       <role rolename="manager-gui" />
       <role rolename="manager-script" />
       <role rolename="manager-jmx" />
       <role rolename="manager-status" />
       <user username="admin" password="admin" roles="manager-gui,manager-script,manager-jmx,manager-status"/>
       Then hit Ctrl+X to save the file.

    5. If there is a port conflict, open the server configuration and change the connector port as you want:
       sudo nano /usr/local/Tomcat7/conf/server.xml

    Read the article

  • Setting Up IRM Test Content

    - by martin.abrahams
    A feature of the 11g IRM Server that sometimes gets overlooked is the ability to set up some test content that any IRM user can access to verify that their IRM Desktop can reach the server, authenticate successfully, and render protected content successfully. Such test content is useful for new users, and in troubleshooting scenarios. Here's how to set up some test content... In the management console, go to IRM - Administration - Test Content, as shown. The console will display a list of test content - initially an empty list. Use the Add option to specify the URL of a document or image, and define one or more labels for the test content in whichever languages your users favour. Note that you do not need to seal the image or document in order to use it as test content. Nor do you need to set up any rights for the test content. The IRM Server will handle the sealing and rights assignment automatically such that all authenticated users are authorised to view the test content. Repeat this process for as many different types of content as you would like to offer for test purposes - perhaps a Word document, a PDF document, and an image. To keep things simple the first time I did this, I used the URL of one of the images in the IRM Server's UI - so there was no problem with the IRM Server being able to reach that image. Whatever content you want to use, the IRM Server needs to be able to reach it at the URL you specify. Using Test Content Open a browser and browse to the URL that the IRM Desktop normally uses to access the IRM Server, for example: http://irm11g.oracle.com/irm_desktop If you are not sure, you can find this URL in the Servers tab of the IRM Options dialog. Go to the Test tab, and you will see your test content listed. By opening one of the items, you can verify that your IRM Desktop is healthy and that you can authenticate to the IRM Server.

    Read the article

  • Tools of the Trade

    - by Ajarn Mark Caldwell
    I got pretty excited a couple of days ago when my new laptop arrived. “The new phone books are here!  The new phone books are here!  I’m a somebody!” - Steve Martin in The Jerk It is a Dell Precision M4500 with an Intel i7 Core 2.8 GHZ running 64-bit Windows 7 with a 15.6” widescreen, 8 GB RAM, 256 GB SSD.  For some of you high fliers, this may be nothing to write home about, but compared to the 32–bit Windows XP laptop with 2 GB of RAM and a regular hard disk that I’m coming from, it’s a really nice step forward.  I won’t even bore you with the details of the desktop PC I was first given when I started here 5 1/2 years ago.  Let’s just say that things have improved.  One really nice thing is that while we are definitely running a lean and mean department in terms of staffing, my boss believes in supporting that lean staff with good tools in order to stay lean instead of having to spend even more money on additional employees.  Of course, that only goes so far, and at some point you have to add more people in order to get more work done, which is why we are bringing on-board a new employee and a new contract developer next week.  But that’s a different story for a different time. But the main topic for this post is to highlight the variety of tools that I use in my job and that you might find useful, too.  This is easy to do right now because the process of building up my new laptop from scratch has forced me to assemble a list of software that had to be installed and configured.  Keep in mind as you look through this list that I play many roles in our company.  My official title is Software Engineering Manager, but in addition to managing the team, I am also an active ASP.NET and SQL developer, the Database Administrator, and 50% of the SAN Administrator team.  So, without further ado, here are the tools and some comments about why I use them: Tool Purpose Virtual Clone Drive Easily mount an ISO image as a DVD Drive.  This is particularly handy when you are downloading disk images from Microsoft for your tools. SQL Server 2008 R2 Developer Edition We are migrating all of our active systems to SQL 2008 R2.  Developer Edition has all the features of Enterprise Edition, but intended for development use. SQL Server 2005 Developer Edition (BIDS ONLY) The migration to SSRS 2008 R2 is just getting started, and in the meantime, maintenance work still has to be done on the reports on our SQL 2005 server.  For some reason, you can’t use BIDS from 2008 to write reports for a 2005 server.  There is some different format and when you open 2005 reports in 2008 BIDS, it forces you to upgrade, and they can no longer be uploaded to a 2005 server.  Hopefully Microsoft will fix this soon in some manner similar to Visual Studio now allows you to pick which version of the .NET Framework you are coding against. Visual Studio 2010 Premium All of our application development is in ASP.NET, and we might as well use the tool designed for it. I’ve used a version of Visual Studio going all the way back to VB 6.0 and Visual Interdev. Vault Professional Client Several years ago we replaced Visual Source Safe with SourceGear Vault (then Fortress, and now Vault Pro), and I love it.  It is very reliable with low overhead - perfect for a small to medium size development team.  And being a small ISV, their support is exceptional. 
Red-Gate Developer Bundle with the SQL Source Control update for Vault I first used, and fell in love with, SQL Prompt shortly before Red-Gate bought it, and then Red-Gate’s first release made me love it even more.  SQL Refactor (which has since been rolled into the latest version of SQL Prompt) has saved me many hours and migraine’s trying to understand somebody else’s code when their indenting was nonexistant, or worse, irrational.  SQL Compare has been awesome for troubleshooting potential schema issues between different instances of system databases.  SQL Data Compare helped us identify the cause behind a bug which appeared in PROD but could not be reproduced in a nearly (but not quite exactly) identical copy in UAT.  And the newest tool we are embracing: SQL Source Control.  I blogged about it here (and here, and here) last December.  This is really going to help us keep each developer’s copy of the database in sync with one another. Fiddler Helps you watch the whole traffic stream on web visits.  Haven’t used it a lot, but it did help me track down some odd 404 errors we were finding in our own application logs.  Has some other JavaScript troubleshooting capabilities, but some of its usefulness has been supplanted by the Developer Tools option in IE8. Funduc Search & Replace Find any string anywhere in a mound of source code really, really fast.  Does RegEx searches, if you understand that foreign language.  Has really helped with some refactoring work to pinpoint, for example, everywhere a particular stored procedure is referenced, whether in .NET code or other SQL procedures (which we have in script files).  Provides in-context preview of the search results.  Fantastic tool, and a bargain price. SciTE SciTE is a Scintilla based Text Editor and it is a fantastic, light-weight tool for quickly reviewing (or writing) program code, SQL scripts, and extract files.  It has language-specific syntax highlighting.  I used it to write several batch and CMD programs a year ago, and to examine data extract files for exchanging information with other systems.  Extremely handy are the options to View End of Line and View Whitespace.  Ever receive a file that is supposed to use CRLF as an end-of-line marker, but really only has CRs?  SciTE will quickly make that visible. Infragistics Controls We do a lot of ASP.NET development, and frequently use the WebGrid, WebTab, and date picker controls.  We will likely be implementing the Hierarchical Data Grid soon.  Infragistics has control suites for WebForms, WinForms, Silverlight, and coming soon MVC/JQuery. WinZip - WITH Command-Line add-in The classic compression program with a great command-line interface that allows me to build those CMD (and soon PowerShell) programs for automated compression jobs.  Our versioned Build packages are zip files. XML Notepad Haven’t used this a lot myself, but one of my team really likes it for examining large XML files. LINQPad Again, haven’t used this one a lot, but it was recommended to me for learning and practicing my LINQ skills which will come in handy as we implement Entity Framework. SQL Sentry Plan Explorer SQL Server Show Plan on steroids.  Great for helping you focus on the parts of a large query that are of most importance.  Also great for just compressing the graphical plan into more readable layout. Araxis Merge A great DIFF and Merge tool.  SourceGear provides a great tool called DiffMerge that we use all the time, but occasionally, I like the cross-edit capabilities of Araxis Merge.  
For a while, we also produced DIFF reports in HTML that showed all the changes that occurred between two releases.  This was most important when we were putting out very small, but very important hot fixes on a very politically hot system.  The reports produced by Araxis Merge gave the Director of IS assurance that we were not accidentally introducing ripples throughout the system with our releases. Idera SQL Admin Toolset A great collection of tools including a password checker to help analyze your SQL Server for weak user passwords, a Backup Status tool to quickly scan a large list of servers and databases to identify any that are overdue for backups.  Particularly helpful for highlighting new databases that have been deployed without getting included in your backup processing.  I also like Space Analyzer to keep an eye on disk space consumed by database files. Idera SQL Job Manager This free tool provides a nice calendar view of SQL Server Job Schedules, but to a degree, you also get what you pay for.  We will be purchasing SQL Sentry Event Manager later this year as an even better job schedule reviewer/manager.  But in the meantime, this at least gives me a good view on potential resource conflicts across multiple instances of SQL Server. DBFViewer 2000 I inherited a couple of FoxPro databases that I have to keep an eye on occasionally and have not yet been able to migrate them to SQL Server. Balsamiq Mockups We are still in evaluation-mode on this tool, but I really like it as a quick UI mockup tool that does not require Visual Studio, so someone other than a programmer can do UI design.  The interface looks hand-drawn which definitely has some psychological benefits when communicating to users, too. FeedDemon I have to stay on top of my WAY TOO MANY blog subscriptions somehow.  I may read blogs on a couple of different computers, and FeedDemon’s integration with Google Reader allows me to keep them all in sync.  I don’t particularly like the Google Reader interface, or the fact that it always wanted to mark articles as read just because I scrolled past them.  FeedDemon solves this problem for me, and provides a multi-tabbed interface which is good because fairly frequently one blog will link to something else I want to read, and I can end up with a half-dozen open tabs all from one article. Synergy+ In my office, I run four monitors across two computers all with one mouse and keyboard.  Synergy is the magic software that makes this work. TweetDeck I’m not the most active Tweeter in the world, but when I want to check-in with the Twitterverse, this really helps.  I have found the #sqlhelp and #PoshHelp hash tags particularly useful, and I also have columns setup to make it easy to monitor #sqlpass, #PASSProfDev, and short term events like #sqlsat68.   Whew!  That’s a lot.  No wonder it took me a couple of days to get everything setup the way I wanted it.  Oh, that and actually getting some work accomplished at the same time.  Anyway, I know that is a huge dump of info, and most people never make it here to the end, so for those who did, let me say, CONGRATULATIONS, you made it! I hope you’ll find a new tool or two to make your work life a little easier.

    Read the article

  • Start synergy on boot of Ubuntu 12.04

    - by SwimBikeRun
    I am having problems getting synergy to start on boot of my Ubuntu 12.04 desktop machine. I followed the troubleshooting at "Start synergy on boot?" completely, but I still couldn't get synergy to start on boot up. I tried running my client starter script manually and here is what resulted:

    m@m-o:/etc/synergy$ cat startsynergyc
    #!/bin/bash
    /usr/bin/synergyc -n MWin1 3.22.21.145
    m@m-o:/etc/synergy$ ll
    total 28
    drwxr-xr-x   2 root root  4096 Oct 30 14:29 ./
    drwxr-xr-x 143 root root 12288 Oct 30 14:31 ../
    -rwxr-xr-x   1 root root    54 Oct 30 14:29 startsynergyc*
    -rwxr-xr-x   1 root root    67 Oct 30 14:06 startsynergys*
    -rw-r--r--   1 root root   659 Oct 30 14:06 synergy.conf
    m@m-o:/etc/synergy$ ./startsynergyc
    INFO: Synergy 1.4.14 Client on Linux 3.8.0-32-generic #47~precise1-Ubuntu SMP Wed Oct 2 16:19:35 UTC 2013 x86_64

    It appears that synergy is running. This is exactly what is being told to start up with Ubuntu, assuming I did the lightdm modification correctly. However, I don't see any synergy running, and I certainly can't control the other computer. Btw, I changed my IP in the log from what it actually is. Is this a necessary precaution, or are IPs local to the network? What does knowing my IP allow someone to do?

    Read the article

  • Apache VERY high page load time

    - by Aaron Waller
    My Drupal 6 site has been running smoothly for years but recently has experienced intermittent periods of extreme slowness (10-60 sec page loads): several hours of slowness followed by hours of normal (4-6 sec) page loads. The page always loads with no error, it just sometimes takes forever.

    My setup:
    Windows Server 2003
    Apache/2.2.15 (Win32)
    JRun/4.0
    PHP 5
    MySQL 5.1
    Drupal 6
    ColdFusion 9
    VMware virtual environment
    DMZ behind a corporate firewall
    Traffic: 1-3 hits/sec avg

    Troubleshooting so far:
    No applicable errors in the Apache error log
    No errors in the Drupal event log
    The Drupal devel module shows 242 queries in 366.23 milliseconds, page execution time 2069.62 ms (so it looks like queries and PHP scripts are not the problem)
    No unusually high CPU, memory, or disk IO
    ColdFusion apps and other static pages outside of Drupal also load slowly
    webpagetest.org test shows very high time-to-first-byte

    The problem seems to be with Apache responding to requests, but previously I've only seen this behavior under 100% CPU load. Judging solely by resource monitoring, it looks as though very little is going on. Here is the kicker - roughly half of the site's access comes from our LAN, but if I disable the firewall rule and block access from outside of our network, internal (LAN) access (1000+ devices) is speedy. But as soon as outside access is restored the site is crippled. Apache config? Crawlers/bots? Attackers? I'm at the end of my rope - where should I be looking to determine where the problem lies?

    Read the article

  • Persistence problem when installing USB Ubuntu variant using Windows

    - by Derek Redfern
    I'm part of a project called One2One2Go - we're developing a Live USB Ubuntu variant for use in schools in Somerville, MA. We have the project files compiled into an iso, and when installed using the native Ubuntu Startup Disk Creator, the USB works fine. When installed using a tool on a Windows machine (LiLi at linuxliveusb.com or liveusb-creator at fedorahosted.org/liveusb-creator), the USB works, but does not have persistence. This happens even if the creator is specifically set to allocate an area for persistent files. When comparing files on sticks created in Windows or Ubuntu, the one file that is different is syslinux/syslinux.cfg. I have printed the contents of the file below:

    Installed on Windows:
    DEFAULT nomodset
    LABEL debug
    menu label ^debug
    kernel /casper/vmlinuz
    append boot=casper xforcevesa initrd=/casper/initrd.gz --
    LABEL nomodset
    menu label ^nomodset
    kernel /casper/vmlinuz
    append boot=casper quiet splash nomodset initrd=/casper/initrd.gz --
    LABEL memtest
    menu label ^Memory test
    kernel /install/memtest
    append -
    LABEL hd
    menu label ^Boot from first hard disk
    localboot 0x80
    append -
    PROMPT 0
    TIMEOUT 1

    Installed on Ubuntu:
    DEFAULT nomodset
    LABEL debug
    menu label ^debug
    kernel /casper/vmlinuz
    append noprompt cdrom-detect/try-usb=true persistent boot=casper xforcevesa initrd=/casper/initrd.gz --
    LABEL nomodset
    menu label ^nomodset
    kernel /casper/vmlinuz
    append noprompt cdrom-detect/try-usb=true persistent boot=casper quiet splash nomodset initrd=/casper/initrd.gz --
    LABEL memtest
    menu label ^Memory test
    kernel /install/memtest
    append -
    LABEL hd
    menu label ^Boot from first hard disk
    localboot 0x80
    append -
    PROMPT 0
    TIMEOUT 1

    For troubleshooting reasons, I changed the Windows-created syslinux.cfg to match the one created by Ubuntu and persistence worked on it. I think the problem is that on the stick created by Windows, there is no "persistent" flag, but I don't know why. Is this a problem with our disk image or with the creator? How would I go about fixing this problem? Thanks in advance for your help. Derek Redfern

    Read the article

  • OCS 2007 Access Edge Server Certificate issue

    - by BWCA
    We are currently building additional OCS 2007 R2 Access Edge Servers to handle additional capacity. We ran into a SSL certificate issue when we were setting up the servers. Before running the steps to Deploy an Edge Server, we successfully imported our SSL certificate that we use for external access on all of the new servers. After successfully completing the first three Deploy Edge Server steps on one of the new servers, we started working on Step 4: Configure Certificates for the Edge Server. After selecting Assign an existing certificate from the common tasks list and clicking Next to select a certificate, there were no certificates listed as shown below. The first thing we did was to use the Certificates mmc snap-in to review the SSL certificate information. We noticed in the General tab that Windows does not have enough information to verify this certificate, and in the Certification Path that the issuer of the certificate could not be found, for the SSL certificate that we imported successfully earlier. While troubleshooting, we learned that we could not access the URL for the certificate’s CRL to download the CRL file, due to restrictive firewall rules between the new OCS 2007 R2 Access Edge Servers and the Internet. After modifying the firewall rules, we were able to download the CRL file, and when we reran Step 4 to assign an existing certificate, the certificate was listed.

    Read the article

  • Copy wrongs and Copyright

    - by Tony Davis
    Recently, a Chinese blog website copied, wholesale and without permission, a Simple-Talk article on troubleshooting locking and blocking. Our initial reaction was exasperation and anger, tempered slightly by the fact that there was, at the top, a clear link to the original, and the book from which it was extracted. On the day the copy was posted, our original article saw a 30K spike in visits, so the site clearly has a substantial following! This made us pause for thought. Indeed, we wondered whether it might not be more profitable, and certainly more enjoyable, to notify the offender of similar content and serve a "put up" notice, rather than the usual DMCA "take down" . The DMCA request, issued to protect our and our authors' assets, is a necessary but tiresome, chore. So often, simple communication and negotiation could have averted the need for it. We are, after all, in the business of presenting knowledge, information and help to the SQL Server Community. If only they had asked! Of course, one's attitude changes according to the motivation behind the copying of content. One of the motivations seems to be pure vanity; they do it to try to enhance their CV, or their company's expertise, by pretending to expertise they don't possess. There is a class of plagiariser, however, that is doing it purely for money, getting advertising revenue by attracting hapless readers to their site. Not content with stealing content, sites can invest in services that provide 'load-testing' for websites that is so realistic that even the search engines can be fooled. Stolen content, fake visitors, swindled advertisers. Zero-tolerance is really the only way of dealing with plagiarism, and action will only be completely effective once Bing, Google, and the other search engines strike out from their listings the rogue sites that refuse to take down plagiarised content. It is, after all in everyone else's interests. Cheers, Tony.

    Read the article

  • My first encounter with SmartAssembly

    - by Peter Larsson
    Let me start by saying that I am a supreme VB6 programmer, but I have very little experience with VB.Net, so I think I still need some more time learning SmartAssembly. SmartAssembly makes obfuscating and merging dll files a piece of cake! With its simple, straightforward and clean GUI I did make my tests work. With other obfuscators like Xenocode, Salamander etc., which let you (and in some cases force you to) control more advanced settings, you really have to know what you are doing, especially when it comes to protecting code that uses external dependencies. My most annoying experience is that if you start checking radio buttons and activating different obfuscating features in SmartAssembly, you will end up breaking your working code as well, if you, like me, are not that experienced and don't know what you're doing. SmartAssembly has some troubleshooting information on their website which explains why the application will fail in some scenarios. So why not extend these checks into some deeper analysis stage on the dlls? By doing that, I think more people could get fully functional dlls out of the box instead of trying different settings and then testing the protected dll to see if it's working or not. //Peter

    Read the article

  • hdmi AC-3 audio broke after upgrading from 11.10 to 12.04.3

    - by Jim LastName
    I just updated my Mythbuntu 11.10 to 12.04.3. Now, when I try to play 5.1 content (a ripped DVD), my TV (and receiver) plays a "chattering" sound. I check my receiver and the Dolby Digital light isn't on--it's in PCM mode. So either the audio is getting sent as AC-3 but the TV and receiver think it's PCM, or the AC-3 audio got converted to multichannel PCM and they can't handle it. My setup: HDMI cable from the HTPC to the TV; the TV has an S/PDIF output to my receiver. I know the TV sends AC-3 audio out correctly because I see the Dolby Digital light come on when I view a digital TV channel and the PCM light come on when I view an old analog channel. I can connect S/PDIF from my HTPC to my receiver and the Dolby Digital light comes on and it can decode the audio just fine. It's just not sending it right over HDMI. Now for some hints to the issue: I noticed in MythTV audio setup, when I select alsa:hdmi...., the description only lists 2-channel PCM audio capability. speaker-test -Dhdmi:PCH -c6 errors out about a bad channel count (only -c2 works). Finally, I tried vlc and it makes the same chattering sound. These all make me think this isn't a MythTV issue; it's something lower than that. I think the best way to troubleshoot this is to start at the drivers and check each layer, one at a time, all the way to alsa. I just don't know what the layers are and how to do it. So I need to find some audio troubleshooting guide to assist me. Or, if one doesn't exist, I'd appreciate some steps. Thanks much, Jim
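    In case it helps anyone working bottom-up through the same layers, a rough check of the HDMI audio path might look like the sketch below; the card/device numbers and the hdmi:PCH name are assumptions taken from this post and may differ on other hardware:
      # Confirm which card/device the HDMI output is (note the numbers for later)
      aplay -l | grep -i hdmi
      # If the driver exposes it, dump the ELD data the TV/receiver advertises over HDMI
      cat /proc/asound/card0/eld#* 2>/dev/null
      # Plain 2-channel PCM test straight at the HDMI device (adjust 0,3 to match aplay -l)
      speaker-test -D plughw:0,3 -c 2 -t sine
      # Only once 2-channel PCM works is it worth retesting 5.1/AC-3 from MythTV or vlc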

    Read the article

  • What's the difference between General Ledger Transfer Program, Create Accounting and Submit Accounting?

    - by Oracle_EBS
    In Release 12, the General Ledger Transfer Program is no longer used. Use Create Accounting or Submit Accounting instead. Submit Accounting spawns the Revenue Recognition Process; the Create Accounting program does not. So if you create transactions with rules, you would want to run the Submit Accounting process to spawn Revenue Recognition to create the distribution rows, which Create Accounting is then spawned to process to the GL.
    Create Accounting vs. Submit Accounting:
      Short Name for Concurrent Program: Create Accounting - XLAACCPB; Submit Accounting - ARACCPB
      Specific to Receivables: Create Accounting - No; Submit Accounting - Yes
      Runs Revenue Recognition automatically: Create Accounting - No; Submit Accounting - Yes
      Can be run real-time for one Transaction/Receipt at a time: Create Accounting - Yes; Submit Accounting - No
      Programs spawned by Create Accounting: 1) XLAACCPB module: Create Accounting 2) XLAACCUP module: Accounting Program 3) GLLEZL module: Journal Import
      Programs spawned by Submit Accounting: 1) ARTERRPM module: Revenue Recognition Master Program 2) ARTERRPW module: Revenue Recognition with parallel workers - could be numerous 3) ARREVSWP module: Revenue Contingency Analyzer 4) XLAACCPB module: Create Accounting 5) XLAACCUP module: Accounting Program 6) GLLEZL module: Journal Import
    Keep in mind, reports owned by application 'Subledger Accounting' cannot be seen when running the report from a Receivables responsibility. You may want to request your sysadmin to attach the following SLA reports/programs to your AR responsibility, as you will need these for your AR closing process:
      XLAPEXRPT: Subledger Period Close Exception Report - shows transactions in status final, incomplete and unprocessed.
      XLAGLTRN: Transfer Journal Entries to GL - transfers transactions in final status and manually created transactions to GL.
    To add reports/programs owned by application 'Subledger Accounting' (Subledger Period Close Exception Report and Transfer Journal Entries to GL), add them to the request group as follows. Let's use Subledger Accounting report XLATBRPT: Open Account Balances Listing Report as an example.
      Responsibility: System Administrator
      Navigation: Security > Responsibility > Define
      Query the name of your Receivables responsibility and note the Request Group (i.e. Receivables All)
      Navigation: Security > Responsibility > Request
      Query the Request Group
      Go to the Request Zone and click on Add Record
      Enter the following: Type: Program; Name: Open Account Balances Listing
      Save
      Responsibility: Receivables Manager
      Navigation: Control > Requests > Run
      In the list of values you should now see the 'Open Account Balances Listing' report
    References:
      Note: 748999.1 How to add reports for application subledger accounting to receivables responsibility
      Note: 759534.1 R12 ARGLTP General Ledger Transfer Program Errors Out
      Note: 1121944.1 Understanding and Troubleshooting Revenue Recognition in Oracle Receivables

    Read the article

  • Oracle Access Manager 11.1.2 Certified with E-Business Suite 12

    - by Elke Phelps (Oracle Development)
    I am happy to announce that Oracle Access Manager 11gR2 (11.1.2) is now certified with E-Business Suite Releases 12.0.6 and 12.1. If you are implementing single sign-on for the first time, or are an existing Oracle Access Manager user, you may integrate with Oracle Access Manager 11gR2 using Oracle Access Manager WebGate and Oracle E-Business Suite AccessGate.
    Supported Architecture and Release Versions:
      Oracle Access Manager 11.1.2
      Oracle E-Business Suite Release 12.0.6, 12.1.1+
      Oracle Identity Management 11.1.1.5, 11.1.1.6
      Oracle Internet Directory 11.1.1.6
      Oracle WebLogic Server 10.3.0.5+
    What's New In This Oracle Access Manager 11gR2 Integration?
      Simplified integration: We've simplified the instructions and cut the number of pages, while adding clarity to the steps.
      Automation of configuration steps: We've automated some of the required configuration steps. This is the first phase of automation and diagnostics that are part of our roadmap for this integration.
      Use of default OAM Login page: We are reducing the required troubleshooting by delivering the default OAM Login page for the integration. A custom login page can still be created by using Oracle Access Manager.
      Use of the Detached Credential Collector in a Demilitarized Zone: We have certified the Detached Credential Collector as part of a DMZ configuration. This will enhance the security of the underlying Oracle Access Manager and E-Business Suite components, which will now be required only within a company's intranet.
    Choosing the Right Architecture
    Our previously published blog article and support note with single sign-on recommended and certified integration paths have been updated to include Oracle Access Manager 11gR2: Overview of Single Sign-On Integration Options for Oracle E-Business Suite (Note 1388152.1)
    Other References:
      Integrate with Oracle Access Manager 11gR2 (11.1.2) using Oracle E-Business Suite AccessGate (Note 1484024.1)
      Overview of Single Sign-On Integration Options for Oracle E-Business Suite (Note 1388152.1)
    Related Articles:
      Understanding Options for Integrating Oracle Access Manager with E-Business Suite
      Why Does E-Business Suite Integration with OAM Require Oracle Internet Directory?
      In-Depth: Using Third-Party Identity Managers with E-Business Suite Release 12

    Read the article

  • Management Reporter Installation – Lessons Learned Part II - Dynamics GP

    - by Ryan McBee
    After feeling pretty good about my deployment skills with Management Reporter for Dynamics GP a few weeks ago, I ran into two additional lessons learned that I wanted to share. First, on another new deployment, I got an error which says "An error occurred while creating the database. View the installation log for additional information." This problem initially pointed me to KB 2406948, which did not provide resolution. After several hours of troubleshooting, I found there is an issue if the default database locations in SQL Server are set to the root of a drive. You will want to set the default to something like the following to get it installed: C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA. My default database locations for the data and log files were indeed sitting on the H:\ and I:\ drives. To change this property in your SQL Server instance, open SQL Server Management Studio, right-click on the server, choose Properties, and then Database Settings. When I initially got the error, I briefly considered creating the ManagementReporter database by hand, but experience tells me that would have created more headaches down the road. The second problem I ran into with this particular deployment of Management Reporter happened when I started the FRx conversion utility. The error reads "The 'Microsoft.ACE.OLEDB.12.0' provider is not registered on the local machine." I had a suspicion that this error was related to the fact that FRx uses outdated technology and I happened to be on a new install of Server 2008 R2. A knowledge base search quickly pointed me to KB 2102486. The resolution for this Management Reporter issue was to install the Microsoft Access Database Engine Redistributable from the site below. http://www.microsoft.com/downloads/details.aspx?familyid=C06B8369-60DD-4B64-A44B-84B371EDE16D&displaylang=en
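    For anyone who hits the same database-creation error, the instance defaults can also be checked from a command prompt before launching the installer. A sketch only, assuming a default SQL Server 2008 instance; the registry path and value names vary by instance name and version, and the values may be absent if the defaults were never changed:
      rem Default data and log locations for the default SQL Server 2008 instance
      reg query "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQLServer" /v DefaultData
      reg query "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQLServer" /v DefaultLog
      rem If either value is the root of a drive (e.g. H:\), change it in SSMS under
      rem Server Properties > Database Settings before installing Management Reporter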

    Read the article

  • Audio not working

    - by user3215
    Could anybody help me troubleshoot an audio problem on Ubuntu 9.04 desktop edition? For some reason I have to keep this OS from being upgraded, and I have been trying to fix the audio problem on it for months. It works well on upgraded versions (9.10, 10.04) but not on Jaunty.
    aplay -l:
      **** List of PLAYBACK Hardware Devices ****
      card 0: Intel [HDA Intel], device 0: ALC883 Analog [ALC883 Analog]
        Subdevices: 0/1
        Subdevice #0: subdevice #0
      card 0: Intel [HDA Intel], device 1: ALC883 Digital [ALC883 Digital]
        Subdevices: 1/1
        Subdevice #0: subdevice #0
    lsmod | grep snd:
      snd_hda_intel 436148 7
      snd_pcm_oss 46336 0
      snd_mixer_oss 22656 1 snd_pcm_oss
      snd_pcm 83076 4 snd_hda_intel,snd_pcm_oss
      snd_seq_dummy 10756 0
      snd_seq_oss 37760 0
      snd_seq_midi 14336 0
      snd_rawmidi 29696 1 snd_seq_midi
      snd_seq_midi_event 15104 2 snd_seq_oss,snd_seq_midi
      snd_seq 56880 6 snd_seq_dummy,snd_seq_oss,snd_seq_midi,snd_seq_midi_event
      snd_timer 29704 2 snd_pcm,snd_seq
      snd_seq_device 14988 5 snd_seq_dummy,snd_seq_oss,snd_seq_midi,snd_rawmidi,snd_seq
      snd 62756 21 snd_hda_intel,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_seq_oss,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
      soundcore 15200 1 snd
      snd_page_alloc 16904 2 snd_hda_intel,snd_pcm
    cat /proc/asound/cards:
      0 [Intel ]: HDA-Intel - HDA Intel
        HDA Intel at 0xe1280000 irq 16
    cat /proc/asound/version:
      Advanced Linux Sound Architecture Driver Version 1.0.18rc3.
    vim /etc/modules:
      # /etc/modules: kernel modules to load at boot time.
      #
      # This file contains the names of kernel modules that should be loaded
      # at boot time, one per line. Lines beginning with "#" are ignored.
      lp
    Audio Settings:
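    Not an answer, but two quick things worth ruling out on Jaunty with an ALC883 codec before digging into the driver; the model value below is only a guess (the right one depends on the exact board) and the modprobe.d file name may differ on 9.04:
      # Make sure the relevant channels are not simply muted or at zero
      amixer set Master 80% unmute
      amixer set PCM 80% unmute
      # Try forcing a codec model for snd-hda-intel, then reboot
      echo "options snd-hda-intel model=3stack-6ch" | sudo tee /etc/modprobe.d/alsa-options.conf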

    Read the article

  • Keep your Root Authorities up to date

    - by John Breakwell
    Originally posted on: http://geekswithblogs.net/Plumbersmate/archive/2013/06/20/keep-your-root-authorities-up-to-date.aspx
    By default, Windows will automatically update its internal list of trusted root authorities as long as the Update Root Certificates function is installed. This should be enabled by default and takes manual intervention to remove. With this component enabled, the following happens: if you are presented with a certificate issued by an untrusted root authority, your computer will contact the Windows Update Web site to see if Microsoft has added the CA to its list of trusted authorities. If it has been added to the Microsoft list of trusted authorities, its certificate will automatically be added to your trusted certificate store. If the component is not installed and a certificate from an untrusted CA is encountered, a certificate warning is shown instead. This is an inconvenience for the person browsing the site, as they need to click to continue. Applications, though, will be unable to proceed and will throw an exception. Example: ERROR_WINHTTP_SECURE_FAILURE 12175 (0x00002F8F) One or more errors were found in the Secure Sockets Layer (SSL) certificate sent by the server. If you look at the certificate's properties, you can see the "Issued by:" value. This must match a Trusted Root Certificate Authority in the current user's certificate store. So turn on automatic updating of trusted root authority certificates. For Windows Vista and above, this option is controlled through Group Policy; see the "To Turn Off the Update Root Certificates Feature by Using Group Policy" section of the following Technet article: Certificate Support and Resulting Internet Communication in Windows Vista. If Windows Update is a blocked site, then download and deploy the latest pack of root certificates from Microsoft: Update for Root Certificates For Windows XP [May 2013] (KB931125). Failing that, find a machine that has the latest root certificates installed and export them from there:
      Open up the Certificates console.
      Right-click the required Trusted Root Certificate Authority certificate.
      Choose Export from "All Tasks" to open up the Certificate Export Wizard.
      Choose an export file format – DER should be fine.
      Provide a file name and complete the export.
      Move the file to the machine that's missing the certificate.
      Right-click the file and choose "Install Certificate" to open up the Certificate Import Wizard.
      Allow the wizard to automatically select the certificate store and complete the import.
    On a side note, for troubleshooting certificate issues it can be helpful to clear the SSL state.
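    The same export and import can also be scripted with certutil, which may be quicker when several machines are missing the root. A sketch only; "Root" is the machine's Trusted Root Certification Authorities store, and the certificate id and file name are illustrative:
      rem On a machine that already trusts the CA: list the store, then export the certificate
      certutil -store Root
      certutil -store Root "IssuerCommonNameOrSerial" exported-root.cer
      rem On the machine missing it: import into the Trusted Root store (run elevated)
      certutil -addstore Root exported-root.cer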

    Read the article

  • What are Information Centers?

    - by user12244613
    Information Centers are similar to product pages in the Oracle Sun System Handbook. Many customers like the Oracle Sun System Handbook concept of a home page with all the product attributes, troubleshooting, etc. accessible from a single home page. This concept is now available for a range of Oracle Solaris, Systems, and Storage products. The Information Center for each product covers areas such as: Overview, Hot Topics, Patching and Maintenance. The Information Center pages are dynamically generated each night to ensure the latest content is available to you. Here are the top Solaris, Systems, and Storage Information Centers: Oracle Explorer Data Collector Oracle Solaris 10 Live Upgrade Oracle Solaris 11 Booting Information Center Oracle Solaris 11 Desktop and Graphics Information Center Oracle Solaris 11 Image Packaging System (IPS) Information Center Oracle Solaris 11 Installation Information Center Oracle Solaris 11 Product Information Center Oracle Solaris 11 Security Information Center Oracle Solaris 11 System Administration Information Center Oracle Solaris 11 Zones Information Center Oracle Solaris Crash Analysis Tool(SCAT) - Information Center Oracle Solaris Cluster Information Center Oracle Solaris Internet Protocol Multipathing (IPMP) Information Center Oracle Solaris Live Upgrade Information Center Oracle Solaris ZFS Information Center Oracle Solaris Zones Information Center CMT T1000/T2000 and Netra T2000 CMT T5120/T5120/T5140/T5220/T5240/T5440 Systems M3000/M4000/M5000/M8000/M9000-32/M9000-64 Management and Diagnostic Tools for Oracle Sun Systems Netra CT410/810 and Netra CT900 Network-Attached Storage (NAS) Oracle Explorer Data Collector Oracle VM Server for SPARC (LDoms) Pillar Axiom 600 SL3000 Tape Library Sun Disk Storage Patching and Updates Sun Fire 3800/4800/4810/6800/E2900/E4900/E6900/V1280 - Netra 1280/1290 Sun Fire 12K/15K/E20K/E25K Sun Fire X4270 M2 Server Sun x86 Servers T3 and T4 Systems Tape Domain Firmware V210/V240/V440/V215/V245/V445 Servers VSM (VTSS/VLE/VTCS)

    Read the article
