Search Results

Search found 26808 results on 1073 pages for 'storing information'.

Page 82/1073 | < Previous Page | 78 79 80 81 82 83 84 85 86 87 88 89  | Next Page >

  • Use Enterprise Manager Cloud Control to monitor OBIEE 11.1.1.7.x Dashboards

    - by Torben Hein -Oracle
    (in via Senthil) If your OBIEE 11.1.1.7.x is set up in the following way:
      - The OBIEE repository is an Oracle Database and is set up as a data warehouse
      - Usage tracking is enabled in OBIEE. (For information on how to enable usage tracking in OBIEE, refer to the following link: Setting Up Usage Tracking in Oracle BI 11g)
      - The OBIEE instance is discovered in EM Cloud Control. (For information on how to discover an OBIEE instance in Cloud Control, refer to the following link: Discovering Oracle Business Intelligence Instance and Oracle Essbase Targets)
      - The OBIEE repository is discovered in EM Cloud Control. (For information on how to discover an Oracle database, refer to the following link: Discovering, Promoting, and Adding Database Targets)

    then we've got news for you: KM Article: OBIEE 11g: How To Diagnose Slowly Performing Dashboards using Enterprise Manager Cloud Control (Doc ID 1668236.1) takes you step by step through monitoring the SQL query performance behind your OBIEE dashboard.

    This diagnostic approach...
      - will help you piece together information on BI dashboard performance, e.g. processing time from the different layers of the BI system including the repository.
      - should enable you to get to the bottom of slow dashboards by using the wealth of information available in EM Cloud Control on OBIEE and Oracle DB.
      - will NOT fix any performance issues on its own, but will help identify bottlenecks while processing dashboard requests.

    (layout and post: Torben, authorized: Lia)

    Read the article

  • Accenture Foundation Platform for Oracle

    - by Michelle Kimihira
    Accenture Foundation Platform for Oracle (AFPO) helps clients accelerate deployments on Oracle Fusion Middleware. Version 5 is the latest release and adds exciting new capabilities, including integration with Oracle Application Development Framework (ADF) Mobile and Oracle WebCenter. For more information, visit Accenture's website.

    Additional Information
    Product Information on Oracle.com: Oracle Fusion Middleware
    Follow us on Twitter and Facebook
    Subscribe to our regular Fusion Middleware Newsletter

    Read the article

  • WoW lua: Getting quest attributes before the QUEST_DETAIL event

    - by Matt DiTrolio
    I'd like to determine the attributes of a quest (i.e., information provided by functions such as QuestIsDaily and IsQuestCompletable) before the player clicks on the quest detail. I'm trying to write an add-on that handles accepting and completing of daily quests with a single click on the NPC, but I'm running into a problem whereby I can't find out anything about a given quest unless the quest text is currently being displayed, defeating the purpose of the add-on. Other add-ons of this nature seem to be getting around this limitation by hard-coding information about quests, an approach I don't much like as it requires constant maintenance. It seems to me that this information must be available somehow, as the game itself can properly figure out which icon to display over the head of the NPC without player interaction. The only question is: are add-on authors allowed access to this information? If so, how?

    EDIT: What I originally left out was that the situations I'm trying to address are when:
      - An NPC has multiple quests
      - The quest detail is not the first thing that shows up upon right-click

    Otherwise, the situation is much simpler, as I have the information I need provided immediately.

    Read the article

  • WebLogic Server–Use the Execution Context ID in Applications–Lessons From Hansel and Gretel

    - by james.bayer
    I learned a neat trick this week.  Don’t let your breadcrumbs go to waste like Hansel and Gretel did!  Keep track of the code path, logs and errors for each request as they flow through the system.

    Earlier this week an OTN forum post in the WLS – General category by Oracle Ace John Stegeman asked how to retrieve the Execution Context ID so that it could be used on an error page that a user could provide to a help desk, or use to check with application administrators so they could look up what went wrong.

    What is the Execution Context ID (ECID)?  Fusion Middleware injects an ECID as a request enters the system and it stays with the request as it flows from Oracle HTTP Server to Oracle Web Cache to multiple WebLogic Servers to the Oracle Database. It’s a way to uniquely identify a request across tiers.  According to the documentation: "The value of the ECID is a unique identifier that can be used to correlate individual events as being part of the same request execution flow. For example, events that are identified as being related to a particular request typically have the same ECID value.  The format of the ECID string itself is determined by an internal mechanism that is subject to change; therefore, you should not have or place any dependencies on that format."

    The novel idea that I see John had was to extend this concept beyond the diagnostic information that is captured by Fusion Middleware.  Why not also use this identifier in your logs and errors so you can correlate even more information together!  Your logging might already identify the user, so why not identify the request so you can filter down even more.  All you need to do inside of WebLogic Server to get ahold of this information is invoke DiagnosticContextHelper:

        weblogic.diagnostics.context.DiagnosticContextHelper.getContextId()

    This class has other helpful methods to see other values tracked by the diagnostics framework too.  This way I can see even more detail and get information across tiers.

    In performance profiling, this can be very handy to track down where time is being spent in code.  I’ve blogged and made videos about this before.  JRockit Flight Recorder can use the WLDF Diagnostic Volume in WLS 10.3.3+ to automatically capture and correlate lots of helpful information for each request without installing any special agents and with the out-of-the-box JRockit and WLS settings!  You can see here how information is displayed in JRockit Flight Recorder about a single request as it calls a Servlet, which calls an EJB, which gets a DB connection, which starts a transaction, etc.  You can get timings around everything and even see the SQL that is used. http://download.oracle.com/docs/cd/E21764_01/web.1111/e13714/using_flightrecorder.htm#WLDFC480 Recent versions of the WLS console are also able to visualize this data, so it works with other JVMs besides JRockit when you turn on WLDF instrumentation.

    I wrote a little sample application that verified to myself that the ECID did actually cross JVM boundaries.  I invoked a Servlet in one JVM, which acted as an EJB client to a Stateless Session Bean running in another JVM.  Each call returned the same ECID.  You need to turn on WLDF Instrumentation for this to work, otherwise the framework returns null.  I’m glad John put me on to this API as I have some interesting ideas on how to correlate some information together.
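    For instance, here is a minimal sketch of the error-page idea. The servlet wrapper and message wording are my own illustration; only the DiagnosticContextHelper call comes from the post, and it returns null unless WLDF instrumentation is enabled:

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import weblogic.diagnostics.context.DiagnosticContextHelper;

        public class ErrorPageServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                // Returns null unless WLDF instrumentation is enabled on the server.
                String ecid = DiagnosticContextHelper.getContextId();
                resp.setContentType("text/plain");
                resp.getWriter().println("An error occurred. Please quote this"
                        + " reference to the help desk: "
                        + (ecid != null ? ecid : "unavailable"));
            }
        }

    If the same string is also written into your application logs, the help-desk reference and the server-side diagnostics line up on one identifier.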

    Read the article

  • Server Side Developer Prerequisites

    - by Jking
    I am new to server-side development and am currently learning node.js. What sort of networking information should I be familiar with to allow for a smooth learning curve with server-side development? Could anyone provide resources pertaining to the information required to get into server programming? To give you a better idea of my standpoint:
      - I do not know how a server interacts with a database. [Q: How does a NoSQL database, or a database in general, communicate with a server?]
      - I am unsure of how a web stack works. [Q: I have heard of LAMP but do not know how Apache, MySQL, and PHP interact. Hopefully this applies to other stacks as well. How do the components of a stack work together? Also, is a MEAN stack an alternative, or is it completely irrelevant to this?]
      - I have trivial knowledge of Internet protocols [however extremely inefficient]. [Q: What resources are beneficial when learning about networking, and how much/what knowledge should I acquire to program on the server side?]
      - I am unsure of what I am unsure of concerning the networking information necessary to start development.

    Information on how the client-server model works would be greatly appreciated.

    Read the article

  • WARNING Retrying Bulk Insert for file:sqlldr due to Communication Error:256

    - by user702295
    I am running my engine on Linux and am receiving an intermittent message "WARNING Retrying bulk insert for file: sqlldr due to communication Error: 256". The engine seems to have completed successfully, but it is not clear if this error caused some of the forecast to not complete. It is also not clear what caused the error.

    Generally, if you see only the WARNING, it means that subsequent retries of the same load request eventually succeeded, and so the run as a whole is not affected. To find out more about what happened, look for .log/.bad files left in the engine's bin directory, or possibly a quote of them within the specific engine log that had the issue. The sqlnet.log file may also have some information about it, and at the database server side there may be some log/alert regarding what happened: look at the alert.log. In general it could be that the database server/network was overloaded at the time and the connection was rejected/failed/aborted, either due to specific settings on concurrent connections/sessions or inadvertently due to a glitch in network/os/hardware. If this repeats and becomes more frequent during the run, you should look further into it as mentioned above.

    You can also track this using either SQL*Trace or java.util.logging.
      - Globally enable logging by setting the oracle.jdbc.Trace system property: java -Doracle.jdbc.Trace=true
      - Client Side Tracing: your SQLNET.ORA file should contain the following lines to produce a client side trace file:
            trace_level_client = 10
            trace_unique_client = on
            trace_file_client = sqlnet.trc
            trace_directory_client = <path_to_trace_dir>
      - Server Side Tracing: to enable server side tracing, use the following parameters:
            trace_level_server = 10
            trace_file_server = server.trc
            trace_directory_server = <path_to_trace_dir>

    Tracing Levels: the following values can be used for TRACE_LEVEL* parameters:
        16 or SUPPORT — WorldWide Customer Support trace information
        10 or ADMIN — Administration trace information
        4 or USER — User trace information
        0 or OFF — no tracing (the default)

    Additional information is readily available via the web.
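    As a companion to the -Doracle.jdbc.Trace flag above, here is a minimal sketch of wiring up java.util.logging for the driver. The oracle.jdbc logger name and the debug-jar requirement are my assumptions based on Oracle's JDBC diagnosability conventions, not something stated in this post:

        import java.util.logging.FileHandler;
        import java.util.logging.Level;
        import java.util.logging.Logger;
        import java.util.logging.SimpleFormatter;

        public class JdbcTraceSetup {
            public static void main(String[] args) throws Exception {
                // Same switch as "java -Doracle.jdbc.Trace=true", set programmatically.
                // Assumption: the debug ojdbc jar is on the classpath; the production
                // jar is compiled without the logging calls.
                System.setProperty("oracle.jdbc.Trace", "true");

                // Route the driver's logger to a file at a verbose level.
                Logger jdbcLogger = Logger.getLogger("oracle.jdbc");
                jdbcLogger.setLevel(Level.FINE);
                FileHandler handler = new FileHandler("jdbc-trace.log");
                handler.setFormatter(new SimpleFormatter());
                jdbcLogger.addHandler(handler);

                // ... open connections and run the failing load here ...
            }
        }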

    Read the article

  • Webcast Tomorrow: Securing the Cloud for Public Sector

    - by Darin Pendergraft
    Click here to register for the live webcast.

    Cloud computing offers government organizations tremendous potential to enhance public value by helping organizations increase operational efficiency and improve service delivery. However, as organizations pursue cloud adoption to achieve the anticipated benefits, a common set of questions has surfaced: "Is the cloud secure? Are all clouds equal with respect to security and compliance? Is our data safe in the cloud?"

    Join us December 12th for a webcast as part of the "Secure Government Training Series" to get answers to your pressing cloud security questions and learn how to best secure your cloud environments. You will learn about a comprehensive set of security tools designed to protect every layer of an organization's cloud architecture, from application to disk, while ensuring high levels of compliance, risk avoidance, and lower costs. Discover how to control and monitor access, secure sensitive data, and address regulatory compliance across cloud environments by:
      - providing strong authentication, data encryption, and (privileged) user access control to ensure that information is only accessible to those who need it
      - mitigating threats across your databases and applications
      - protecting applications and information – no matter where it is – at rest, in use and in transit

    For more information, access the Secure Government Resource Center or, to speak with an Oracle representative, please call 1.800.ORACLE1.

    LIVE Webcast: Securing the Cloud for Public Sector
    Date: Wednesday, December 12, 2012
    Time: 2:00 p.m. ET

    Visit the Secure Government Resource Center: click here for information on enterprise security solutions that help government safeguard information, resources and networks. ACCESS NOW

    Copyright © 2012, Oracle. All rights reserved.

    Read the article

  • What is new in Oracle SOA Suite 11g R1 PS6? by Shanny Anoep

    - by JuergenKress
    Oracle has released a new version, 11.1.1.7.0, of their Oracle Fusion Middleware product line. This version includes Patch Set #6 (PS6) for Oracle SOA Suite 11g R1, with a big list of improvements and fixes for each component in that suite. In this post we will highlight some of the interesting updates with regards to troubleshooting, performance, reliability and scalability.

    Infrastructure/Purging scripts
    Database growth is a common problem for large-scale Oracle SOA Suite deployments. Oracle already provides multiple purging strategies for the SOA Suite runtime database. This patch set includes two new scripts for purging most of the runtime data:
      - Table Recreation Script (TRS): this script can be used to reclaim as much database space as possible, while still retaining the open instances. It can be used as a corrective action for databases that grew excessively, for example when purging was not performed at all. This should be used as a single corrective action only; the script does not replace the normal purging scripts.
      - Truncate script: removes all records from the SOA Suite runtime tables without dropping the tables. This script can be used for cloning SOA Suite environments without copying the instance data, or for recreating test scenarios by cleaning all the runtime data.
    The Oracle SOA Suite Administrator's guide contains a table with the available purging strategies.

    Diagnostic dumps
    Using WLST you could already dump diagnostic information about various components of the SOA Suite. This version adds support to retrieve more information on BPEL and Adapters from the command line.

    Diagnostic dumps for BPEL
    New diagnostic dumps are available for BPEL to get information on thread pools, average processing time for BPEL components, and average waiting times for asynchronous instances. This information can be very useful for performance analysis or troubleshooting. With WLST this information can be retrieved from the command line and included for monitoring or reporting. Read the full article here.

    SOA & BPM Partner Community
    For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.
    Blog Twitter LinkedIn Facebook Wiki Mix Forum
    Technorati Tags: SOA Suite PS6,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • Microsoft Codename Dallas

    - by kaleidoscope
    Dallas is Microsoft’s Information Service offering, which allows developers and information workers to find, acquire and consume published datasets and web services. Users subscribe to datasets and web services of interest and can integrate the information into their own applications via a standardized set of APIs. Data can also be analyzed online using the Dallas Service Explorer or externally using the Power Pivot Add-In for Excel. We can explore all the datasets and subscribe to the catalog for using the data.

    Dallas Developer Portal: https://www.sqlazureservices.com
    More information can be found at: http://channel9.msdn.com/learn/courses/Azure/Dallas/IntroToDallas/Overview/

    Lokesh, M

    Read the article

  • Integrating with Oracle Fusion Applications: Discovering Integration Artifacts

    - by Lionel Dubreuil
    Oracle Enterprise Repository serves as the core element of the Oracle SOA Governance solution. An industry-leading metadata repository, Oracle Enterprise Repository provides a solid foundation for delivering governance throughout the service-oriented architecture (SOA) lifecycle by acting as the single source of truth for information surrounding SOA assets and their dependencies. For Fusion Applications, the use of OER has been extended to include other integration asset types such as interface tables, and other technical information such as data models, tables, views, lookups, profile options, et cetera. E-Business Suite users familiar with iRepository or eTRM will recognize the functionality in Fusion Applications OER.

    Oracle Enterprise Repository for Fusion Applications provides a common catalog of technical information, searchable using many different mechanisms. Customers can locate technical information by the name, description or keyword of the information they are looking for. They can also search by the type of asset they are trying to locate and/or where the asset sits in the product taxonomy. They can also see how the asset dances in the choreography of some illustrative co-existence scenarios. These scenarios are laid out as both functional flow diagrams and technical interaction diagrams.

    Rajesh Raheja, software architect at Oracle, has recently posted an article on this topic: visibility and control are the key tenets of SOA governance, and the first step in integrating with Oracle Fusion Applications is to find out what integration options are available. Oracle Enterprise Repository, an industry-leading metadata repository, provides this visibility. You can find his full blog post here.

    Read the article

  • Top Fusion Apps User Experience Guidelines & Patterns That Every Apps Developer Should Know About

    - by ultan o'broin
    We've announced the availability of the Oracle Fusion Applications user experience design patterns. Developers can get going on these using the Design Filter Tool (or DeFT) to select the best pattern for their context of use. As you drill into the patterns you will discover more guidelines from the Applications User Experience team, and some from the Rich Client User Interface team too, that are also leveraged in Fusion Apps. All are based on the Oracle Application Development Framework components. To accelerate your Fusion apps development and tailoring, here's some inside insight into the really important patterns and guidelines that every apps developer needs to know about. They start at a broad Fusion Apps information architecture level and then become more granular at the page and task level.

    Information Architecture: These guidelines explain how the UI of an Oracle Fusion application is constructed. This enables you to understand where the changes that you want to make fit into the overall application's information architecture. Begin with the UI Shell and Navigation guidelines, and then move onto page-level design using the Work Areas and Dashboards guidelines.
      - UI Shell Guideline
      - Navigation Guideline
      - Introduction to Work Areas Guideline
      - Dashboards Guideline

    Page Content: These patterns and guidelines cover the most common interactions used to complete tasks productively, beginning with the core interactions common across all pages, and then moving onto task-specific ones.

    Core Across All Pages:
      - Icons Guideline
      - Page Actions Guideline
      - Save Model Guideline
      - Messages Pattern Set
      - Embedded Help Pattern Set

    Task Dependent:
      - Add Existing Object Pattern Set
      - Browse Pattern Set
      - Create Pattern Set
      - Detail on Demand Pattern Set
      - Editing Objects Pattern Set
      - Guided Processes Pattern Set
      - Hierarchies Pattern Set
      - Information Entry Forms Pattern Set
      - Record Navigation Pattern Set
      - Transactional Search and Results Pattern Group

    Now, armed with all this great insider information, get developing some great-looking, highly usable apps! Let me know in the comments how things go!

    Read the article

  • Oracle Knowledge Courses

    - by mseika
    Oracle Knowledge products offer simple and convenient ways for users to access knowledge contained in corporate information stores. With Oracle Knowledge Training, you learn how to utilize tools that improve customer service and satisfaction by helping customers find more relevant answers to questions online or from a service agent guided by a scalable knowledge management platform. The following courses have been scheduled at Oracle in Utrecht:

    Oracle Knowledge Overview Rel 8.5 (1 day)
    Learn the technical architecture of Oracle Knowledge at a high level and the key technologies including InfoCenter, iConnect, Search, Information Manager, Answerflow and Analytics.
    Dates: to be scheduled

    Knowledge Technical Architecture and Configuration Rel 8.5 (5 days)
    Learn to implement and maintain Oracle Knowledge’s core technologies through hands-on exercises including Intelligent Search, Information Manager, iConnect, AnswerFlow and Analytics.
    Dates: 13-17 January 2014 (afternoon/evening)
    Location: Live Virtual Class

    Knowledge Content Administration Rel 8.5 (2 days)
    Learn to implement, use and manage knowledge and content creation with Oracle Knowledge Information Manager.
    Dates: 4-5 December 2013
    Location: Utrecht, The Netherlands

    Knowledge Analytics Rel 8.5 (1 day)
    Learn KPI analyses and how to close gaps using reports and tools provided in Oracle Business Intelligence Enterprise Edition.
    Dates: 6 December
    Location: Utrecht, The Netherlands

    Remember: your OPN discount is always applied to the standard prices shown on the Oracle University web pages. For assistance in booking, scheduling requests and more information, contact the Education Service Desk:
    eMail: [email protected]
    Telephone: +31 30 66 27 675

    Read the article

  • Theory Of A Weird Thought - Forms Submission

    - by user2738336
    In theory, suppose you were to open two computers, perfectly synced together, on a website that has a form. This form has fields where, say for example, the username has to be unique. Assume both computers have the same information on the form, and, in theory, the submit button is pressed at the same time; these two computers have the exact same build and internet speed and the same response time from the server. Whose information would be submitted to the database, and whose information would be denied, given that the username field is unique?
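    One way to see why exactly one submission must win: the database serializes the two INSERTs against the unique constraint no matter how precisely the clients were synchronized, so one commits and the other receives a constraint violation. A minimal sketch in Java, against a hypothetical users table with a UNIQUE constraint on username (all names are illustrative):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.sql.SQLIntegrityConstraintViolationException;

        public class RegisterUser {
            // Whichever INSERT the database commits first wins; the loser gets a
            // unique-constraint violation regardless of how closely the two
            // submissions were timed.
            public static boolean register(Connection conn, String username)
                    throws SQLException {
                String sql = "INSERT INTO users (username) VALUES (?)";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setString(1, username);
                    ps.executeUpdate();
                    return true;   // this request won the race
                } catch (SQLIntegrityConstraintViolationException e) {
                    return false;  // the other request got there first
                }
            }
        }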

    Read the article

  • Siemens AG, Sector Healthcare, Increases Transparency and Improves Customer Loyalty with Web Portal Solution

    - by Kellsey Ruppel
    CUSTOMER AND PARTNER INFORMATION
    Customer Name – Siemens AG, Sector Healthcare
    Customer Revenue – 73,515 Billion Euro (2011, Siemens AG total)
    Customer Quote – “The realization of our complex requirements within a very short amount of time was enabled through the competent implementation partner Sapient, who fully used the very broad scope of standard functionality provided in the Oracle WebCenter Portal, and the management of customer services, who continuously supported the project setup.” – Joerg Modlmayr, Project Manager, Healthcare Customer Service Portal, Siemens AG

    The Siemens Healthcare Sector is one of the world's largest suppliers to the healthcare industry and a trendsetter in medical imaging, laboratory diagnostics, medical information technology and hearing aids. Siemens offers its customers products and solutions for the entire range of patient care from a single source – from prevention and early detection to diagnosis, and on to treatment and aftercare. By optimizing clinical workflows for the most common diseases, Siemens also makes healthcare faster, better and more cost-effective.

    To ensure greater transparency, increased efficiency, higher user acceptance, and additional services, Siemens AG, Sector Healthcare, replaced several existing legacy portal solutions that could not meet the company’s future needs with Oracle WebCenter Portal. Various existing portal solutions that cannot meet future demands will be successively replaced by the new central service portal, which will also allow for the efficient and intuitive implementation of new service concepts. With Oracle, doctors and hospitals using Siemens medical solutions now have access to a central information portal that provides important information and services at just the push of a button.

    Customer Name – Siemens AG, Sector Healthcare
    Customer URL – www.siemens.com
    Customer Headquarters – Erlangen, Germany
    Industry – Industrial Manufacturing
    Employees – 360,000

    Challenges –
      - Replace disparate medical service portals to meet future demands and eliminate an unnecessarily high level of administrative work caused by heterogeneous installations
      - Ensure portals meet current user demands to improve user-acceptance rates and increase number of total users
      - Enable changes and expansion through standard functionality to eliminate the need for reliance on IT and reduce administrative efforts and associated high costs
      - Ensure efficient and intuitive implementation of new service concepts for all devices and systems
      - Enable hospitals and clinics to transparently monitor and measure services rendered for the various medical devices and systems
      - Increase electronic interaction and expand services to achieve a higher level of customer loyalty

    Solution –
      - Deployed Oracle WebCenter Portal to ensure greater transparency, and as a result, a higher level of customer loyalty
      - Provided a centralized platform for doctors and hospitals using Siemens’ medical technology solutions that provides important information and services at the push of a button
      - Reduced significantly the administrative workload by centralizing the solution in the new customer service portal
      - Secured positive feedback from customers involved in the pilot program developed by design experts from Oracle partner Sapient; the interfaces were created with customer needs in mind, and the first survey taken shortly after implementation came back with 2.4 points on a scale of 0-3 in the category “customer service portal intuitiveness level”
      - Met all requirements, including alignment with the Siemens Style Guide, without extensive programming
      - Implemented additional services via the portal, such as benchmarking options, to ensure the optimal use of the Customer Device Park
      - Provided the option to document all services rendered in conjunction with the medical technology systems, so that the value of the services is transparent for the decision makers in the hospitals
      - Saved and stored all machine data from approximately 100,000 remote systems in the central service and information platform
      - Provided the option to register errors online and follow the call status in real time on the portal
      - Made available at the push of a button all information on the medical technology devices used in hospitals or clinics – from security checks and maintenance activities to current device statuses
      - Provided PDF-format Service Performance Reports that summarize information from periods of time ranging from previous weeks up to one year, meeting medical product law requirements

    Why Oracle – Siemens AG favored Oracle for many reasons; however, the company ultimately decided to go with Oracle due to the enormous range of functionality the solutions offered for the healthcare sector. “We are not programmers; we are service providers in the medical technology segment and focus on the contents of the portal. All the functionality necessary for internet-based customer interaction is already standard in Oracle WebCenter Portal, which is a huge plus for us. Having Oracle as our technology partner ensures that the product will continually evolve, providing a strong technology platform for our customer service portal well into the future,” said Joerg Modlmayr, project manager, Healthcare Customer Service Portal, Siemens AG.

    Partner Involvement – Siemens AG selected Oracle Partner Sapient because the company offered a service portfolio that perfectly met Siemens’ requirements and had a wealth of experience implementing Oracle WebCenter Portal. Additionally, Sapient had designers with a very high level of expertise in usability – an aspect that Siemens considered to be of vast importance for the project. “The Sapient team completely met all our expectations. Our tightly timed project was completed on schedule, and the positive feedback from our users proves that we set the right measures in terms of usability – all thanks to the folks at Sapient,” Modlmayr said.

    Partner Name – Sapient GmbH Deutschland
    Partner URL – www.sapient.com

    Read the article

  • Checking who is connected to your server, with PowerShell.

    - by Fatherjack
    There are many occasions when, as a DBA, you want to see who is connected to your SQL Server, along with how they are connecting and what sort of activities they are carrying out. I’m going to look at a couple of ways of getting this information and compare the effort required and the results achieved of each.

    SQL Server comes with a couple of stored procedures to help with this sort of task – sp_who and its undocumented counterpart sp_who2. There is also the pumped-up version of these called sp_whoisactive, written by Adam Machanic, which does way more than these procedures. I wholly recommend you try it out if you don’t already know how it works. When it comes to serious interrogation of your SQL Server activity then it is absolutely indispensable. Anyway, back to the point of this blog: we are going to look at getting the information from sp_who2 for a remote server.

    I wrote this PowerShell script a week or so ago and was quietly happy with it for a while. I’m relatively new to PowerShell so forgive both my rather low threshold for entertainment and the fact that something so simple is a moderate achievement for me.

        $Server = 'SERVERNAME'
        $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server
        # connection and query stuff
        $ConnectionStr = "Server=$Server;Database=Master;Integrated Security=True"
        $Query = "EXEC sp_who2"
        $Connection = new-object system.Data.SQLClient.SQLConnection
        $Table = new-object "System.Data.DataTable"
        $Connection.connectionstring = $ConnectionStr
        try{
            $Connection.open()
            $Command = $Connection.CreateCommand()
            $Command.commandtext = $Query
            $result = $Command.ExecuteReader()
            $Table.Load($result)
        }
        catch{
            # Show error
            $error[0] | format-list -Force
        }
        $Title = "Data access processes (" + $Table.Rows.Count + ")"
        $Table | Out-GridView -Title $Title
        $Connection.close()

    So this is pretty straightforward: create an SMO object that represents our chosen server, define a connection to the database and a table object for the results when we get them, execute our query over the connection, load the results into our table object and then, if everything is error free, display these results in the PowerShell grid viewer. The query simply gets the results of 'EXEC sp_who2' for us. How many connections there are will influence how long the query runs. The grid viewer lets me sort and search the results, so it can be a pretty handy way to locate troublesome connections.

    Like I say, I was quite pleased with this. It seems a pretty simple script and was working well for me; I have added a few parameters to control the output and give me more specific details. But then I saw a script that uses the $SMOServer object itself to provide the process information, saving me from having to define the connection object and query specifications.

        $Server = 'SERVERNAME'
        $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server
        $Processes = $SMOServer.EnumProcesses()
        $Title = "SMO processes (" + $Processes.Rows.Count + ")"
        $Processes | Out-GridView -Title $Title

    Create the SMO object of our server and then call the EnumProcesses method to get all the process information from the server. Staggeringly simple! The results are a little different though. Some columns are the same and we can see the same basic information, so my first thought was to test which runs faster – so that I can get my results more quickly and also so that I place less stress on my server(s). PowerShell comes with a great way of testing this – the Measure-Command function.
    All you have to do is wrap your piece of code in Measure-Command {[your code here]} and it will spit out the time taken to execute the code. So, I placed both of the above methods of getting SQL Server process connections in two Measure-Command wrappers and pressed F5! The PowerShell console goes blank for a while as the code is executed internally when Measure-Command is used, but the grid viewer windows appear and the console shows the timing output. You can take the output from Measure-Command and format it for easier reading, but in a simple comparison like this we can simply cross-reference the TotalMilliseconds values from the two result sets to see how the two methods performed. The query execution method (running EXEC sp_who2) is the first set of timings and the SMO EnumProcesses is the second. I have run these on a variety of servers, and while the results vary from execution to execution, I have never seen the SMO version slower than the other. The difference has varied, and the time for both has ranged from sub-second to almost 5 seconds on other systems. This difference, I would suggest, is partly due to the cost overhead of having to construct the data connection and so on, whereas the SMO EnumProcesses method has the connection to the server already in place and just needs to call back the process information. There is also the difference in the data sets to consider. Let’s take a look at what we get and where the two methods differ.

        sp_who2 column | SMO EnumProcesses column | Description
        - | Urn | What looks like an XML or JSON representation of the server name and the process ID
        SPID | Spid | The process ID
        Status | Status | The status of the process
        Login | Login | The login name of the user executing the command
        HostName | Host | The name of the computer where the process originated
        BlkBy | BlockingSpid | The SPID of a process that is blocking this one
        DBName | Database | The database that this process is connected to
        Command | Command | The type of command that is executing
        CPUTime | Cpu | The CPU activity related to this process
        DiskIO | - | The Disk IO activity related to this process
        LastBatch | - | The time the last batch was executed from this process
        ProgramName | Program | The application that is facilitating the process connection to the SQL Server
        SPID1 | - | In my experience this is always the same value as SPID
        REQUESTID | - | In my experience this is always 0
        - | Name | In my experience this is always the same value as SPID and so could be seen as analogous to SPID1 from sp_who2
        - | MemUsage | An indication of the memory used by this process, but I don’t know what it is measured in (bytes, Kb, Mb…)
        - | IsSystem | True or False depending on whether the process is internal to the SQL Server instance or has been created by an external connection requesting data
        - | ExecutionContextID | In my experience this is always 0, so could be analogous to REQUESTID from sp_who2

    Please note, these are my own very brief descriptions of these columns; detail can be found on MSDN for columns in the sp_who results here: http://msdn.microsoft.com/en-GB/library/ms174313.aspx. Where the columns are common I would use that description; in other cases the information returned is purely for interpretation by the reader. Rather annoyingly, both result sets have useful information that the other doesn’t. sp_who2 returns Disk IO and LastBatch information, which is really useful, but the SMO processes method gives you IsSystem and MemUsage, which have their place in fault diagnosis methods too. So which is better?
    On reflection I think I prefer to use the sp_who2 method primarily, but knowing that the SMO EnumProcesses method is there when I need it is really useful, and I’m sure I’ll use it regularly. I’m OK with the fact that it is the slower method, because Measure-Command has shown me how close it is to the other option and that it really isn’t a large enough margin to matter.

    Read the article

  • Ubuntu 14.04 : Lost my sound randomly tried a few commands and I think I failed

    - by Marc-Antoine Théberge
    I lost my sound the other day, and I tried to delete pulseaudio and then reinstall; then I tried to delete it and install alsa. It did not work and I had to reinstall everything; overall a bad idea... now I can't have any sound. Should I do a fresh install? I don't know how to boot a USB drive with GRUB... Here's my sysinfo:

    System information report, generated by Sysinfo: 2014-05-28 05:45:58
    http://sourceforge.net/projects/gsysinfo

    SYSTEM INFORMATION
    Running Ubuntu Linux, the Ubuntu 14.04 (trusty) release.
    GNOME: 3.8.4 (Ubuntu 2014-03-17)
    Kernel version: 3.13.0-27-generic (#50-Ubuntu SMP Thu May 15 18:08:16 UTC 2014)
    GCC: 4.8 (i686-linux-gnu)
    Xorg: 1.15.1 (16 April 2014 01:40:08PM)
    Hostname: mark-laptop
    Uptime: 0 days 11 h 43 min

    CPU INFORMATION
    GenuineIntel, Intel(R) Atom(TM) CPU N270 @ 1.60GHz
    Number of CPUs: 2
    CPU clock currently at 1333.000 MHz with 512 KB cache
    Numbering: family(6) model(28) stepping(2)
    Bogomips: 3192.13
    Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 xtpr pdcm movbe lahf_lm dtherm

    MEMORY INFORMATION
    Total memory: 2007 MB
    Total swap: 1953 MB

    STORAGE INFORMATION
    SCSI device - scsi0
    Vendor: ATA
    Model: ST9160310AS

    HARDWARE INFORMATION

    MOTHERBOARD
    Host bridge: Intel Corporation Mobile 945GSE Express Memory Controller Hub (rev 03)
    Subsystem: ASUSTeK Computer Inc. Device 8340
    PCI bridge(s):
    Intel Corporation NM10/ICH7 Family PCI Express Port 1 (rev 02) (prog-if 00 [Normal decode])
    Intel Corporation NM10/ICH7 Family PCI Express Port 2 (rev 02) (prog-if 00 [Normal decode])
    Intel Corporation NM10/ICH7 Family PCI Express Port 4 (rev 02) (prog-if 00 [Normal decode])
    Intel Corporation 82801 Mobile PCI Bridge (rev e2) (prog-if 01 [Subtractive decode])
    ISA bridge: Intel Corporation 82801GBM (ICH7-M) LPC Interface Bridge (rev 02)
    Subsystem: ASUSTeK Computer Inc. Device 830f
    IDE interface: Intel Corporation 82801GBM/GHM (ICH7-M Family) SATA Controller [IDE mode] (rev 02) (prog-if 80 [Master])
    Subsystem: ASUSTeK Computer Inc. Device 830f

    GRAPHIC CARD
    VGA controller: Intel Corporation Mobile 945GSE Express Integrated Graphics Controller (rev 03) (prog-if 00 [VGA controller])
    Subsystem: ASUSTeK Computer Inc. Device 8340

    SOUND CARD
    Multimedia controller: Intel Corporation NM10/ICH7 Family High Definition Audio Controller (rev 02)
    Subsystem: ASUSTeK Computer Inc. Device 831a

    NETWORK
    Ethernet controller: Qualcomm Atheros AR8121/AR8113/AR8114 Gigabit or Fast Ethernet (rev b0)
    Subsystem: ASUSTeK Computer Inc. Device 8324

    Read the article

  • Database Security: The First Step in Pre-Emptive Data Leak Prevention

    - by roxana.bradescu
    With WikiLeaks raising awareness around information leaks and the harm they can cause, many organizations are taking stock of their own information leak protection (ILP) strategies in 2011. A report by IDC on data leak prevention stated:

    "Increasing database security is one of the most efficient and cost-effective measures an organization can take to prevent data leaks. By utilizing the data protection, access control, account management, encryption, log management, and other security controls inherent in the database management system, entities can institute first-level control over the widest range of protected information. As a central repository for unstructured data, which is growing at leaps and bounds, the database should be the first layer providing information leakage protection."

    Unfortunately, most organizations are not taking sufficient steps to protect their databases, according to a survey of the Independent Oracle User Group. For example, in most organizations any operating system administrator or database administrator can access all the data stored in the database, without any kind of auditing or monitoring. And it's not just administrators: database users can typically access the database with ad-hoc query tools from their desktop and bypass any application-level controls.

    Despite numerous regulations calling for controls to limit the powers of insiders, most organizations still put too many privileges in the hands of their employees. Time and time again these excess privileges have backfired. Internal agents were implicated in almost half of data breaches according to the Verizon Data Breach Investigations Report, and the rate is rising. Hackers also took advantage of these excess privileges very successfully using stolen credentials and SQL injection attacks.

    But back to the insiders. Who are these insiders and why do they do it? In 2002, U.S. Secret Service (USSS) behavioral psychologists and CERT information security experts formed the Insider Threat Study team to examine insider threat cases that occurred in US critical infrastructure sectors, and examined them from both a technical and a behavioral perspective. A series of fascinating reports has been published as a result of this work. You can learn more by watching the ISSA Insider Threat Web Conference.

    So as your organization starts to look at data leak prevention over the coming year, start off by protecting your data at the source - your databases. IDC went on to say:

    "Any enterprise looking to improve its competitiveness, regulatory compliance, and overall data security should consider Oracle's offerings, not only because of their database management capabilities but also because they provide tools that are the first layer of information leak prevention."

    Learn more about Oracle Database Security solutions and get the whitepapers, demos, tutorials, and more that you need to protect data privacy from internal and external threats.

    Read the article

  • How to design a leaderboard?

    - by PeterK
    This sounds like an easy thing, but when I consider the following:
      - many players
      - some have played many games and some just started
      - different types of statistics

    ...on what information should the actual ranking be based?

    I am planning to display the board in a UITableView, so there is limited space available per player. However, I am not bound to the UITableView if there is a better solution. This is a quiz game, and the information I am currently capturing per player is:
      - # games played totally
      - # games played per game type (the current version has only one game type)
      - # questions answered
      - # correct answers

    Maybe I should include additional information. I have been thinking about having a leaderboard property page where the player can decide on what basis the leaderboard should display information, but I would like to avoid the complexity in that. However, if that is needed I will do it. Anyone who can give me some advice on how to design the presentation of this would be highly appreciated.
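    On the ranking-basis part of the question, one possible scheme, as a minimal sketch in Java (the class shape and the minimum-games threshold are hypothetical): rank qualified players by answer accuracy, with games played as a tiebreak, so a newcomer with one lucky answer does not outrank a veteran.

        import java.util.Comparator;

        class Player {
            String name;
            int gamesPlayed;
            int questionsAnswered;
            int correctAnswers;

            double accuracy() {
                return questionsAnswered == 0
                        ? 0.0
                        : (double) correctAnswers / questionsAnswered;
            }
        }

        class LeaderboardComparator implements Comparator<Player> {
            private static final int MIN_GAMES = 10;  // hypothetical threshold

            @Override
            public int compare(Player a, Player b) {
                boolean aQualified = a.gamesPlayed >= MIN_GAMES;
                boolean bQualified = b.gamesPlayed >= MIN_GAMES;
                if (aQualified != bQualified) {
                    return aQualified ? -1 : 1;  // qualified players rank first
                }
                int byAccuracy = Double.compare(b.accuracy(), a.accuracy());
                return byAccuracy != 0
                        ? byAccuracy
                        : Integer.compare(b.gamesPlayed, a.gamesPlayed);
            }
        }

    Sorting the player list with this comparator gives one row per player, which fits the limited space of a UITableView-style display.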

    Read the article

  • Get Info From Database, or Build Inferred Info?

    - by Zaemz
    Does it make more sense to store and retrieve properties or information directly related to an item in a database, or, in a case where a product's ID itself describes information about it, should the information be derived from that?

    Example: Item SKU - 4HBU12
      - 4 - the number of motors
      - H - the voltage
      - B - the color, blue
      - U - the model
      - 12 - the length

    Should I store those individual attributes as well as the SKU, or should I store only the SKU and build the attributes from it?
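    For illustration, a minimal sketch (in Java; the question is language-agnostic) of building the attributes from the example SKU layout above. The field positions follow the question; the class shape is hypothetical:

        // Field positions follow the example in the question: 4HBU12 ->
        // motors=4, voltage=H, color=B, model=U, length=12.
        class SkuAttributes {
            final int motors;
            final char voltageCode;
            final char colorCode;
            final char modelCode;
            final int length;

            SkuAttributes(String sku) {
                if (sku == null || sku.length() < 5) {
                    throw new IllegalArgumentException("SKU too short: " + sku);
                }
                this.motors = Character.getNumericValue(sku.charAt(0));
                this.voltageCode = sku.charAt(1);
                this.colorCode = sku.charAt(2);
                this.modelCode = sku.charAt(3);
                this.length = Integer.parseInt(sku.substring(4));
            }
        }

    The trade-off the sketch makes visible: the SKU format becomes a de facto schema, so any change to the numbering scheme breaks every derived attribute, which is the usual argument for storing the attributes explicitly as well.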

    Read the article

  • Upcoming Customer WebCast: Adapters and JCA Transport in Oracle Service Bus 11g

    - by MariaSalzberger
    There is an upcoming webcast planned for September 19th that will show how to implement services using a JCA adapter in Oracle Service Bus 11g. The session will help you utilize existing resources like samples and information centers for adapters in the context of Oracle Service Bus. Topics covered in the webcast are:
      - JCA Transport overview / inbound and outbound scenarios using JCA adapters
      - Implementation of an end-to-end use case using an inbound file adapter and an outbound database adapter in Oracle Service Bus
      - How to find information on supported adapters in a certain version of OSB 11g
      - Available adapter samples for OSB and SOA
      - How to use SOA adapter samples for Oracle Service Bus
      - A live demo of an adapter sample implementation in Oracle Service Bus
      - Information Centers for adapters and Oracle Service Bus

    The presentation recording can be found here after the webcast. Select "Oracle Fusion Middleware" as product. (https://support.oracle.com/rs?type=doc&id=740966.1) The schedule for future webcasts can be found in the above-mentioned document as well.

    Read the article

  • Controlling soft errors and false alarms in SSIS

    - by Jim Giercyk
    If you are like me, you dread the 3AM wake-up call.  I would say that the majority of the pages I get are false alarms.  The alerts that require action often require me to open an SSIS package, see where the trouble is and try to identify the offending data.  That can be very time-consuming and can take quite a chunk out of my beauty sleep.  For those reasons, I have developed a simple error handling scenario for SSIS which allows me to rest a little easier.  Let me first say, this is a high-level discussion; getting into the nuts and bolts of creating each shape is outside the scope of this document, but if you have an average understanding of SSIS, you should have no problem following along.

    In the Data Flow above you can see that there is a caution triangle.  For the purpose of this discussion I am creating a truncation error to demonstrate the process, so this is to be expected.  The first thing we need to do is to redirect the error output.  Double-clicking on the Query shape presents us with the properties window for the input.  Simply set the columns that you want to redirect to Redirect Row in the dropdown box and hit Apply.  Without going into a dissertation on error handling, I will just note that you can decide which errors you want to redirect on Error and on Truncation.  Therefore, to override this process for a column or condition, simply do not redirect that column or condition.

    The next thing we want to do is to add some information about the error; specifically, the name of the package which encountered the error and which step in the package wrote the record to the error table.  REMEMBER: If you redirect the error output, your package will not fail, so you will not know where the error record was created without some additional information.  I added 3 columns to my error record: Severity, Package Name and Step Name.  Severity is just a free-form column that you can use to note whether an error is fatal, whether the package is part of a test job and should be ignored, etc.  Package Name and Step Name are system variables.

    In my package I have created a truncation situation, where the firstname column is 50 characters in the input, but only 4 characters in the output.  Some records will pass without truncation; others will be sent to the error output.  However, the package will not fail.  We can see that of the 14 input rows, 8 were redirected to the error table.  This information can be used by another step or another scheduled process, or triggered, to determine whether an alert should be sent.  It can also be used as a historical record of the errors that are encountered over time.  There are other system variables that might make more sense in your infrastructure, so try different things.  Date and time seem like something you would want in your output, for example.

    In summary, we have redirected the error output from an input, added derived columns with information about the errors, and inserted the information and the offending data into an error table.  The error table information can be used by another step or process to determine, based on the error information, what level alert must be sent.  This will eliminate false alarms, and give you a head start when a genuine error occurs.

    Read the article

  • Statistics for and Details About Open Source Swing Projects

    - by user592704
    I'm looking for process-related information on open-source Swing projects:
      - how the task was described
      - how many developers were involved
      - how much time the solution took
      - etc.

    Are there any open source (online) chronicles in that direction? I strongly prefer projects that include the authors' names. I watched this project and it seems fine, but still I couldn't find any information concerning its current project task(s), its developer group, or any chronicles (tips, milestones, feedback, etc.). For example, if I see this swing component, I'd like to know the above information.

    Read the article

  • Oracle ADF Mobile Video Series: End-to-End Mobile Application Development Experience

    - by Michelle Kimihira
    Today's video demonstrates how to create an ADF Mobile application and deploy it to a device, all within 10 minutes! We will show you the key aspects of how to quickly and declaratively create an on-device mobile application and get it running on an actual device.

    Additional Information
    Product Information on OTN: ADF Mobile
    Product Information on Oracle.com: Oracle Fusion Middleware
    Follow us on Twitter and Facebook
    Subscribe to our regular Fusion Middleware Newsletter

    Read the article

  • Feature Usage Reporting in Early Access Programs

    After doing Web development, you can get very used to the luxury of having basic information about your users' machines and browsers. With their permission, you can also get the same information from an application, and can even get more targeted anonymous information that will tell you how the features are used. Kevin explains how this can be used with early access builds to improve the reliability and usability of applications.
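    As a rough illustration of the idea, here is a minimal opt-in usage counter sketched in Java (every name is hypothetical; a real early-access build would batch, anonymize, and upload the snapshot through its own reporting channel):

        import java.util.HashMap;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.atomic.LongAdder;

        // Hypothetical opt-in feature-usage counter: counts feature invocations
        // in memory and exposes a snapshot for the next usage report.
        public class FeatureUsage {
            private final boolean userOptedIn;
            private final Map<String, LongAdder> counts = new ConcurrentHashMap<>();

            public FeatureUsage(boolean userOptedIn) {
                this.userOptedIn = userOptedIn;
            }

            public void record(String featureName) {
                if (!userOptedIn) {
                    return;  // nothing is recorded without the user's permission
                }
                counts.computeIfAbsent(featureName, k -> new LongAdder()).increment();
            }

            // Snapshot to attach to the next usage report.
            public Map<String, Long> snapshot() {
                Map<String, Long> out = new HashMap<>();
                counts.forEach((k, v) -> out.put(k, v.sum()));
                return out;
            }
        }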

    Read the article

< Previous Page | 78 79 80 81 82 83 84 85 86 87 88 89  | Next Page >