Search Results

Search found 25783 results on 1032 pages for 'enterprise service bus'.


  • Lightweight use of Enterprise Library TraceListeners

    - by gWiz
    Is it possible to use the Enterprise Library 4.1 TraceListeners without using the entire Enterprise Library Logging Application Block? I'd prefer to simply use .NET Diagnostics tracing, but would like to set up a listener that sends emails on Error events. I figured I could use the Enterprise Library EmailTraceListener. However, my initial attempts to configure it have failed. Here's what I hoped would work:
        <system.diagnostics>
          <trace autoflush="false" />
          <sources>
            <source name="SampleSource" switchValue="Verbose" >
              <listeners>
                <add name="textFileListener" />
                <add name="emailListener" />
              </listeners>
            </source>
          </sources>
          <sharedListeners>
            <add name="textFileListener" type="System.Diagnostics.TextWriterTraceListener"
                 initializeData="..\trace.log" traceOutputOptions="DateTime">
              <filter type="System.Diagnostics.EventTypeFilter" initializeData="Verbose" />
            </add>
            <add name="emailListener"
                 type="Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.EmailTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging"
                 toAddress="[email protected]" fromAddress="[email protected]" smtpServer="mail.example.com" >
              <filter type="System.Diagnostics.EventTypeFilter" initializeData="Verbose" />
            </add>
          </sharedListeners>
        </system.diagnostics>
    However, I get:
        [ArgumentException: The parameter 'address' cannot be an empty string. Parameter name: address]
        System.Net.Mail.MailAddress..ctor(String address, String displayName, Encoding displayNameEncoding) +1098157
        System.Net.Mail.MailAddress..ctor(String address) +8
        Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.EmailMessage.CreateMailMessage() +256
        Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.EmailMessage.Send() +39
        Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.EmailTraceListener.Write(String message) +96
        System.Diagnostics.TraceListener.WriteHeader(String source, TraceEventType eventType, Int32 id) +184
        System.Diagnostics.TraceListener.TraceEvent(TraceEventCache eventCache, String source, TraceEventType eventType, Int32 id, String format, Object[] args) +63
        System.Diagnostics.TraceSource.TraceEvent(TraceEventType eventType, Int32 id, String format, Object[] args) +198
        System.Diagnostics.TraceSource.TraceInformation(String message) +14
    which leads me to believe the .NET tracing code ignores the "non-standard" config attributes I've supplied for emailListener. I also tried adding the appropriate Logging Application Block configSection declaration and:
        <loggingConfiguration>
          <listeners>
            <add toAddress="[email protected]" fromAddress="[email protected]" smtpServer="mail.example.com"
                 type="Microsoft.Practices.EnterpriseLibrary.Logging.TraceListeners.EmailTraceListener, Microsoft.Practices.EnterpriseLibrary.Logging"
                 name="emailListener" />
          </listeners>
        </loggingConfiguration>
    This also results in the same exception. I figure it's possible to configure the EmailTraceListener programmatically, but I'd prefer this to be config-driven. I also understand I can implement my own derivative of TraceListener. So, is it possible to use the Ent Lib TraceListeners without the whole Logging Application Block, and configure them from the config file? Update: After examining the code, I have discovered it is not possible. The Ent Lib TraceListeners do not actually use the config attributes they declare by overriding TraceListener.GetSupportedAttributes(), despite the recommendations in the .NET TraceListener documentation. Bug filed.

    Read the article

  • Service reference not generating client types

    - by Cranialsurge
    I am trying to consume a WCF service in a class library by adding a service reference to it. In one of the class libraries it gets consumed properly and I can access the client types in order to generate a proxy off of them. However, in my second class library (or even in a console test app), when I add the same service reference, it only exposes the types that are involved in the contract operations and not the client types for me to generate a proxy against. For example, the endpoint has 2 services exposed: ISvc1 and ISvc2. When I add a service reference to this endpoint in the first class library, I get ISvc1Client and ISvc2Client to generate proxies off of in order to use the operations exposed via those 2 contracts. In addition to these clients, the service reference also exposes the types involved in the operations (Type 1, Type 2, etc.), which is what I need. However, when I try to add a service reference to the same endpoint in another console application or class library, only Type 1, Type 2, etc. are exposed and not ISvc1Client and ISvc2Client, because of which I cannot generate a proxy to access the operations I need. I am unable to determine why the service reference gets generated properly in one class library but not in the other or in the test console app.
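    One commonly reported cause of this symptom (not confirmed by the post) is the "Reuse types in referenced assemblies" option in the Configure Service Reference dialog: if the second project already references an assembly containing the contract or data types, Visual Studio may reuse them and silently skip generating the client classes. A way to see the generation warnings the IDE hides is to run the standalone svcutil tool against the same endpoint; the URL and file names below are placeholders:
        svcutil.exe http://yourhost/YourService.svc?wsdl /out:ServiceProxy.cs /config:output.config
    If svcutil emits warnings about reused or conflicting types, unchecking "Reuse types in referenced assemblies" when adding the reference is worth trying.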

    Read the article

  • net.tcp Listener Adapter and net.tcp Port Sharing Service not starting on reboot

    - by Peter K.
    I am using the net.tcp protocol for various web services. When I reboot my Windows 7 Ultimate (64-bit) MacBook Pro, the services never restart automatically, even though they are set to start automatically. The only relevant events I can see are in the System event log:
        Error 6/9/2011 19:47 Service Control Manager 7001 None "The Net.Tcp Listener Adapter service depends on the Net.Tcp Port Sharing Service service which failed to start because of the following error: The service did not respond to the start or control request in a timely fashion."
        Error 6/9/2011 19:47 Service Control Manager 7000 None "The Net.Tcp Port Sharing Service service failed to start due to the following error: The service did not respond to the start or control request in a timely fashion."
        Error 6/9/2011 19:47 Service Control Manager 7009 None "A timeout was reached (30000 milliseconds) while waiting for the Net.Tcp Port Sharing Service service to connect."
    This post suggests that it's something else blocking the port (in the post it's the SCCM 2007 R3 client, which I don't use). What else could be the problem? If it's something else blocking the port, how do I figure out what? When I manually start the services, they start correctly. The dependency is: the Net.Tcp Listener Adapter depends on the Net.Tcp Port Sharing Service. Still no luck, but I think the problem might be that my network connection takes too long to come up. I put a custom view in the event log, and the first entry in the series says: A timeout was reached (30000 milliseconds) while waiting for the Net.Tcp Port Sharing Service service to connect.
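    A minimal sketch of one common workaround for boot-time service timeouts like this, not taken from the post and assuming the default short service names (NetTcpPortSharing and NetTcpActivator): inspect the dependency chain, then switch both services to delayed automatic start so they come up after the rest of the stack has settled:
        sc qc NetTcpActivator
        sc qc NetTcpPortSharing
        sc config NetTcpPortSharing start= delayed-auto
        sc config NetTcpActivator start= delayed-auto
    The sc qc calls only display the current configuration and dependencies; the sc config calls change the start type and need an elevated prompt.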

    Read the article

  • Ubuntu doesn't detect drives conected to LSI "Host Bus Adapter"

    - by dvrecmfo
    I purchased an LSI SAS 9300-4i Host Bus Adapter. From the card's BIOS I can see that it detects all hard drives connected to it, but from Ubuntu I can't see them. What I've tried: installed the driver provided here: http://www.lsi.com/products/host-bus-adapters/pages/lsi-sas-9300-4i.aspx#tab/tab3 (I tried both LINUX_RH_SL_OEL_CTX_MPT_GEN3_C0_Phase2.0-3.00.00.00-1 and Linux_Driver_RHEL5-6_SLES10-11_P1).
        lspci | grep -i lsi
        07:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS3004 PCI-Express Fusion-MPT SAS-3 (rev 02)
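    A quick sketch of how to confirm whether the kernel actually bound a driver to the controller, assuming a kernel recent enough to ship the upstream mpt3sas module (the in-tree driver for SAS-3 HBAs such as the SAS3004):
        lsmod | grep mpt3sas
        dmesg | grep -i -e mpt3sas -e sas3004
        sudo modprobe mpt3sas
        lsblk
    If modprobe reports that the module does not exist, the running kernel predates mpt3sas support (or was built without it), and the vendor driver has to be built against the exact kernel headers in use before the disks will show up in lsblk.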

    Read the article

  • Oracle User Communities and Enterprise Manager

    - by Anand Akela
    Contributed by Joe Dimmer, Senior Business Development Manager, Oracle Enterprise Manager
    Heightened interest and adoption of Oracle Enterprise Manager has led to keen interest in "manageability" within the user group community. In response, user groups are equipping their membership with the right tools for the implementation and use of manageability through education opportunities and Special Interest Groups. Manageability is increasingly viewed not only as a means to enable the Oracle environment to become a competitive business advantage for organizations, but also as a means to advance the individual careers of those who embrace enterprise management. Two Oracle user groups, the Independent Oracle User Group (IOUG) and the United Kingdom Oracle User Group (UKOUG), each have Special Interest Groups where manageability is prominently featured. There are also efforts underway to establish similarly chartered SIGs that will be reported in future blogs. The good news is, there's a lot of news!
    First off, the IOUG will be hosting a Summer Series of live webcasts:
    · "Configuring and Managing a Private Cloud with Enterprise Manager 12c" by Kai Yu of Dell, Inc. - Wednesday, June 20th from Noon – 1 PM CDT. Click here for details & registration.
    · "What is User Experience Monitoring and What is Not? A case study of Oracle Global IT's implementation of Enterprise Manager 12c and RUEI" by Eric Tran Le of Oracle - Wednesday, July 18th from Noon – 1 PM CDT. Click here for details & registration.
    · "Shed some light on the 'bumps in the night' with Enterprise Manager 12c" by David Start of Johnson Controls - Wednesday, August 22nd from Noon – 1 PM CDT. Click here for details & registration.
    In addition, the UKOUG Availability and Infrastructure Management (AIM) SIG is hosting its next meeting on Tuesday, July 3rd at the Met in Leeds, where EM 12c Cloud Management will be presented. Click here for details & registration.
    In future posts from Joe, look for news related to the following:
    · IOUG Community Page and Newsletter devoted to manageability
    · Full day of manageability featured during Oracle OpenWorld 2012 "SIG Sunday"
    · Happenings from other regional User Groups that feature manageability
    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • "SC.EXE config" and dollar sign in service name

    - by Joe Schmoe
    Say I want to make a Windows service's startup dependent on SQL Server. In my case the service name for SQL Server is MSSQL$SQL11 (SQL11 is the SQL Server instance name). However, when I issue this command:
        SC.EXE config MyService depend= MSSQL$SQL11
    everything after the dollar sign is ignored. When I go to the "Dependencies" tab in "Services", SQL Server is not listed. When I check the matching registry key it becomes clear why: it only contains MSSQL. At this point I have to edit the registry by hand to change MSSQL to MSSQL$SQL11, and then everything works as expected. Putting quotes around MSSQL$SQL11 doesn't help. Is there a way to specify $ in the middle of the SC.EXE argument string?
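    If SC.EXE keeps truncating at the dollar sign, one way to script the same registry edit the poster ended up doing by hand is with REG.EXE, which writes the multi-string value directly (this is a sketch, not a confirmed fix for every Windows version; MyService is the poster's placeholder service name):
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\MyService" /v DependOnService /t REG_MULTI_SZ /d "MSSQL$SQL11" /f
        sc qc MyService
    The sc qc call just echoes the service configuration back so the dependency can be verified without opening Regedit; a reboot or service restart is still needed for the dependency to take effect.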

    Read the article

  • Storage subsystem borking after server restart (all on a Parallel SCSI bus)

    - by Dat Chu
    I have a server (with a SCSI HBA) connected to two Promise VTrak M310p RAID enclosures on the same bus. Everything works fine until I have to restart the server. Once restarted, the server can no longer communicate with the enclosures: lots of read errors and bus resets. I have to turn off both enclosures, then turn off the server, then turn on the enclosures, then turn on the server for things to work again. I don't believe this is the normal behavior; what could I be missing?

    Read the article

  • Deleting windows.edb and unchecking Indexing service lead to hard drive file records swapping

    - by linni
    I followed the instructions listed here: http://www.mydigitallife.info/2007/09/18/turn-off-and-disable-search-indexing-service-in-windows-xp/ to free up space on the hard drive by deleting the windows.edb indexing file. I also stopped the Windows Search service as mentioned in the comments following the article. In addition to unchecking the "Allow Indexing Service to index this disk for fast file searching" check box on the properties dialog for the C:\ drive, I did the same for two USB-connected hard drives (J:\ and I:\). I'm not sure why I did that; I thought it might shrink the windows.edb file so I wouldn't have to delete it (which sounded a bit risky in my ears at the time). The file of course didn't shrink, so I ended up deleting it and freeing up over 3 GB of space, yeehaw.
    However, as soon as I had done this I could not access the USB-connected hard drives anymore. The error I got was "I:\photos is not accessible" "The file or directory is corrupted and unreadable" when I tried to open the photos directory on I:\. Here is where I enter the twilight zone... I try disconnecting the I:\ USB hard drive, but XP shows me that instead the J:\ drive has disconnected and I:\ is still there. So I disconnect both drives and restart the computer. I then connect one drive, but it lists the contents of the other drive at root level. I tried connecting the drives vice versa and the same thing happens. I took one of the hard drives to another computer, and when I connect it there, it lists not its own contents but the contents of the other hard drive, and gives the same error as above when I try to access any of the folders (even folders on the root that have the same name as folders on the other drive, e.g. J:\photos and I:\photos)??? And no, this is not me mixing up my drive letters.
    Computer Management - Disk Management shows the same result as Explorer: the drive size is correct (one is 500 GB, the other is 640 GB), but the drive name is that of the opposite drive, as are the contents. Also, one drive was full of data and the other almost empty, but they incorrectly show the free space status of the other drive. Somehow the USB drives seem to have switched file tables, file records, boot records or something, which is extremely weird! Even weirder, if I try to create a text file or folder on these drives it works fine (accessing them, saving, whatever, all good), but accessing any other data on the drive gives me an error.
    Does anyone have a clue what is going on and, more importantly, how I can restore the correct folder listings to access my family photos??? cheers, linni
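    Before changing anything on disks holding irreplaceable photos, a read-only consistency check is one hedged first step (no /f switch, so nothing is modified); it only reports what the file system driver thinks of each volume. Imaging the drives, or using a read-only recovery tool, before any repair attempt is the safer route:
        chkdsk I:
        chkdsk J: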

    Read the article

  • how to call wcf service from another wcf service or class library? (5 replies)

    Hi! I'm having a problem consuming a WCF service. To call this service, I created my own WCF service with the VS2008 template. Then I added a service reference to the WCF service to consume. So far so good; the service shows up in the Solution Explorer and all methods as well. Then I created a class to call the service from my own WCF service. And every time I try to create an object I get the same ...
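    The excerpt is cut off before the actual error, but one frequent cause of failures at proxy-creation time in this setup is that Add Service Reference writes the client endpoint into the class library's app.config, which is never read at run time; the same <system.serviceModel> section has to exist in the configuration file of the hosting application. A hedged sketch of what that client section typically looks like (the address, binding, and contract names below are placeholders, not taken from the post):
        <system.serviceModel>
          <client>
            <endpoint name="BasicHttpBinding_IMyService"
                      address="http://example.com/MyService.svc"
                      binding="basicHttpBinding"
                      contract="MyServiceReference.IMyService" />
          </client>
        </system.serviceModel>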

    Read the article

  • Bridging Two Worlds: Big Data and Enterprise Data

    - by Dain C. Hansen
    The big data world is all the vogue in today's IT conversations. It's a world of volume, velocity, variety, tantalizing us with its untapped potential. It's a world of transformational, game-changing technologies that have already begun to alter the information management landscape. One of the reasons that big data is so compelling is that it's a universal challenge that impacts every one of us. Whether it is healthcare, financial, manufacturing, government, or retail, big data presents a pressing problem for many industries: how can so much information be processed so quickly to deliver the 'bigger' picture? With big data we're tapping into new information that didn't exist before: social data, weblogs, sensor data, complex content, and more. What also makes big data revolutionary is that it turns traditional information architecture on its head, putting into question commonly accepted notions of where and how data should be aggregated, processed, analyzed, and stored. This is where Hadoop and NoSQL come in: new technologies which solve new problems for managing unstructured data.
    And now for some worst practices that I'd recommend you please not follow:
    Worst Practice Lesson 1: Throw away everything that you already know about data management and data integration tools, and start completely over. One shouldn't forget what's already running in today's IT. Today's business analytics, data warehouses, business applications (ERP, CRM, SCM, HCM), and even many social, mobile, and cloud applications still rely almost exclusively on structured data, or what we'd like to call enterprise data. This dilemma is what today's IT leaders are up against: what are the best ways to bridge enterprise data with big data? And what are the best strategies for dealing with the complexities of these two unique worlds?
    Worst Practice Lesson 2: Throw away all of your existing business applications ... because they don't run on big data yet. Bridging the two worlds of big data and enterprise data means considering solutions that are complete, based on emerging Hadoop technologies (as well as traditional ones), and poised for success through integrated design tools and integrated platforms that connect to your existing business applications and support real-time analytics. Leveraging these types of best practices translates to improved productivity, lowered TCO, IT optimization, and better business insights.
    Worst Practice Lesson 3: Separate out [and keep separate] your big data sandboxes from all the current enterprise IT systems. Don't mix sand among playgrounds. We didn't tell you that you wouldn't get dirty doing this. Correlation between the two worlds is key. The real advantage to analyzing big data comes when you can correlate it with the existing data in your data warehouse or your current applications to make sense of the larger patterns.
    If you have not followed these worst practices 1-3, then you qualify for the first step of our journey: bridging the two worlds of enterprise data and big data. Over the next several weeks we'll be discussing this topic along with several others around big data as it relates to data integration. We welcome you to join us in the conversation by following us on Twitter at #BridgingBigData, or download our latest white paper and resource kit: Big Data and Enterprise Data: Bridging Two Worlds.

    Read the article

  • Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) Now Available!

    - by Javier Puerta
    Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) is now available on OTN on ALL platforms. This is the first major release since the launch of Enterprise Manager 12c in October of 2011, and the first ever Enterprise Manager release available on all platforms simultaneously. This is primarily a stability release which incorporates many of the issues and feedback reported by early adopters. In addition, this release contains many new features and enhancements in areas across the board.
    New Capabilities and Features
    · Enhanced management capabilities for enterprise private clouds: introduces new capabilities to allow customers to build and manage a Java Platform-as-a-Service (PaaS) cloud based on Oracle WebLogic Server. The new capabilities include guided setup of the PaaS cloud, self-service provisioning, automatic scale-out, and metering and chargeback.
    · Enhanced lifecycle management capabilities for Oracle WebLogic Server environments: combining in-context multiple domain, patching and configuration file synchronizations.
    · Integrated hardware-software management for Oracle Exalogic Elastic Cloud through features such as rack schematics visualization and integrated monitoring of all hardware and software components.
    · The latest management capabilities for business-critical applications include Business Application Management (a new Business Application (BA) target type and dashboard with flexible definitions provides a logical view of an application's business transactions, end-user experiences and the cloud infrastructure the monitored application is running on) and Enhanced User Experience Reporting (Oracle Real User Experience Insight has been enhanced to provide reporting capabilities on client-side issues for applications running in the cloud and has been more tightly coupled with Oracle Business Transaction Management to help ensure that real-time user experience and transaction tracing data is provided to users in context).
    · Several key improvements address ease of administration, reporting and extensibility for massively scalable cloud environments, including dynamic groups, self-updateable monitoring templates, bulk operations against many events, etc.
    New and Revised Plug-ins
    Several plug-ins have been updated as a part of this release, resulting in either new versions or revisions. Revised plug-ins contain only bug fixes, while new plug-ins incorporate both bug fixes and new functionality.
    · Enterprise Manager for Oracle Database - 12.1.0.2 (revision)
    · Enterprise Manager for Oracle Fusion Middleware - 12.1.0.3 (new)
    · Enterprise Manager for Chargeback and Capacity Planning - 12.1.0.3 (new)
    · Enterprise Manager for Oracle Fusion Applications - 12.1.0.3 (new)
    · Enterprise Manager for Oracle Virtualization - 12.1.0.3 (new)
    · Enterprise Manager for Oracle Exadata - 12.1.0.3 (new)
    · Enterprise Manager for Oracle Cloud - 12.1.0.4 (new)
    Installation and Upgrade
    · All major platforms have been released simultaneously: Linux 32/64-bit, Solaris (SPARC), Solaris x86-64, IBM AIX 64-bit, and Windows x86-64 (64-bit).
    · Enterprise Manager 12.1.0.2 is a complete release that includes both the EM OMS and Agent versions of 12.1.0.2.
    · Installation options available with EM 12.1.0.2: a fresh install, or an upgrade from EM 10.2.0.5, 11.1, or 12.1.0.1 (Bundle Patch 1 not mandatory).
    · Upgrading to EM 12.1.0.2 from EM 12.1.0.1 is not a patch application (similar to Bundle Patch 1) but is achieved through a 1-system upgrade.
    Documentation
    The Oracle Enterprise Manager Cloud Control Introduction document provides a broad overview of capabilities and highlights what's new in EM 12.1.0.2. All updated Oracle Enterprise Manager documentation can be found on OTN.
    Customer Webcast - EM 12c Installation and Upgrade: this webcast is for customers who are interested in learning how to successfully deploy or upgrade to EM 12.1.0.2. Customer Webcast - Installation and Upgrade - September 21 (registration and info on OTN starting September 12).
    Enterprise Manager 12c R2 Resources: OTN Download Page, Upgrade Guide.

    Read the article

  • Oracle Enterprise Manager 12c Configuration Best Practices (Part 3 of 3)

    - by Bethany Lapaglia
    This is part 3 of a three-part blog series that summarizes the most commonly implemented configuration changes to improve performance and operation of a large Enterprise Manager 12c environment. A "large" environment is categorized by the number of agents, targets and users. See the Oracle Enterprise Manager Cloud Control Advanced Installation and Configuration Guide chapter on Sizing for more details on sizing your environment properly. Part 1 of this series covered recommended configuration changes for the OMS and repository, Part 2 covered recommended changes for the WebLogic server, and Part 3 covers general configuration recommendations and a few known issues. The entire series can be found in the My Oracle Support note titled Oracle Enterprise Manager 12c Configuration Best Practices [1553342.1].
    Configuration Recommendations
    Configure E-Mail Notifications for EM related Alerts
    In some environments, the notifications for events for different target types may be sent to different support teams (i.e. notifications on host targets may be sent to a platform support team). However, the EM application administrators should be well informed of any alerts or problems seen on the EM infrastructure components.
    Recommendation: Create a new incident rule for monitoring all EM components and set up the notifications to be sent to the EM administrator(s). The notification methods available can create or update an incident, send an email, or forward to an event connector. To set up the incident rule set, follow the steps below. Note that each individual rule in the rule set can have different actions configured.
    1. To create an incident rule for monitoring the EM components, click on Setup / Incidents / Incident Rules. On the All Enterprise Rules page, click on the out-of-box rule called "Incident management Ruleset for all targets", then click on the Actions drop-down list and select "Create Like Rule Set...".
    2. For the rule set name, enter a name such as MTM Ruleset. Under the Targets tab, select "All targets of types" and select "OMS and Repository" from the drop-down list. This target type contains all of the key EM components (OMS servers, repository, domains, etc.).
    3. Click on the Rules tab. To edit a rule, click on the rule name and click on Edit.
    4. Modify the following rules:
    a. Incident creation rule for metric alerts:
    i. Leave the Type set as is, but change the Severity to add Warning by clicking on the drop-down list and selecting "Warning". Click Next.
    ii. Add or modify the actions as required (i.e. add email notifications). Click Continue and then click Next.
    iii. Leave the Name and Description the same and click Next.
    iv. Click Continue on the Review page.
    b. Incident creation rule for target unreachable:
    i. Leave the Type set as is, but change the Target type to add OMS and Repository by clicking on the drop-down list and selecting "OMS and Repository". Click Next.
    ii. Add or modify the actions as required (i.e. add email notifications). Click Continue and then click Next.
    iii. Leave the Name and Description the same and click Next.
    iv. Click Continue on the Review page.
    5. Modify the actions for any other rule as required, and be sure to click the "Save" push button to save the rule set or all changes will be lost.
    Configure Out-of-Band Notifications for EM Agent
    Out-of-band notifications act as a backup when there's a complete EM outage or a repository database issue. This is configured on the agent of the OMS server and can be used to send emails or execute another script that would create a trouble ticket. It will send notifications about the following issues:
    · Repository database down
    · All OMSes are down
    · Repository-side collection job that is broken or has an invalid schedule
    · Notification job that is broken or has an invalid schedule
    Recommendation: To set up out-of-band notifications, refer to the MOS note "How To Setup Out Of Bound Email Notification In 12c" (Doc ID 1472854.1).
    Modify the Performance Test for the EM Console Service
    The EM Console Service has an out-of-box defined performance test that will be run to determine the status of this service. The test issues a request via an HTTP method to a specific URL. By default, the HTTP method used for this test is a GET but, for performance reasons, it should be changed to HEAD. The URL used for this request is set to point to a specific OMS server by default. If a multi-OMS system has been implemented and the OMS servers are behind a load balancer, then the URL in this section must be modified to point to the load balancer name instead of a specific server name. If this is not done and a portion of the infrastructure is down, then the EM Console Service will show as down because this test will fail.
    Recommendation: Modify the HTTP method for the EM Console Service test, and the URL if required, following the detailed steps below.
    1. Click on Targets / Services. From the list of services, click on the EM Console Service.
    2. On the EM Console Service page, click on the Test Performance tab.
    3. At the bottom of the page, click on the Web Transaction test called EM Console Service Test.
    4. Click on the Service Tests and Beacons breadcrumb near the top of the page.
    5. Under the Service Tests section, make sure the EM Console Service Test is selected and click on the Edit push button.
    6. Under the Transaction section, make sure the Access Logout page transaction is selected and click on the Edit push button.
    7. Under the Request section, change the HTTP Method from the default of GET to the recommended value of HEAD. The URL in this section must be modified to point to the load balancer name instead of a specific server name if multiple OMSes have been implemented.
    Check for Known Issues
    Job Purge Repository Job is Shown as Down
    This issue is seen after upgrading EM from 12c to 12cR2. On the Repository page under Setup / Manage Cloud Control / Repository, the job called "Job Purge" is shown as down and the Next Scheduled Run is blank. Also, repvfy reports that this is a missing DBMS_SCHEDULER job.
    Recommendation: In EM 12cR2, the apply_purge_policies have been moved from the MGMT_JOB_ENGINE package to the EM_JOB_PURGE package. To remove this error, execute the command below:
        $ repvfy verify core -test 2 -fix
    To confirm that the issue is resolved, execute:
        $ repvfy verify core -test 2
    It can also be verified by refreshing the Job Service page in EM and checking the status of the job; it should now be Up.
    Configure the Listener Targets in EM with the Listener Password (where required)
    EM will report this error every time it is encountered in the listener log file. In a RAC environment, typically the grid home and RDBMS homes are owned by different OS users. The listener always runs from the grid home. Only the listener process owner can query or change the listener properties. The listener uses a password to allow other OS users (e.g. the agent user) to query the listener process for parameters. EM has a default listener target metric that will query these properties. If the agent is not permitted to do this, the TNS incident (TNS-1190) will be logged in the listener's log file. This means that the listener targets in EM also need to have this password set. Not doing so will cause many TNS incidents (TNS-1190).
    Recommendation: Set a listener password and include it in the configuration of the listener targets in EM. For steps on setting the listener passwords, see MOS notes 260986.1 and 427422.1.
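    A minimal sketch of setting a listener password from the listener owner's account, assuming the default listener name; the exact prompts vary by version, so the MOS notes above remain the authoritative steps:
        $ lsnrctl
        LSNRCTL> set current_listener LISTENER
        LSNRCTL> change_password
        LSNRCTL> set password
        LSNRCTL> save_config
    After that, the same password is entered in the listener target's monitoring configuration in EM so the agent can query the listener without triggering TNS-1190.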

    Read the article

  • Nagios3 gives a warning on HTTP service monitoring

    - by Dez
    I already set up my local network configuration to be monitored by Nagios3. I found a problem: Nagios3 reports a warning for the HTTP monitoring service of a Debian server at IP 192.168.1.52, which has an individual virtual host and a mass virtual host for application development. I get this status message: HTTP WARNING: HTTP/1.1 404 Not Found. I used the Nagios tools to check; servername is the server name of the vhost I used in the Apache configuration.
        /usr/lib/nagios/plugins/check_http -H servername -I 192.168.1.52
    gives this status message:
        HTTP OK HTTP/1.1 200 OK - 37900 bytes in 0.504 seconds |time=0.503946s;;;0.000000 size=37900B;;;0
    But when I check like this:
        /usr/lib/nagios/plugins/check_http -I 192.168.1.52
    I get the same status message as the warning, so I assume I don't have Nagios completely set up because it doesn't recognize the vhosts for that server the way the manual check_http call shows it should. Where should I look to fix that warning?
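    One way this is commonly handled is to pass the virtual host name to check_http with -H while still pointing -I at the address, mirroring the manual call above that returned 200 OK. A sketch of what that could look like in the Nagios3 object configuration (the command name, host_name and service template below are placeholders, not taken from the post):
        define command{
                command_name    check_http_vhost
                command_line    $USER1$/check_http -H $ARG1$ -I $HOSTADDRESS$
                }
        define service{
                use                     generic-service
                host_name               debian-server
                service_description     HTTP servername vhost
                check_command           check_http_vhost!servername
                }
    The default check_http command shipped with the Debian packages only passes the address, which is why the plain -I check (and therefore the Nagios service) hits the default vhost and gets the 404.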

    Read the article

  • Missing NFS service link?

    - by Recc
    # ps ax | grep nfs
        1108 ?        S<     0:00 [nfsd4]
        1109 ?        S<     0:00 [nfsd4_callbacks]
        1110 ?        S      0:00 [nfsd]
        1111 ?        S      0:00 [nfsd]
        1112 ?        S      0:00 [nfsd]
        1113 ?        S      0:00 [nfsd]
        1114 ?        S      0:00 [nfsd]
        1115 ?        S      0:00 [nfsd]
        1116 ?        S      0:00 [nfsd]
        1117 ?        S      0:00 [nfsd]
        4437 ?        S<     0:00 [nfsiod]
        16799 ?        S      0:00 [nfsv4.0-svc]
        18091 pts/1    S+     0:00 grep nfs
    But:
        # service nfs status
        nfs: unrecognized service
    This is on Ubuntu 11.04. Am I missing a symlink or something? How can I fix this quickly?
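    A quick sketch of what to check, assuming the stock Ubuntu packaging: on Debian/Ubuntu the init script installed by the nfs-kernel-server package is called nfs-kernel-server (the plain "nfs" service name is a Red Hat convention), so the status command needs that name:
        ls /etc/init.d/ | grep -i nfs
        service nfs-kernel-server status
        service nfs-kernel-server restart
    The ls just confirms which NFS-related scripts are actually installed before calling service on one of them.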

    Read the article

  • SharePoint 2010 Hosting :: Sending SMS Alerts in SharePoint 2010 Over Office Mobile Service Protocol (OMS)

    - by mbridge
    In this post, I want to share the exciting news about one of SharePoint 2010's new features: it is finally possible to send SMS directly from SharePoint to mobile phones. The advantages of sending SMS instead of email messages are obvious: SMS alerts or reminders received on mobile phones are preferable to email messages, which can get lost in the mass of spam. The interface is standard, as it's very similar to previous versions of the product. Adjustments are easy to do: simply enter the address of the Office Mobile Service (OMS) web service which you want to use for sending messages, then specify the connection parameters. Further details on Office Mobile Service are available below. The Test Service button checks whether the OMS web service is accessible using the provided URL (the user name and password are not verified). This check is needed because the OMS web service URL depends on the mobile operator and country. It's now possible to select the method of sending alerts in the alert settings; the email option is selected by default. The alert delivery method is displayed in the list of existing alerts.
    Office Mobile Service (OMS)
    SharePoint 2010 uses external servers, similar to SMTP servers, for sending SMS alerts. However, Microsoft started development and promotion of their own protocol instead of using existing ones. That is how Office Mobile Service (OMS) appeared. This open protocol enables clients to send text and multimedia messages (mobile messages) remotely to the server, which processes these messages and delivers them to mobile phones. A typical scenario for this protocol is data transfer between a computer application and a mobile phone. The recipient can answer messages, and the server in return will deliver the answer by SMTP protocol, i.e. by email. A key quality of this protocol is that it is built on top of HTTP(S) and SOAP. This means that, in effect, the SMS gateway must support a typed web service. What do you get from the web service? The ability to send SMS from any platform you want. The protocol is still being developed; version 0.2 from 08/28/2009 was available when this article was published.
    To promote the protocol and simplify finding a server, Microsoft provides the web service http://messaging.office.microsoft.com/HostingProviders.aspx, which returns the list of providers that support the OMS protocol and can deliver messages to your operator. All you need to do is decide which provider to use, complete the agreement, then adjust the SharePoint connection parameters and start working. Some providers advertise themselves not only to clients but to mobile operators as well, offering automatic addition to the list of Office Mobile Service providers. To view the full specification of OMS, please go to http://msdn.microsoft.com/en-us/library/dd774103.aspx.

    Read the article

  • Limit on WMIC requests from a Windows Service

    - by Anders
    Hi all, does anyone know if there is a limit on how many WMIC requests Windows can handle simultaneously when they originate from a Windows service? The reason I'm asking is that my application fails when too many simultaneous requests have been initiated; I don't get any data back from the application. However, if I compile the Python application and run it as a standalone application, all works fine. The wmic calls look like this:
        subprocess.Popen("wmic path Win32_PerfFormattedData_PerfOS_Memory get CommittedBytes", stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    This makes me wonder: is there a limit on Windows services and what they can perform? I mean, if the .exe file can handle all requests, then it must have something to do with the fact that I have compiled it as a Windows service.
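    The post doesn't say which service wrapper is used, but one low-risk experiment is to cap how many wmic child processes run at once and see whether the failures stop. A minimal sketch in the same spirit as the call above (the limit of 4 is an arbitrary assumption, and run_wmic is a hypothetical helper, not part of the poster's code):
        import subprocess
        import threading

        # Cap concurrent wmic child processes; 4 is an arbitrary starting point.
        _wmic_slots = threading.BoundedSemaphore(4)

        def run_wmic(arguments):
            """Run one wmic query while holding a semaphore slot and return its stdout."""
            with _wmic_slots:
                proc = subprocess.Popen(
                    "wmic " + arguments,
                    stdout=subprocess.PIPE,
                    stderr=subprocess.PIPE)
                out, err = proc.communicate()
                return out

        # Example, mirroring the query from the post:
        # run_wmic("path Win32_PerfFormattedData_PerfOS_Memory get CommittedBytes")
    If the throttled version works under load while the unthrottled one does not, that points at per-session resource limits on the service account rather than at wmic itself.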

    Read the article

  • Multiple Denial of Service vulnerabilities in Quagga

    - by chandan
    CVE / Description / CVSSv2 Base Score:
    · CVE-2007-4826 - Denial of Service (DoS) vulnerability - 3.5
    · CVE-2009-1572 - Denial of Service (DoS) vulnerability - 5.0
    · CVE-2010-1674 - Denial of Service (DoS) vulnerability - 5.0
    · CVE-2010-1675 - Denial of Service (DoS) vulnerability - 5.0
    · CVE-2010-2948 - Denial of Service (DoS) vulnerability - 6.5
    · CVE-2010-2949 - Denial of Service (DoS) vulnerability - 5.0
    Component: Quagga. Product and Resolution: Solaris 10 (SPARC: 126206-09, X86: 126207-09) and Solaris 11 11/11 SRU 4.
    This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • The Social Enterprise: Gangnam Style

    - by Mike Stiles
    Are only small and medium businesses able to put social strategies in place, generate consistent, compelling content for customers, and be nimble enough to listen and respond to the social communities they build? Or are enterprise organizations eagerly and effectively adopting social as well? It depends on whom inside the organization you ask.
    A study from Attensity looked at who "gets" social inside enterprise organizations. The results were unsurprising. Mostly, Generation X and Y employees who came of age with social as part of their lives and as a key communications vehicle understand it. Imagine being a 25-year-old at a company that bans employees from accessing Facebook at work. You may as well tell them they can't use phones and must do all calculations on an abacus. To them, such policy is absent of real-world logic and signals to them the organization is destined to be the victim of an up-and-comer.
    After that, it's senior management that gets social. You don't get to be in senior management without reading a few things and paying attention. Most senior managers are well aware of the impact social has had and will have, though they may be unsure of what to do about it. The better ones will utilize those on the inside who do inherently know how to communicate and build virtual relationships using social. The very best will get the past out of the way for these social innovators, so the new communications can be enacted minus counterproductive dictums, double-clutching, meeting-creep, and all the other fading internal practices that water down content and impede change.
    Organizationally, the Attensity study found 81% of enterprise companies believe failing to embrace social will result in their being left behind. Yet our old friend fear still has many captive in its clutches. 79% feel overwhelmed by the volume of social data available, something a social technology partner with goal-oriented analytics expertise could go a long way toward alleviating. Then there's the fear of social having a negative impact. This comes from a lack of belief in the product, the customer service, or both. The public uses social not to go out and slay brands. They're using it to be honest. If the fear is that honesty will reflect badly on the brand, the brand has much bigger, broader problems than what happens on Facebook.
    Sadly, most enterprise organizations still see social as a megaphone, a one-way channel with which to hit people with ads. They either don't understand social relationships, or don't want any. The truly unenlightened manager will always say, "We help them by selling them our stuff." "Brand affinity" is a term, it's just not one assigned much value in enterprise organizations.
    Which brings us to Psy, the Korean performer whose Internet video phenom "Gangnam Style," as of this writing, has been viewed 438,550,238 times on YouTube. It's bigger than anything a brand will probably ever publish. Most brands would never have seen the point of making or publishing it. But a funny thing happened on the way to Internet success. The video literally doubled the stock price of Psy's father's software firm. NH Investment and Securities said, "The positive sentiment has attracted investors just because of the fact the company is owned by Psy's father and uncle." The company wasn't mentioned or seen in the video in any way, yet reaped tangible rewards just for being tangentially associated with it. Imagine your brand being visibly and directly responsible for such a smash and tell me it's worthless.
    When enterprise organizations embrace the value of igniting passions, making people happier, solving their problems, informing them, helping them have fun, etc., then they will have fully embraced social, and will reap the brand affinity rewards of heightened awareness, brand loyalty and, yes, sales.

    Read the article

  • E-Bus ATG Advisor Webcast program - June Edition

    - by cwarticki
    E-Business Suite Applications Technology Group (ATG) Advisor Webcast Program – June 2014
    Thursday June 12, 2014 at 18:00 UK / 10:00 PST / 11:00 MST / 13:00 EST: EBS Patch Wizard Overview
    · What is the Oracle EBS Patch Wizard tool?
    · Benefits of the Patch Wizard utility
    · Patch Wizard interface
    · Patch impact analysis
    Details & Registration: Note 1672371.1 / Direct registration link
    Thursday June 26, 2014 at 18:00 UK / 10:00 PST / 11:00 MST / 13:00 EST: EBS Proactive Analyzers Overview
    · What are Oracle Support Analyzers?
    · How to identify the Analyzers available
    · Short Analyzers overview
    · Patch impact analysis
    · How to use an Analyzer
    Details & Registration: Note 1672363.1 / Direct registration link
    If you have any questions about the schedules, or if you have a suggestion for an Advisor Webcast to be planned in the future, please send an e-mail to Ruediger Ziegler.

    Read the article

  • Difference between two ways of installing tomcat as a service (Linux)

    - by varesa
    I am installing tomcat on a linux server, and would want it to be available as a service. I have found two different ways to achieve this. The first one is to copy the daemon.sh from $CATALINA_HOME/bin to /etc/init.d, and the other one I have seen is to create a simple init script that class $CATALINA_HOME/bin/startup.sh, etc. Startup.sh calls catalina.sh. The contents of the daemon.sh and startup.sh look very similar (at least for the env variables, and stuff like that). Daemon.sh calls jsvc in the end. Catalina.sh calls java. What is the (practical) difference between using the two of these when setting up tomcat as a service?

    Read the article
