Search Results

Search found 61242 results on 2450 pages for 'service name'.

Page 14/2450 | < Previous Page | 10 11 12 13 14 15 16 17 18 19 20 21  | Next Page >

  • Add Service Reference is generating Message Contracts

    - by JohnIdol
    OK, this has been haunting me for a while, I can't find much on Google and I am starting to lose hope, so I am reverting to the SO community. When I import a given service using "Add Service Reference" in Visual Studio 2008 (SP1), all the Request/Response messages are being unnecessarily wrapped into Message Contracts (named "operationName" + "Request"/"Response", with a "1" at the end). The code generator says:
        // CODEGEN: Generating message contract since the operation XXX is neither RPC nor document wrapped.
    The guys who are generating the WSDL from a Java service say they are specifying DOCUMENT-LITERAL/WRAPPED. Any help/pointer/clue would be highly appreciated.
    Update: this is a sample of my WSDL for one of the operations that look suspicious. Note the mismatch on the message element attribute for the request, compared to the response.
        <!-- imports namespaces and defines elements -->
        <wsdl:types>
          <xsd:schema targetNamespace="http://WHATEVER/" xmlns:xsd_1="http://WHATEVER_1/" xmlns:xsd_2="http://WHATEVER_2/">
            <xsd:import namespace="http://WHATEVER_1/" schemaLocation="WHATEVER_1.xsd"/>
            <xsd:import namespace="http://WHATEVER_2/" schemaLocation="WHATEVER_2.xsd"/>
            <xsd:element name="myOperationResponse" type="xsd_1:MyOperationResponse"/>
            <xsd:element name="myOperation" type="xsd_1:MyOperationRequest"/>
          </xsd:schema>
        </wsdl:types>
        <!-- declares messages - NOTE the mismatch on the request element attribute compared to response -->
        <wsdl:message name="myOperationRequest">
          <wsdl:part element="tns:myOperation" name="request"/>
        </wsdl:message>
        <wsdl:message name="myOperationResponse">
          <wsdl:part element="tns:myOperationResponse" name="response"/>
        </wsdl:message>
        <!-- operations -->
        <wsdl:portType name="MyService">
          <wsdl:operation name="myOperation">
            <wsdl:input message="tns:myOperationRequest"/>
            <wsdl:output message="tns:myOperationResponse"/>
            <wsdl:fault message="tns:myOperationFault" name="myOperationFault"/>
            <wsdl:fault message="tns:myOperationFault1" name="myOperationFault1"/>
          </wsdl:operation>
        </wsdl:portType>
    Update 2: I pulled all the types that I had in my imported namespace (they were in a separate XSD) into the WSDL, as I suspected the import could be triggering the message contract generation. To my surprise that was not the case, and having all the types defined in the WSDL did not change anything. I then (out of desperation) started constructing WSDLs from scratch, and by playing with the maxOccurs attributes of elements contained in a sequence I was able to reproduce the undesired message contract generation behavior. Here's a sample of an element:
        <xsd:element name="myElement">
          <xsd:complexType>
            <xsd:sequence>
              <xsd:element minOccurs="0" maxOccurs="1" name="arg1" type="xsd:string"/>
            </xsd:sequence>
          </xsd:complexType>
        </xsd:element>
    Playing with maxOccurs on elements that are used as messages (all requests and responses, basically), the following happens:
    - maxOccurs = "1" does not trigger the wrapping
    - maxOccurs > 1 triggers the wrapping
    - maxOccurs = "unbounded" triggers the wrapping
    I was not able to reproduce this on my production WSDL yet because the nesting of the types goes very deep, and it's going to take me time to inspect it thoroughly. In the meanwhile I am hoping it might ring a bell - any help highly appreciated.

    Read the article

  • net.tcp Listener Adapter and net.tcp Port Sharing Service not starting on reboot

    - by Peter K.
    I am using the net.tcp protocol for various web services. When I reboot my Windows 7 Ultimate (64-bit) MacBook Pro, the services never restart automatically, even though that is how they are set. The only relevant events I can see are in the System Event Log:
        Error 6/9/2011 19:47 Service Control Manager 7001 None: The Net.Tcp Listener Adapter service depends on the Net.Tcp Port Sharing Service service which failed to start because of the following error: "The service did not respond to the start or control request in a timely fashion."
        Error 6/9/2011 19:47 Service Control Manager 7000 None: The Net.Tcp Port Sharing Service service failed to start due to the following error: "The service did not respond to the start or control request in a timely fashion."
        Error 6/9/2011 19:47 Service Control Manager 7009 None: A timeout was reached (30000 milliseconds) while waiting for the Net.Tcp Port Sharing Service service to connect.
    This post suggests that it's something else blocking the port (in the post it's the SCCM 2007 R3 Client, which I don't use). What else could be the problem? If it's something else blocking the port, how do I figure out what? When I manually start the services, they start correctly. Dependencies are: Net.Tcp Port Sharing Service, then Net.Tcp Listener Adapter. Still no luck, but I think the problem might be that my network connection takes too long to come up. I put a custom view into the event log and found these items; the first in the series says: A timeout was reached (30000 milliseconds) while waiting for the Net.Tcp Port Sharing Service service to connect.

    Read the article

  • LLBLGen Pro feature highlights: automatic element name construction

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system) One of the things one might take for granted, but which has a huge impact on the time spent in an entity modeling environment, is the way the system creates names for elements out of the information provided, in short: automatic element name construction. Element names are created in both directions of modeling, database first and model first, and the more names the system can create for you without you having to rename them, the better. LLBLGen Pro has a rich, fine grained system for creating element names out of the meta-data available, which I'll describe in more detail below. First the model element related naming features are highlighted, in the section Automatic model element naming features, and after that I'll go into more detail about the relational model element naming features LLBLGen Pro has to offer, in the section Automatic relational model element naming features.
    Automatic model element naming features
    When working database first, the element names in the model, e.g. entity names, entity field names and so on, are in general determined from the relational model element (e.g. table, table field) they're mapped on, as the model elements are reverse engineered from these relational model elements. It doesn't take rocket science to automatically name an entity Customer if the entity was created after reverse engineering a table named Customer. It gets a little trickier when the entity which was created by reverse engineering a table called TBL_ORDER_LINES has to be named 'OrderLine' automatically. Automatic model element naming also takes effect with model first development, where some settings are used to provide you with a default name, e.g. in the case of navigator name creation when you create a new relationship. The features below are available to you in the Project Settings. Open Project Settings on a loaded project and navigate to Conventions -> Element Name Construction.
    Strippers!
    The above example 'TBL_ORDER_LINES' shows that some parts of the table name might not be needed for name creation, in this case the 'TBL_' prefix. Some 'brilliant' DBAs even add suffixes to table names, fragments you might not want to appear in the entity names. LLBLGen Pro lets you define both prefix and suffix fragments to strip off of table, view, stored procedure, parameter, table field and view field names. In the example above, the fragment 'TBL_' is a good candidate for such a strip pattern. You can specify more than one pattern for e.g. the table prefix strip pattern, so even a really messy schema can still be used to produce clean names.
    Underscores Be Gone
    Another thing you might want to get rid of are underscores. After all, most naming schemes for entities and their classes use PasCal casing rules and don't allow underscores to appear. LLBLGen Pro can automatically strip out underscores for you. It's an optional feature, so if you like the underscores, you're not forced to see them go: LLBLGen Pro will leave them alone when ordered to do so.
    PasCal everywhere... or not, your call
    LLBLGen Pro can automatically PasCal case names on word breaks. It determines word breaks in a couple of ways: a space marks a word break, an underscore marks a word break and a case difference marks a word break. It will remove spaces in all cases, and based on the underscore removal setting, keep or remove the underscores, upper-case the first character of a word break fragment, and lower-case the rest.
    Say we keep the defaults, which are to remove underscores, always PasCal case, and strip the TBL_ fragment. With our example TBL_ORDER_LINES, after stripping TBL_ from the table name we get two word fragments: ORDER and LINES. The underscores are removed, the first character of each fragment is upper-cased, the rest lower-cased, so this results in OrderLines. Almost there!
    Pluralization and Singularization
    In general entity names are singular, like Customer or OrderLine, so LLBLGen Pro offers a way to singularize the names. This will convert OrderLines, the result we got after the PasCal casing functionality, into OrderLine, exactly what we're after.
    Show me the patterns!
    There are other situations in which you want more flexibility. Say you have an entity Customer and an entity Order, and there's a foreign key constraint defined from the target of Order to the target of Customer. This foreign key constraint results in a 1:n relationship between the entities Customer and Order. A relationship has navigators mapped onto the relationship in both entities the relationship is between. For this particular relationship we'd like to have Customer as navigator in Order and Orders as navigator in Customer, so the relationship becomes Customer.Orders 1:n Order.Customer. To control the naming of these navigators for the various relationship types, LLBLGen Pro defines a set of patterns which allow you, using macros, to define what the auto-created navigator names will look like. For example, if you'd rather have Customer.OrderCollection, you can do so by changing the pattern from {$EndEntityName$P} to {$EndEntityName}Collection. The $P directive makes sure the name is pluralized, which is not what you want if you're going for <EntityName>Collection, hence it's removed. When working model first, it's a given you'll create foreign key fields along the way when you define relationships. For example, you've defined two entities, Customer and Order, and they have their fields set up properly. Now you want to define a relationship between them. This will automatically create a foreign key field in the Order entity, which reflects the value of the PK field in Customer. (No worries if you hate the foreign key fields in your classes; on NHibernate and EF these can be hidden in the generated code if you want to.) A specific pattern is available for you to direct LLBLGen Pro how to name this foreign key field. For example, if all your entities have Id as PK field, you might want to have a different name than Id as foreign key field. In our Customer - Order example, you might want to have CustomerId instead as foreign key name in Order. The pattern for foreign key fields gives you that freedom.
    Abbreviations... make sense of OrdNr and friends
    I already described word breaks in the PasCal casing paragraph and how they're used for PasCal casing the constructed name. Word breaks are used for another neat feature LLBLGen Pro has to offer: abbreviation support. Burt, your friendly DBA in the dungeons below the office, has a hate-hate relationship with his keyboard: he can't stand it; typing is something he avoids like the plague. This has resulted in tables and fields which have names which are very short, but also very unreadable. Example: our TBL_ORDER_LINES example has a lovely field called ORD_NR. What you would like to see in your fancy new OrderLine entity mapped onto this table is a field called OrderNumber, not a field called OrdNr. What you would also like is to not have to rename that field manually.
    There are better things to do with your time, after all. LLBLGen Pro has you covered. All it takes is defining some abbreviation / full word pairs, and during reverse engineering of model elements from tables/views, LLBLGen Pro will take care of the rest. For the ORD_NR field, you need two pairs: ORD as abbreviation and Order as full word, and NR as abbreviation and Number as full word. LLBLGen Pro will now convert every word fragment found via the word breaks which matches an abbreviation into the given full word. The abbreviations are case sensitive and can be found in the Project Settings: navigate to Conventions -> Element Name Construction -> Abbreviations.
    Automatic relational model element naming features
    Not everyone works database first: it may very well be the case that you start from scratch, or have to add additional tables to an existing database. For these situations, it's key that you have the flexibility to control the created table names and table fields without any work: let the designer create these names based on the entity model you defined and a set of rules. LLBLGen Pro offers several features in this area, which are described in more detail below. These features are found in Project Settings: navigate to Conventions -> Model First Development.
    Underscores, welcome back!
    Not every database is case insensitive, and not every organization requires PasCal cased table/field names; some demand all lower or all upper case names with underscores at word breaks. Say you create an entity model with an entity called OrderLine. You work with Oracle and your organization requires underscores at word breaks: a table created from OrderLine should be called ORDER_LINE. LLBLGen Pro allows you to do that: with a simple checkbox you can order LLBLGen Pro to insert an underscore at each word break for the type of database you're working with, case sensitive or case insensitive. Checking the checkbox Insert underscore at word break case insensitive dbs will let LLBLGen Pro create a table from the entity called Order_Line. Half-way there, as there are still lower case characters there and you need all caps. No worries, see below.
    Casing directives so everyone can sleep well at night
    For case sensitive databases and case insensitive databases there is one setting for each which controls the casing of the name created from a model element (e.g. a table created from an entity definition using the auto-mapping feature). The settings can have the following values: AsProjectElement, AllUpperCase or AllLowerCase. AsProjectElement is the default, and it keeps the casing as-is. In our example, we need to get all upper case characters, so we select AllUpperCase for the setting for case sensitive databases. This will produce the name ORDER_LINE.
    Sequence naming after a pattern
    Some databases support sequences, and using model-first development it's key that sequences, when needed, are created automatically and, if possible, using a name which shows where they're used. Say you have an entity Order and you want to have the PK values be created by the database using a sequence. The database you're using supports sequences (e.g. Oracle) and, as you want all numeric PK fields to be sequenced, you have enabled this with the setting Auto assign sequences to integer pks.
    When you're using LLBLGen Pro's auto-map feature to create new tables and constraints from the model, it will create a new table, ORDER, based on the settings I discussed above, with a PK field ID, and it also creates a sequence, SEQ_ORDER, which is auto-assigned to the ID field mapping. The name of the sequence is created using a pattern, defined in the Model First Development setting Sequence pattern, which uses plain text and macros like the other patterns previously discussed.
    Grouping and schemas
    When you start from scratch, and you're working model first, the tables created by LLBLGen Pro will be in a catalog and/or schema created by LLBLGen Pro as well. If you use LLBLGen Pro's grouping feature, which allows you to group entities and other model elements into groups in the project (described in a future blog post), you might want to have that group name reflected in the schema name the targets of the model elements are in. Say you have a model with a group CRM and a group HRM, both with entities unique to these groups, e.g. Employee in HRM, Customer in CRM. When auto-mapping this model to create tables, you might want to have the table created for Employee in the HRM schema but the table created for Customer in the CRM schema. LLBLGen Pro will do just that when you check the setting Set schema name after group name to true (default). This gives you total control over what is placed where in the database from your model.
    But I want plural table names... and TBL_ prefixes!
    For now we follow best practices, which suggest singular table names and no prefixes/suffixes for names. Of course that won't keep everyone happy, so we're looking into making it possible to have that in a future version.
    Conclusion
    LLBLGen Pro offers a variety of options to let the modeling system do as much work for you as possible. Hopefully you enjoyed this little highlight post and it has given you new insights into the smaller features available to you in LLBLGen Pro, ones you might not have thought of in the first place. Enjoy!
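    As a rough illustration of the pipeline described above (strip patterns, word breaks, abbreviations, PasCal casing, singularization), here is a minimal Python sketch; the prefix list, the abbreviation pairs and the naive singularization are placeholders, and the real designer also handles case-difference word breaks, suffixes and proper pluralization rules:
        import re

        PREFIX_STRIPS = ("TBL_",)                              # strip patterns
        ABBREVIATIONS = {"ORD": "Order", "NR": "Number"}       # abbreviation -> full word

        def model_name(db_name, singularize=False):
            for prefix in PREFIX_STRIPS:                       # 1. strip known prefixes
                if db_name.upper().startswith(prefix):
                    db_name = db_name[len(prefix):]
            # 2. word breaks: underscores and spaces (case-difference breaks omitted here)
            fragments = [f for f in re.split(r"[_\s]+", db_name) if f]
            # 3. expand abbreviations, otherwise PasCal case the fragment
            words = [ABBREVIATIONS.get(f.upper(), f.capitalize()) for f in fragments]
            name = "".join(words)
            if singularize and name.endswith("s"):             # 4. naive singularization
                name = name[:-1]
            return name

        print(model_name("TBL_ORDER_LINES", singularize=True))  # -> OrderLine
        print(model_name("ORD_NR"))                              # -> OrderNumber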

    Read the article

  • Deleting windows.edb and unchecking Indexing service lead to hard drive file records swapping

    - by linni
    I followed the instructions listed here: http://www.mydigitallife.info/2007/09/18/turn-off-and-disable-search-indexing-service-in-windows-xp/ to free up space on my hard drive by deleting the windows.edb indexing file... I also stopped the Windows Search service as mentioned in the comments following the article. In addition to unchecking the "Allow Indexing Service to index this disk for fast file searching" check box on the properties dialog for the C:\ drive, I did the same for two USB connected hard drives (J:\ and I:\). I'm not sure why I did that; I thought it might shrink the windows.edb file so I wouldn't have to delete it (which sounded a bit risky in my ears at the time). The file of course didn't shrink, so I ended up deleting it and freeing up over 3 GB of space, yeehaw. However, as soon as I had done this I could not access the USB connected hard drives anymore. The error I got was "I:\photos is not accessible" "The file or directory is corrupted and unreadable" when I tried to open the photos directory on I:\ Here is where I enter the twilight zone... I try disconnecting the I:\ USB hard drive, but XP shows me that instead the J:\ drive has disconnected and I:\ is still there. So I disconnect both drives and restart the computer. I then connect one drive, but it lists the contents of the other drive at root level. I tried connecting the drives vice versa and the same thing happens. I try taking one of the hard drives to another computer and when I connect it there, it lists not its own contents but the contents of the other hard drive, and gives the same error as above when I try to access any of the folders (even folders on the root that have the same name as folders on the other drive, e.g. J:\photos and I:\photos)??? And no, this is not me mixing up my drive letters. Computer Management -> Disk Management shows the same result as Explorer: the drive size is correct (one is 500GB, the other is 640GB) but the drive name is that of the opposite drive, as are the contents. Also, one drive was full of data and the other almost empty, but they incorrectly show the free space status of the other drive. Somehow the USB drives seem to have switched file tables, file records, boot records or something, extremely weird! Even weirder, if I try to create a text file or folder on this drive, it works fine, accessing them, saving, whatever, all good, but accessing any other data on the drive gives me an error. Does anyone have a clue what is going on and, more importantly, how I can restore the correct folder listings to access my family photos??? cheers, linni

    Read the article

  • how to call wcf service from another wcf service or class library? (5 replies)

    Hi! I'm having a problem consuming a WCF Service. To call this Service, I created my own WCF Service with VS2008's own template. Then I added a Service Reference to the WCF Service to consume. So far so good; the service shows up in the Solution Explorer and so do all its methods. Then I created a Class to call the Service from my own WCF Service, and every time I try to create an object I get the same ...

    Read the article

  • Nagios3 gives a warning on HTTP service monitoring

    - by Dez
    I have already set up my local net configuration to be monitored by Nagios3. I found a problem: Nagios3 reports a warning for the HTTP monitoring service of a Debian server at IP 192.168.1.52, which has an individual virtual host and a mass virtual host for application development. I get this status message: HTTP WARNING: HTTP/1.1 404 Not Found. I used the Nagios tools to check (servername is the vhost server name I used in the Apache configuration):
        /usr/lib/nagios/plugins/check_http -H servername -I 192.168.1.52
    and received this status message:
        HTTP OK HTTP/1.1 200 OK - 37900 bytes in 0.504 seconds |time=0.503946s;;;0.000000 size=37900B;;;0
    But when I check like this:
        /usr/lib/nagios/plugins/check_http -I 192.168.1.52
    I get the same status message as the warning, so I assume that I don't have Nagios completely set up correctly: it doesn't recognize the vhosts for that server the way it should, as the check_http output shows. Where should I look to fix that warning?
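    A likely explanation (an assumption, since the Apache config isn't shown) is that without -H the check hits the default vhost at 192.168.1.52, which answers / with a 404, while the named vhost only responds when the Host header matches servername. The difference can be reproduced outside Nagios with a small Python sketch:
        import http.client

        for host_header in ("192.168.1.52", "servername"):   # default vhost vs named vhost
            conn = http.client.HTTPConnection("192.168.1.52", 80, timeout=10)
            conn.request("GET", "/", headers={"Host": host_header})
            resp = conn.getresponse()
            print(host_header, "->", resp.status, resp.reason)
            conn.close()
    If the two lines differ the same way the two check_http calls do, the fix is to make the Nagios service definition pass the vhost name (the -H argument) to check_http, however that command happens to be defined in your setup.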

    Read the article

  • Missing NFS service link?

    - by Recc
    # ps ax | grep nfs
     1108 ?        S<     0:00 [nfsd4]
     1109 ?        S<     0:00 [nfsd4_callbacks]
     1110 ?        S      0:00 [nfsd]
     1111 ?        S      0:00 [nfsd]
     1112 ?        S      0:00 [nfsd]
     1113 ?        S      0:00 [nfsd]
     1114 ?        S      0:00 [nfsd]
     1115 ?        S      0:00 [nfsd]
     1116 ?        S      0:00 [nfsd]
     1117 ?        S      0:00 [nfsd]
     4437 ?        S<     0:00 [nfsiod]
    16799 ?        S      0:00 [nfsv4.0-svc]
    18091 pts/1    S+     0:00 grep nfs
    But:
    # service nfs status
    nfs: unrecognized service
    That'll be on Ubuntu 11.04. Am I missing a symlink or something? How can I fix this quickly?

    Read the article

  • SharePoint 2010 Hosting :: Sending SMS Alerts in SharePoint 2010 Over Office Mobile Service Protocol (OMS)

    - by mbridge
    In this post, I want to share exciting news about a new SharePoint 2010 feature: finally it's possible to send SMS directly from SharePoint to mobile phones. The advantages of sending SMS instead of Email messages are obvious: SMS alerts or reminders received on mobile phones are preferred over Email messages that can be lost in the mass of spam. The interface is standard, as it's very similar to previous versions of the product. Adjustments are easy to do: simply enter the address of the Office Mobile Service (OMS) web service which you want to use for sending messages, then specify the connection parameters. Further details on Office Mobile Service are available below. The Test Service button checks whether the OMS web service is accessible using the provided URL (user name and password are not verified). This check is needed because the OMS web-service URL depends on the mobile operator and country. It's now possible to select the method of sending alerts in the alerts settings; the Email option is selected by default. The alerts delivery method is displayed in the list of existing alerts.
    Office Mobile Service (OMS)
    SharePoint 2010 uses external servers, similar to SMTP servers, for sending SMS alerts. However, Microsoft started development and promotion of their own protocol instead of using existing ones. That is how Office Mobile Service (OMS) appeared. This open protocol enables clients to send text and multimedia messages (mobile messages) remotely to the server, which processes these messages and delivers them to mobile phones. A typical scenario for this protocol is data transfer between a computer application and a mobile phone. The recipient can answer messages and the server in return will deliver the answer by SMTP protocol, i.e. by email. A key quality of this protocol is that it's built on top of HTTP(S) and SOAP. This means that, in fact, the SMS gateway must support a typed web service. What do you get from the web service? The ability to send SMS from any platform you want. The protocol is still being developed; version 0.2 from 08/28/2009 was available when the article was published. To promote their protocol and simplify server search, Microsoft provides the page http://messaging.office.microsoft.com/HostingProviders.aspx, which lists the providers that support the OMS protocol and message delivery to your operator. All you need to do is decide which provider to use, complete the agreement, then adjust the SharePoint connection parameters and start working. Some providers advertise themselves not only to clients but to mobile operators as well; they offer automatic addition to the list of Office Mobile Service Providers. To view the full specifications of OMS, please go to http://msdn.microsoft.com/en-us/library/dd774103.aspx.

    Read the article

  • Limit on WMIC requests from a Windows Service

    - by Anders
    Hi all, Does anyone know if there is a limit on how many wmic requests Windows can handle simultaneously if they originate from a Windows service? The reason I'm asking is that my application fails when too many simultaneous requests have been initiated; I don't get any data back from the application. However, if I compile the Python application and run it as a stand-alone application, all works fine. The wmic calls look like this:
        subprocess.Popen("wmic path Win32_PerfFormattedData_PerfOS_Memory get CommittedBytes", stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    This makes me wonder, is there a limit on Windows services and what they can perform? I mean, if the .exe file can handle all requests, then it must have something to do with the fact that I have compiled it as a Windows service.
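    One thing worth ruling out (an assumption, since the rest of the service code isn't shown) is a pipe-buffer deadlock: if many wmic children are started with stdout=PIPE/stderr=PIPE and their output isn't drained, the children block as soon as a pipe buffer fills, which under load looks exactly like "no data back". A sketch of a safer pattern in Python 3, draining each child with communicate() and capping how many run at once:
        import subprocess
        from concurrent.futures import ThreadPoolExecutor

        def query_wmic(args):
            proc = subprocess.Popen(
                "wmic " + args,
                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            out, err = proc.communicate()   # drains both pipes, avoiding the deadlock
            return proc.returncode, out

        queries = ["path Win32_PerfFormattedData_PerfOS_Memory get CommittedBytes"] * 20
        with ThreadPoolExecutor(max_workers=4) as pool:   # cap concurrent wmic processes
            results = list(pool.map(query_wmic, queries))
    If that isn't it, the remaining difference between the .exe and the service is usually the account and session the service runs under, which affects what wmic is allowed to query.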

    Read the article

  • Multiple Denial of Service vulnerabilities in Quagga

    - by chandan
    CVE              Description                               CVSSv2 Base Score
    CVE-2007-4826    Denial of Service (DoS) vulnerability     3.5
    CVE-2009-1572    Denial of Service (DoS) vulnerability     5.0
    CVE-2010-1674    Denial of Service (DoS) vulnerability     5.0
    CVE-2010-1675    Denial of Service (DoS) vulnerability     5.0
    CVE-2010-2948    Denial of Service (DoS) vulnerability     6.5
    CVE-2010-2949    Denial of Service (DoS) vulnerability     5.0
    Component: Quagga
    Product and Resolution: Solaris 10 (SPARC: 126206-09, X86: 126207-09); Solaris 11 11/11 SRU 4
    This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • Implementing Service Level Agreements in Enterprise Manager 12c for Oracle Packaged Applications

    - by Anand Akela
    Contributed by Eunjoo Lee, Product Manager, Oracle Enterprise Manager. Service Level Management, or SLM, is a key tool in the proactive management of any Oracle Packaged Application (e.g., E-Business Suite, Siebel, PeopleSoft, JD Edwards E1, Fusion Apps, etc.). The benefits of SLM are that administrators can utilize representative Application transactions, which are constantly and automatically running behind the scenes, to verify that all of the key application and technology components of an Application are available and performing to expectations. A single transaction can verify the availability and performance of the underlying Application Tech Stack in a much more efficient manner than by monitoring the same underlying targets individually. In this article, we'll be demonstrating SLM using Siebel Applications, but the same tools and processes apply to any of the Packaged Applications mentioned above. In this demonstration, we will log into the Siebel Application, navigate to the Contacts View, update a contact phone record, and then log out. This transaction exposes availability and performance metrics of multiple Siebel Servers, multiple Components and Component Groups, and the Siebel Database - in a single unified manner. We can then monitor and manage these transactions like any other target in EM 12c, including placing proactive alerts on them if the transaction is either unavailable or is not performing to required levels. The first step in the SLM process is recording the Siebel transaction. The following screenwatch demonstrates how to record a Siebel transaction using an EM tool called "OpenScript". A completed recording is called a "Synthetic Transaction". The second step in the SLM process is uploading the Synthetic Transaction into EM 12c, and creating Generic Service Tests. We can create a Generic Service Test to execute our synthetic transactions at regular intervals to evaluate the performance of various business flows. As these transactions are running periodically, it is possible to monitor the performance of the Siebel Application by evaluating the performance of the synthetic transactions. The process of creating a Generic Service Test is detailed in the next screenwatch. EM 12c provides a guided workflow for all of the key creation steps, including configuring the Service Test, uploading the Synthetic Test, determining the frequency of the Service Test, establishing beacons, and selecting performance and usage metrics, just to name a few. The third and final step in the SLM process is the creation of Service Level Agreements (SLA). Service Level Agreements allow Administrators to utilize the previously created Service Tests to specify expected service levels for Application availability, performance, and usage. SLAs can be created for different time periods and for different Service Tests. This last screenwatch demonstrates the process of creating an SLA, as well as highlights the Dashboards and Reports that Administrators can use to monitor Service Test results. Hopefully, this article provides you with a good starting point for creating Service Level Agreements for your E-Business Suite, Siebel, PeopleSoft, JD Edwards E1, or Fusion Applications. Enterprise Manager Cloud Control 12c, with the Application Management Suites, represents a quick and easy way to implement Service Level Management capabilities at customer sites.

    Read the article

  • VM can't ping LAN name but sees it via nslookup

    - by amphibient
    I am trying to connect to a network resource from my 10.04.4 VM (VMware Fusion) but the destination is unreachable by name. What is weird is that the name is visible in DNS:
        >nslookup my.name
        Server:         123.45.67.89
        Address:        123.45.67.89#53
        Non-authoritative answer:
        Name:   my.name
        Address: 10.20.30.40
    I can reach it (via ping) by the IP address (10.20.30.40) but not the name, and I thought that was weird because the DNS clearly resolves the name. What can I do to enable access to this resource via the name?
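    For what it's worth, ping resolves through the system resolver (the hosts line in /etc/nsswitch.conf plus the search domains in /etc/resolv.conf), while nslookup talks to the DNS server directly, so the two can disagree. A quick way to test the path ping actually uses, sketched in Python with the name from the post:
        import socket

        try:
            # getaddrinfo() follows the same resolver path as ping (nsswitch, resolv.conf)
            for *_, sockaddr in socket.getaddrinfo("my.name", None):
                print(sockaddr[0])
        except socket.gaierror as exc:
            print("system resolver failed:", exc)
    If this fails while nslookup succeeds, check whether the name needs a search domain appended in /etc/resolv.conf and whether the hosts: line in /etc/nsswitch.conf actually includes dns.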

    Read the article

  • Difference between two ways of installing tomcat as a service (Linux)

    - by varesa
    I am installing Tomcat on a Linux server and want it to be available as a service. I have found two different ways to achieve this. The first one is to copy daemon.sh from $CATALINA_HOME/bin to /etc/init.d, and the other one I have seen is to create a simple init script that calls $CATALINA_HOME/bin/startup.sh, etc. startup.sh calls catalina.sh. The contents of daemon.sh and startup.sh look very similar (at least for the env variables and stuff like that). daemon.sh calls jsvc in the end; catalina.sh calls java. What is the (practical) difference between using the two of these when setting up Tomcat as a service?

    Read the article

  • The Path to Best-In-Class Service Business Performance

    - by Charles Knapp
    What would it matter to offer your customers best-in-class service and support experiences? According to a new study, best-in-class companies enjoy margins that are nearly double the average, retain almost all of their customers each year, deliver annual revenue growth that is six times greater than average, and realize cost decreases rather than increases! What does it take to become best in class? Some of the keys are:
    - Engage customers effectively and consistently across all channels
    - Focus on mobility to improve reactive service performance
    - Continue to transition from primarily reactive to proactive and predictive service performance
    - Build the support structure for new services and service contracts
    - Construct an engaged service delivery team
    Join the Aberdeen Group, Oracle, Infosys, and Hyundai Capital as we highlight the key stages in the service transformation journey and reveal how Best-in-Class organizations are equipping themselves to thrive in this new era of service. Please join us for "Service Excellence and the Path to Business Transformation" -- this Thursday, October 25, 8:00 AM PDT | 11:00 AM EDT | 3:00 PM GMT | 4:00 PM BST.

    Read the article

  • android service onBind SecurityException

    - by Metalex
    I don't know why but in some devices my service isn't allowed to bind.
        java.lang.RuntimeException: Unable to create application mypackage.MyApplication: java.lang.SecurityException: Unable to find app for caller android.app.ApplicationThreadProxy@41680e78 (pid=16805) when binding service Intent { cmp=mypackage/.MyService }
            at android.app.ActivityThread.handleBindApplication(ActivityThread.java:4394)
            at android.app.ActivityThread.access$1300(ActivityThread.java:141)
            at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1294)
            at android.os.Handler.dispatchMessage(Handler.java:99)
            at android.os.Looper.loop(Looper.java:137)
            at android.app.ActivityThread.main(ActivityThread.java:5039)
            at java.lang.reflect.Method.invokeNative(Native Method)
            at java.lang.reflect.Method.invoke(Method.java:511)
            at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:793)
            at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:560)
            at dalvik.system.NativeStart.main(Native Method)
        Caused by: java.lang.SecurityException: Unable to find app for caller android.app.ApplicationThreadProxy@41680e78 (pid=16805) when binding service Intent { cmp=mypackage/.MyService }
            at android.os.Parcel.readException(Parcel.java:1425)
            at android.os.Parcel.readException(Parcel.java:1379)
            at android.app.ActivityManagerProxy.bindService(ActivityManagerNative.java:2720)
            at android.app.ContextImpl.bindService(ContextImpl.java:1431)
            at android.app.ContextImpl.bindService(ContextImpl.java:1407)
            at android.content.ContextWrapper.bindService(ContextWrapper.java:473)
            at mypackage.MyApplication.openService(MyApplication.java:151)
            at mypackage.MyApplication.onCreate(MyApplication.java:110)
            at android.app.Instrumentation.callApplicationOnCreate(Instrumentation.java:1000)
            at android.app.ActivityThread.handleBindApplication(ActivityThread.java:4391)
            ... 10 more
        java.lang.SecurityException: Unable to find app for caller android.app.ApplicationThreadProxy@41680e78 (pid=16805) when binding service Intent { cmp=mypackage/.MyService }
            at android.os.Parcel.readException(Parcel.java:1425)
            at android.os.Parcel.readException(Parcel.java:1379)
            at android.app.ActivityManagerProxy.bindService(ActivityManagerNative.java:2720)
            at android.app.ContextImpl.bindService(ContextImpl.java:1431)
            at android.app.ContextImpl.bindService(ContextImpl.java:1407)
            at android.content.ContextWrapper.bindService(ContextWrapper.java:473)
            at mypackage.MyApplication.openService(MyApplication.java:151)
            at mypackage.MyApplication.onCreate(MyApplication.java:110)
            at android.app.Instrumentation.callApplicationOnCreate(Instrumentation.java:1000)
            at android.app.ActivityThread.handleBindApplication(ActivityThread.java:4391)
            at android.app.ActivityThread.access$1300(ActivityThread.java:141)
            at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1294)
            at android.os.Handler.dispatchMessage(Handler.java:99)
            at android.os.Looper.loop(Looper.java:137)
            at android.app.ActivityThread.main(ActivityThread.java:5039)
            at java.lang.reflect.Method.invokeNative(Native Method)
            at java.lang.reflect.Method.invoke(Method.java:511)
            at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:793)
            at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:560)
            at dalvik.system.NativeStart.main(Native Method)
    Code from MyApplication:
        @Override
        public void onCreate() {
            super.onCreate();
            openService();
        }

        public void openService() {
            Intent service = new Intent(this, MyService.class);
            mConnection = new ServiceConnection() {
                @Override
                public void onServiceConnected(ComponentName name, IBinder service) {
                    mService = IMyService.Stub.asInterface(service);
                    if (mListener != null) {
                        mListener.onServiceStarted(mService);
                    }
                }

                @Override
                public void onServiceDisconnected(ComponentName cn) {
                    mService = null;
                }
            };
            bindService(service, mConnection, Context.BIND_AUTO_CREATE); // 151 line
        }
    Please help me! Thank you!

    Read the article

  • App.config in WCF Library and its Windows Service Host

    - by inutan
    Hello there, I have two services, TemplateService and TemplateReportService (both defined in one WCF Service Library), to be exposed to the client application, and I am trying to host these services under a Windows Service. Can anyone please guide me on whether the App.config in the Windows Service will be the same as the one in the WCF Library? Here is my app.config in the WCF Library:
        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.web>
            <compilation debug="true" />
          </system.web>
          <system.serviceModel>
            <services>
              <service behaviorConfiguration="ReportingComponentLibrary.TemplateServiceBehavior" name="ReportingComponentLibrary.TemplateService">
                <endpoint address="" binding="wsHttpBinding" contract="ReportingComponentLibrary.ITemplateService">
                  <identity>
                    <dns value="localhost" />
                  </identity>
                </endpoint>
                <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"></endpoint>
                <host>
                  <baseAddresses>
                    <add baseAddress="http://localhost:8080/ReportingComponentLibrary/TemplateService/" />
                  </baseAddresses>
                </host>
              </service>
              <service behaviorConfiguration="ReportingComponentLibrary.TemplateServiceBehavior" name="ReportingComponentLibrary.TemplateReportService">
                <endpoint address="" binding="wsHttpBinding" contract="ReportingComponentLibrary.ITemplateReportService">
                  <identity>
                    <dns value="localhost" />
                  </identity>
                </endpoint>
                <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
                <host>
                  <baseAddresses>
                    <add baseAddress="http://localhost:8080/ReportingComponentLibrary/TemplateReportService/" />
                  </baseAddresses>
                </host>
              </service>
            </services>
            <behaviors>
              <serviceBehaviors>
                <behavior name="ReportingComponentLibrary.TemplateServiceBehavior">
                  <serviceMetadata httpGetEnabled="True"/>
                  <serviceDebug includeExceptionDetailInFaults="True" />
                </behavior>
              </serviceBehaviors>
            </behaviors>
          </system.serviceModel>
        </configuration>
    So, will the App.config in my Windows Service (where I am hosting the above two services) be the same as above, or are there only some particular sections that I need to move? Please guide. Thank you!

    Read the article

  • Multiple NT service owned by the same program in Delphi

    - by Claudio
    I'm looking for Delphi sample code to develop a Win32 Windows service which can be installed many times (with a different name each time). The idea is to have 1 exe and 1 registry key, with 1 subkey for every service to be installed. I use the exe to install/run many services; every service takes its parameters from its registry subkey. Does anyone have sample code? Many thanks Claudio

    Read the article

  • Linux service and Source for cron job

    - by Sirish Kumar
    Hi, I am new to Linux and am writing a service in C++ which spawns multiple threads. I am starting the service by calling it from init.d, but how should I send the terminate signal to my application from the script, so that my service terminates all the threads and exits? Also, where can I find the source code for any Linux service, e.g. /etc.init.d/rc5.d/S14cron? It will be helpful in understanding how to implement a service.
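    The stop action of an init script conventionally sends SIGTERM to the daemon's pid (for example kill $(cat /var/run/myservice.pid), or via start-stop-daemon), and the service is expected to catch it, tell its threads to finish, and exit. The question's service is C++, where this is done with sigaction, but the shape of the pattern is short enough to sketch in Python:
        import signal
        import threading

        stop = threading.Event()

        def handle_term(signum, frame):
            stop.set()                      # ask every worker thread to wind down

        signal.signal(signal.SIGTERM, handle_term)   # the init script's kill lands here

        def worker():
            while not stop.is_set():
                stop.wait(1.0)              # do one unit of work, then re-check the flag

        threads = [threading.Thread(target=worker) for _ in range(4)]
        for t in threads:
            t.start()
        while not stop.is_set():
            signal.pause()                  # main thread sleeps until a signal arrives
        for t in threads:
            t.join()                        # exit cleanly once every thread returns
    As for reading existing services: the files in /etc/init.d/ are plain shell scripts you can open directly, and the daemons they start (cron, for instance) come from your distribution's source packages (on Debian/Ubuntu, apt-get source cron fetches them).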

    Read the article

  • Auto populate input based on file name with AngularJS

    - by LouieV
    I am playing around with AngularJS and have not been able to solve this problem. I have a view with a form to upload a file to a node server. So far I have managed to do this using some directives and a service. I allow the user to send a custom name with the POST data if they desire. What I want to accomplish is that when the user selects a file, the filename model auto-populates. My view looks like:
        <div>
          <input file-model="phpFile" type="file">
          <input name="filename" type="text" ng-model="filename">
          <button ng-click="send()">send</button>
        </div>
    file-model is my directive that allows the file to be assigned to a scope.
        myApp.directive('fileModel', ['$parse', function($parse) {
          return {
            restrict: 'A',
            link: function(scope, element, attrs) {
              var model = $parse(attrs.fileModel);
              var modelSetter = model.assign;
              element.bind('change', function() {
                scope.$apply(function() {
                  modelSetter(scope, element[0].files[0]);
                });
              });
            }
          };
        }]);
    The service:
        myApp.service('fileUpload', ['$http', function($http) {
          this.uploadFileToUrl = function(file, uploadUrl, optionals) {
            var fd = new FormData();
            fd.append('file', file);
            for (var key in file) {
              fd.append(key, file[key]);
            }
            for (var i = 0; i < optionals.length; i++) {
              fd.append(optionals[i].name, optionals[i].data);
            }
          };
        }]);
    Here, as you can see, I pass the file, append its properties, and append any optional properties. The controller is where I am having the trouble. I have tried $watch and using the file-model, but I get the same error either way.
        myApp.controller('AddCtrl', function($scope, $location, PEberry, fileUpload) {
          //$scope.$watch(function() {
          //  return $scope.phpFile;
          //}, function(newValue, oldValue) {
          //  $scope.filename = $scope.phpFile.name;
          //}, true);

          // if ($scope.phpFiles) {
          //   $scope.filename = $scope.phpFiles.name;
          // }

          $scope.send = function() {
            var uploadUrl = "/files";
            var file = $scope.phpFile;
            //var opts = [{ name: "uname", data: file.name }]
            fileUpload.uploadFileToUrl(file, uploadUrl);
          };
        });
    Thank you for your help!

    Read the article

  • Call RecognizerIntent from service

    - by Tobia Loschiavo
    Hi, I am working on an Android service. I need to call RecognizerIntent from a service in order to use the recognized text in the service. There is no startActivityForResult() method in the Service class, so I have a problem understanding how to achieve this task. Is it possible? Many thanks

    Read the article

  • When do I use Apache Kafka, Azure Service Bus, vs Azure Queues?

    - by makerofthings7
    I'm trying to understand the situations in which I'd use Apache Kafka, Azure Service Bus, or Azure Queues for high-scale message processing. Which is better for standard pub/sub situations, where multiple clients get a copy of the same message? Which is better for low-latency pub/sub with no durability? Which is better for "cooperating producer" and "competing consumer" setups? (And what does that mean?) I see a bit of overlap in function between Kafka, Service Bus, and Azure Queues.
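    On the terminology part of the question: "competing consumers" means several workers read from one queue and each message is processed by exactly one of them, while pub/sub means every subscriber receives its own copy of every message. A toy in-process Python sketch of the two shapes (no broker, purely to illustrate the difference):
        import queue
        import threading

        # Competing consumers: one shared queue, each message handled by exactly one worker.
        work = queue.Queue()

        def consumer(name):
            while True:
                msg = work.get()
                if msg is None:             # sentinel: shut this worker down
                    break
                print(name, "processed", msg)

        workers = [threading.Thread(target=consumer, args=("worker-%d" % i,)) for i in range(3)]
        for w in workers:
            w.start()
        for i in range(6):
            work.put("msg-%d" % i)
        for _ in workers:
            work.put(None)
        for w in workers:
            w.join()

        # Pub/sub fan-out: every subscriber has its own queue and gets a copy of every message.
        subscribers = [queue.Queue() for _ in range(3)]

        def publish(msg):
            for sub in subscribers:
                sub.put(msg)
    Roughly speaking, Azure Storage Queues and a single Kafka consumer group give you the first shape; Service Bus topics with subscriptions and multiple Kafka consumer groups give you the second, and each subscription or group can still use competing consumers internally.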

    Read the article

  • How to auto-install runlevel control for existing service/daemon?

    - by Johnny Utahh
    I need to install runlevel control, aka "rc" control (/etc/rc*.d and such), for a service/daemon (in this case bind9, a DNS service). bind9 came pre-installed on my 11.04 system, but without the aforementioned runlevel control. How can I easily (and preferably automatically) install the rc links for "compliant" services/daemons in /etc/init.d? (Hint: I have the answer, but can't post it yet due to insufficient rep.)

    Read the article
