Search Results

Search found 20515 results on 821 pages for 'wmi service'.


  • Need some critique on .NET/WCF SOA architecture plan

    - by user998101
    I am working on a refactoring of some services and would appreciate some critique on my general approach. I am working with three back-end data systems and need to expose an authenticated front-end API over http binding, JSON, and REST for internal apps as well as 3rd party integration. I've got a rough idea below that's a hybrid of what I have and where I intend to wind up. I intend to build guidance extensions to support this architecture so that devs can build this out quickly. Here's the current idea for our structure:

    Front-end WCF routing service (spread across multiple IIS servers via hardware load balancer)
      - Load balancing of services behind routing is handled within the routing service, probably round-robin
      - One of the services will be a token
      - Multiple bindings per-service exposed to address JSON, REST, and whatever else comes up later
      - All in/out is handled via POCO DTOs
      - Use Unity to scan for what services are available and expose them

    The front-end services behind the routing service do nothing more than expose the API and do conversion of DTO<->Entity
      - Unity injects the service implementation to allow mocking; AutoMapper handles DTO/Entity conversion
      - Invoke WF services where a response is required immediately
      - Queue to the ESB for async WF -- the ESB will invoke the WF later

    Business logic WF layer
      - Exposes the same API as the front-end services
      - Implements the business logic
      - Wraps transaction context where needed
      - Calls out to composite/atomic services

    Composite/atomic services
      - Exposed as WCF
      - One service per back-end system
      - Standard atomic CRUD operations plus composite operations
      - Supports transaction context

    The questions I have are: Is the separation of concerns outlined above beneficial? Current thought is that each layer below is its own project, except the back-end stuff, where each system gets one project. The project has a ServiceHost and all the services are under a Services folder. Interfaces live in a separate project at each layer. DTOs and Entities are in two separate projects under a shared folder. I am currently planning to build dedicated services for shared functionality such as logging, and to overload things like TraceListener to call those services. Is this a valid approach? Any other suggestions/comments?
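
    As one concrete reference point for the DTO<->Entity conversion layer described above, here is a minimal C# sketch, assuming AutoMapper's older static API; the Customer/CustomerDto/ICustomerRepository names are hypothetical and purely illustrative:

        using AutoMapper;

        // Hypothetical entity and DTO types; the real ones would live in the
        // shared Entities and DTO projects described in the question.
        public class Customer    { public int Id { get; set; } public string Name { get; set; } }
        public class CustomerDto { public int Id { get; set; } public string Name { get; set; } }

        public interface ICustomerRepository { Customer GetById(int id); }

        public class CustomerReadService
        {
            private readonly ICustomerRepository repository;

            // Unity injects the repository implementation, which keeps the
            // service mockable in tests.
            public CustomerReadService(ICustomerRepository repository)
            {
                this.repository = repository;
                // Normally registered once at application start-up, not per instance.
                Mapper.CreateMap<Customer, CustomerDto>();
            }

            // A front-end service method does nothing more than expose the API
            // and convert Entity -> DTO; the business logic stays in the WF layer.
            public CustomerDto GetCustomer(int id)
            {
                return Mapper.Map<Customer, CustomerDto>(repository.GetById(id));
            }
        }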

    Read the article

  • Uncompiled WCF on IIS7: The type could not be found

    - by Jimmy
    Hello, I've been trying to follow this tutorial for deploying a WCF sample to IIS. I can't get it to work. This is a hosted site, but I do have IIS Manager access to the server. However, in step 2 of the tutorial, I can't "create a new IIS application that is physically located in this application directory". I can't seem to find a menu item, context menu item, or what not to create a new application. I've been right-clicking everywhere like crazy and still can't figure out how to create a new app. I suppose that's probably the root issue, but I tried a few other things (described below) just in case that actually is not the issue.

    This is "deployed" at http://test.com.cws1.my-hosting-panel.com/IISHostedCalcService/Service.svc. The error says:

        The type 'Microsoft.ServiceModel.Samples.CalculatorService', provided as the Service attribute value in the ServiceHost directive, or provided in the configuration element system.serviceModel/serviceHostingEnvironment/serviceActivations could not be found.

    I also tried to create a virtual dir (IISHostedCalc) in dotnetpanel that points to IISHostedCalcService. When I navigate to http://test.com.cws1.my-hosting-panel.com/IISHostedCalc/Service.svc, there is a different error:

        This collection already contains an address with scheme http. There can be at most one address per scheme in this collection.

    As per the tutorial, there was no compiling involved; I just dropped the files on the server as follows inside the folder IISHostedCalcService:

        service.svc
        Web.config
        Service.cs

    service.svc contains:

        <%@ServiceHost language=c# Debug="true" Service="Microsoft.ServiceModel.Samples.CalculatorService"%>

    (I tried with quotes around the c# attribute, as this looks a little strange without quotes, but it made no difference.)

    Web.config contains:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.serviceModel>
            <services>
              <service name="Microsoft.ServiceModel.Samples.CalculatorService">
                <!-- This endpoint is exposed at the base address provided by the host:
                     http://localhost/servicemodelsamples/service.svc -->
                <endpoint address="" binding="wsHttpBinding"
                          contract="Microsoft.ServiceModel.Samples.ICalculator" />
                <!-- The mex endpoint is exposed at
                     http://localhost/servicemodelsamples/service.svc/mex -->
                <endpoint address="mex" binding="mexHttpBinding"
                          contract="IMetadataExchange" />
              </service>
            </services>
          </system.serviceModel>
          <system.web>
            <customErrors mode="Off"/>
          </system.web>
        </configuration>

    Service.cs contains:

        using System;
        using System.ServiceModel;

        namespace Microsoft.ServiceModel.Samples
        {
            [ServiceContract]
            public interface ICalculator
            {
                [OperationContract] double Add(double n1, double n2);
                [OperationContract] double Subtract(double n1, double n2);
                [OperationContract] double Multiply(double n1, double n2);
                [OperationContract] double Divide(double n1, double n2);
            }

            public class CalculatorService : ICalculator
            {
                public double Add(double n1, double n2)      { return n1 + n2; }
                public double Subtract(double n1, double n2) { return n1 - n2; }
                public double Multiply(double n1, double n2) { return n1 * n2; }
                public double Divide(double n1, double n2)   { return n1 / n2; }
            }
        }
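
    One detail worth double-checking against the file list above (offered as a likely cause, not a confirmed fix): in a no-compile IIS deployment, ASP.NET only compiles source files placed under an App_Code subfolder, so the layout the tutorial intends is:

        IISHostedCalcService\
            service.svc
            Web.config
            App_Code\
                Service.cs

    With Service.cs sitting directly next to service.svc, CalculatorService is never compiled, which would produce exactly the "type could not be found" error quoted above.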

    Read the article

  • Deploying WCF Tutorial App on IIS7: "The type could not be found"

    - by Jimmy
    Hello, I've been trying to follow this tutorial for deploying a WCF sample to IIS. I can't get it to work. This is a hosted site, but I do have IIS Manager access to the server. However, in step 2 of the tutorial, I can't "create a new IIS application that is physically located in this application directory". I can't seem to find a menu item, context menu item, or what not to create a new application. I've been right-clicking everywhere like crazy and still can't figure out how to create a new app. I suppose that's probably the root issue, but I tried a few other things (described below) just in case that actually is not the issue. Here is a picture of what I see in IIS Manager, in case my words don't do it justice:

    This is "deployed" at http://test.com.cws1.my-hosting-panel.com/IISHostedCalcService/Service.svc. The error says:

        The type 'Microsoft.ServiceModel.Samples.CalculatorService', provided as the Service attribute value in the ServiceHost directive, or provided in the configuration element system.serviceModel/serviceHostingEnvironment/serviceActivations could not be found.

    I also tried to create a virtual dir (IISHostedCalc) in dotnetpanel that points to IISHostedCalcService. When I navigate to http://test.com.cws1.my-hosting-panel.com/IISHostedCalc/Service.svc, there is a different error:

        This collection already contains an address with scheme http. There can be at most one address per scheme in this collection.

    Interestingly enough, if I click on View Applications, it seems like the virtual directory is an application (see image below)... although, as per the error message above, it doesn't work.

    As per the tutorial, there was no compiling involved; I just dropped the files on the server as follows inside the folder IISHostedCalcService:

        service.svc
        Web.config
        App_Code\
            Service.cs

    service.svc contains:

        <%@ServiceHost language=c# Debug="true" Service="Microsoft.ServiceModel.Samples.CalculatorService"%>

    (I tried with quotes around the c# attribute, as this looks a little strange without quotes, but it made no difference.)

    Web.config contains:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.serviceModel>
            <services>
              <service name="Microsoft.ServiceModel.Samples.CalculatorService">
                <!-- This endpoint is exposed at the base address provided by the host:
                     http://localhost/servicemodelsamples/service.svc -->
                <endpoint address="" binding="wsHttpBinding"
                          contract="Microsoft.ServiceModel.Samples.ICalculator" />
                <!-- The mex endpoint is exposed at
                     http://localhost/servicemodelsamples/service.svc/mex -->
                <endpoint address="mex" binding="mexHttpBinding"
                          contract="IMetadataExchange" />
              </service>
            </services>
          </system.serviceModel>
          <system.web>
            <customErrors mode="Off"/>
          </system.web>
        </configuration>

    Service.cs contains:

        using System;
        using System.ServiceModel;

        namespace Microsoft.ServiceModel.Samples
        {
            [ServiceContract]
            public interface ICalculator
            {
                [OperationContract] double Add(double n1, double n2);
                [OperationContract] double Subtract(double n1, double n2);
                [OperationContract] double Multiply(double n1, double n2);
                [OperationContract] double Divide(double n1, double n2);
            }

            public class CalculatorService : ICalculator
            {
                public double Add(double n1, double n2)      { return n1 + n2; }
                public double Subtract(double n1, double n2) { return n1 - n2; }
                public double Multiply(double n1, double n2) { return n1 * n2; }
                public double Divide(double n1, double n2)   { return n1 / n2; }
            }
        }

    Read the article

  • SQL Monitor’s data repository: Alerts

    - by Chris Lambrou
    In my previous post, I introduced the SQL Monitor data repository, and described how the monitored objects are stored in a hierarchy in the data schema, in a series of tables with a _Keys suffix. In this post I had planned to describe how the actual data for the monitored objects is stored in corresponding tables with _StableSamples and _UnstableSamples suffixes. However, I’m going to postpone that until my next post, as I’ve had a request from a SQL Monitor user to explain how alerts are stored.

    In the SQL Monitor data repository, alerts are stored in tables belonging to the alert schema, which contains the following five tables:

        alert.Alert
        alert.Alert_Cleared
        alert.Alert_Comment
        alert.Alert_Severity
        alert.Alert_Type

    In this post, I’m only going to cover the alert.Alert and alert.Alert_Type tables. I may cover the other three tables in a later post. The most important table in this schema is alert.Alert, as each row in this table corresponds to a single alert. So let’s have a look at it.

        SELECT TOP 100 AlertId, AlertType, TargetObject, [Read], SubType
        FROM alert.Alert
        ORDER BY AlertId DESC;

         #  | AlertId | AlertType | TargetObject                                                                                     | Read | SubType
         1  | 65550   | 39        | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:,                  | 1    | 0
         2  | 65549   | 38        | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:,                    | 1    | 0
         3  | 65548   | 18        | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0    | 0
         4  | 65547   | 15        | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0    | 0
         5  | 65546   | 14        | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0    | 0
         6  | 65545   | 18        | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,  | 0    | 0
         7  | 65544   | 15        | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,  | 0    | 0
         8  | 65543   | 14        | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,  | 0    | 0
         9  | 65542   | 18        | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,             | 0    | 0
        10  | 65541   | 14        | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,             | 0    | 0
        11  | …

    So what are we seeing here, then? Well, AlertId is an auto-incrementing identity column, so ORDER BY AlertId DESC ensures that we see the most recent alerts first. AlertType indicates the type of each alert, such as Job failed (6), Backup overdue (14) or Long-running query (12). The TargetObject column indicates which monitored object the alert is associated with. The Read column acts as a flag to indicate whether or not the alert has been read. And finally the SubType column is used in the case of a Custom metric (40) alert, to indicate which custom metric the alert pertains to.

    Okay, now let’s look at some of those columns in more detail. The AlertType column is an easy one to start with, and it brings us nicely to the next table, alert.Alert_Type. Let’s have a look at what’s in this table:

        SELECT AlertType, Event, Monitoring, Name, Description
        FROM alert.Alert_Type
        ORDER BY AlertType;

         #  | AlertType | Event | Monitoring | Name | Description
         1  | 1  | 0 | 0 | Processor utilization | Processor utilization (CPU) on a host machine stays above a threshold percentage for longer than a specified duration
         2  | 2  | 1 | 0 | SQL Server error log entry | An error is written to the SQL Server error log with a severity level above a specified value.
         3  | 3  | 1 | 0 | Cluster failover | The active cluster node fails, causing the SQL Server instance to switch nodes.
         4  | 4  | 1 | 0 | Deadlock | SQL deadlock occurs.
         5  | 5  | 0 | 0 | Processor under-utilization | Processor utilization (CPU) on a host machine remains below a threshold percentage for longer than a specified duration
         6  | 6  | 1 | 0 | Job failed | A job does not complete successfully (the job returns an error code).
         7  | 7  | 0 | 0 | Machine unreachable | Host machine (Windows server) cannot be contacted on the network.
         8  | 8  | 0 | 0 | SQL Server instance unreachable | The SQL Server instance is not running or cannot be contacted on the network.
         9  | 9  | 0 | 0 | Disk space | Disk space used on a logical disk drive is above a defined threshold for longer than a specified duration.
        10  | 10 | 0 | 0 | Physical memory | Physical memory (RAM) used on the host machine stays above a threshold percentage for longer than a specified duration.
        11  | 11 | 0 | 0 | Blocked process | SQL process is blocked for longer than a specified duration.
        12  | 12 | 0 | 0 | Long-running query | A SQL query runs for longer than a specified duration.
        13  | 14 | 0 | 0 | Backup overdue | No full backup exists, or the last full backup is older than a specified time.
        14  | 15 | 0 | 0 | Log backup overdue | No log backup exists, or the last log backup is older than a specified time.
        15  | 16 | 0 | 0 | Database unavailable | Database changes from Online to any other state.
        16  | 17 | 0 | 0 | Page verification | Torn Page Detection or Page Checksum is not enabled for a database.
        17  | 18 | 0 | 0 | Integrity check overdue | No entry for an integrity check (DBCC DBINFO returns no date for dbi_dbccLastKnownGood field), or the last check is older than a specified time.
        18  | 19 | 0 | 0 | Fragmented indexes | Fragmentation level of one or more indexes is above a threshold percentage.
        19  | 24 | 0 | 0 | Job duration unusual | The duration of a SQL job deviates from its baseline duration by more than a threshold percentage.
        20  | 25 | 0 | 1 | Clock skew | System clock time on the Base Monitor computer differs from the system clock time on a monitored SQL Server host machine by a specified number of seconds.
        21  | 27 | 0 | 0 | SQL Server Agent Service status | The SQL Server Agent Service status matches the status specified.
        22  | 28 | 0 | 0 | SQL Server Reporting Service status | The SQL Server Reporting Service status matches the status specified.
        23  | 29 | 0 | 0 | SQL Server Full Text Search Service status | The SQL Server Full Text Search Service status matches the status specified.
        24  | 30 | 0 | 0 | SQL Server Analysis Service status | The SQL Server Analysis Service status matches the status specified.
        25  | 31 | 0 | 0 | SQL Server Integration Service status | The SQL Server Integration Service status matches the status specified.
        26  | 33 | 0 | 0 | SQL Server Browser Service status | The SQL Server Browser Service status matches the status specified.
        27  | 34 | 0 | 0 | SQL Server VSS Writer Service status | The SQL Server VSS Writer status matches the status specified.
        28  | 35 | 0 | 1 | Deadlock trace flag disabled | The monitored SQL Server’s trace flag cannot be enabled.
        29  | 36 | 0 | 0 | Monitoring stopped (host machine credentials) | SQL Monitor cannot contact the host machine because authentication failed.
        30  | 37 | 0 | 0 | Monitoring stopped (SQL Server credentials) | SQL Monitor cannot contact the SQL Server instance because authentication failed.
        31  | 38 | 0 | 0 | Monitoring error (host machine data collection) | SQL Monitor cannot collect data from the host machine.
        32  | 39 | 0 | 0 | Monitoring error (SQL Server data collection) | SQL Monitor cannot collect data from the SQL Server instance.
        33  | 40 | 0 | 0 | Custom metric | The custom metric value has passed an alert threshold.
        34  | 41 | 0 | 0 | Custom metric collection error | SQL Monitor cannot collect custom metric data from the target object.
    Basically, alert.Alert_Type is just a big reference table containing information about the 34 different alert types supported by SQL Monitor (note that the largest id is 41, not 34 – some alert types have been retired since SQL Monitor was first developed). The Name and Description columns are self evident, and I’m going to skip over the Event and Monitoring columns as they’re not very interesting. The AlertType column is the primary key, and is referenced by the AlertType column in the alert.Alert table. As such, we can rewrite our earlier query to join these two tables, in order to provide a more readable view of the alerts:

        SELECT TOP 100 AlertId, Name, TargetObject, [Read], SubType
        FROM alert.Alert a JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
        ORDER BY AlertId DESC;

         #  | AlertId | Name                                            | TargetObject                                                                                     | Read | SubType
         1  | 65550   | Monitoring error (SQL Server data collection)   | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:,                  | 0    | 0
         2  | 65549   | Monitoring error (host machine data collection) | 7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:,                    | 0    | 0
         3  | 65548   | Integrity check overdue                         | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0    | 0
         4  | 65547   | Log backup overdue                              | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0    | 0
         5  | 65546   | Backup overdue                                  | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings, | 0    | 0
         6  | 65545   | Integrity check overdue                         | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,  | 0    | 0
         7  | 65544   | Log backup overdue                              | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,  | 0    | 0
         8  | 65543   | Backup overdue                                  | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,  | 0    | 0
         9  | 65542   | Integrity check overdue                         | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,             | 0    | 0
        10  | 65541   | Backup overdue                                  | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,             | 0    | 0

    Okay, the next column to discuss in the alert.Alert table is TargetObject. Oh boy, this one’s a bit tricky! The TargetObject of an alert is a serialized string representation of the position in the monitored object hierarchy of the object to which the alert pertains. The serialization format is somewhat convenient for parsing in the C# source code of SQL Monitor, and has some helpful characteristics, but it’s probably very awkward to manipulate in T-SQL. I could document the serialization format here, but it would be very dry reading, so perhaps it’s best to consider an example from the table above.

    Have a look at the alert with an AlertId of 65543. It’s a Backup overdue alert for the SqlMonitorData database running on the default instance of granger, my laptop. Each different alert type is associated with a specific type of monitored object in the object hierarchy (I described the hierarchy in my previous post). The Backup overdue alert is associated with databases, whose position in the object hierarchy is root → Cluster → SqlServer → Database. The TargetObject value identifies the target object by specifying the key properties at each level in the hierarchy, thus:

        Cluster:   Name = "granger"
        SqlServer: Name = "" (an empty string, denoting the default instance)
        Database:  Name = "SqlMonitorData"

    Well, look at the actual TargetObject value for this alert: "7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,".
    It is indeed composed of three parts, one for each level in the hierarchy:

        Cluster:   "7:Cluster,1,4:Name,s7:granger,"
        SqlServer: "9:SqlServer,1,4:Name,s0:,"
        Database:  "8:Database,1,4:Name,s14:SqlMonitorData,"

    Each part is handled in exactly the same way, so let’s concentrate on the first part, "7:Cluster,1,4:Name,s7:granger,". It comprises the following:

        "7:Cluster," – This identifies the level in the hierarchy.
        "1," – This indicates how many different key properties there are to uniquely identify a cluster (we saw in my last post that each cluster is identified by a single property, its Name).
        "4:Name,s7:granger," – This represents the Name property, and its corresponding value, granger. It’s split up like this:
            "4:Name," – Indicates the name of the key property.
            "s" – Indicates the type of the key property; in this case, it’s a string.
            "7:granger," – Indicates the value of the property.

    At this point, you might be wondering about the format of some of these strings. Why is the string "Cluster" stored as "7:Cluster,"? Well, an encoding scheme is used, which consists of the following:

        "7" – This is the length of the string "Cluster".
        ":" – This is a delimiter between the length of the string and the actual string’s contents.
        "Cluster" – This is the string itself. 7 characters.
        "," – This is a final terminating character that indicates the end of the encoded string.

    You can see that "4:Name,", "8:Database," and "14:SqlMonitorData," also conform to the same encoding scheme. In the example above, the "s" character is used to indicate that the value of the Name property is a string. If you explore the TargetObject property of alerts in your own SQL Monitor data repository, you might find other characters used for other non-string key property values. The different value types you might possibly encounter are as follows:

        "I" – Denotes a bigint value. For example, "I65432,".
        "g" – Denotes a GUID value. For example, "g32116732-63ae-4ab5-bd34-7dfdfb084c18,".
        "d" – Denotes a datetime value. For example, "d634815384796832438,". The value is stored as a bigint, rather than a native SQL datetime value. I’ll describe how datetime values are handled in the SQL Monitor data repository in a future post.

    I suggest you have a look at the alerts in your own SQL Monitor data repository for further examples, so you can see how the TargetObject values are composed for each of the different types of alert. Let me give one further example, though, that represents a Custom metric alert, as this will help in describing the final column of interest in the alert.Alert table, SubType. Let me show you the alert I’m interested in:

        SELECT AlertId, a.AlertType, Name, TargetObject, [Read], SubType
        FROM alert.Alert a JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
        WHERE AlertId = 65769;

         #  | AlertId | AlertType | Name          | TargetObject                                                                                                           | Read | SubType
         1  | 65769   | 40        | Custom metric | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2, | 0    | 2

    An AlertType value of 40 corresponds to the Custom metric alert type. The Name taken from the alert.Alert_Type table is simply Custom metric, but this doesn’t tell us anything about the specific custom metric that this alert pertains to. That’s where the SubType value comes in. For custom metric alerts, this provides us with the Id of the specific custom alert definition that can be found in the settings.CustomAlertDefinitions table.
    I don’t really want to delve into custom alert definitions yet (maybe in a later post), but an extra join in the previous query shows us that this alert pertains to the CPU pressure (avg runnable task count) custom metric alert.

        SELECT AlertId, a.AlertType, at.Name, cad.Name AS CustomAlertName, TargetObject, [Read], SubType
        FROM alert.Alert a
        JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
        JOIN settings.CustomAlertDefinitions cad ON a.SubType = cad.Id
        WHERE AlertId = 65769;

         #  | AlertId | AlertType | Name          | CustomAlertName                         | TargetObject                                                                                                           | Read | SubType
         1  | 65769   | 40        | Custom metric | CPU pressure (avg runnable task count)  | 7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2, | 0    | 2

    The TargetObject value in this case breaks down like this:

        "7:Cluster,1,4:Name,s7:granger," – Cluster named "granger".
        "9:SqlServer,1,4:Name,s0:," – SqlServer named "" (the default instance).
        "8:Database,1,4:Name,s6:master," – Database named "master".
        "12:CustomMetric,1,8:MetricId,I2," – Custom metric with an Id of 2.

    Note that the hierarchy for a custom metric is slightly different compared to the earlier Backup overdue alert. It’s root → Cluster → SqlServer → Database → CustomMetric. Also notice that, unlike Cluster, SqlServer and Database, the key property for CustomMetric is called MetricId (not Name), and the value is a bigint (not a string).

    Finally, delving into the custom metric tables is beyond the scope of this post, but for the sake of avoiding any future confusion, I’d like to point out that whilst the SubType references a custom alert definition, the MetricId value embedded in the TargetObject value references a custom metric definition. Although in this case both the custom metric definition and custom alert definition share the same Id value of 2, this is not generally the case.

    Okay, that’s enough for now, not least because as I’m typing this, it’s almost 2am, I have to go to work tomorrow, and my alarm is set for 6am – eek! In my next post, I’ll either cover the remaining three tables in the alert schema, or I’ll delve into the way SQL Monitor stores its monitoring data, as I’d originally planned to cover in this post.
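
    Since the encoding is now fully described, here is a rough, unofficial C# sketch of a TargetObject parser (this is not SQL Monitor's own code; it assumes the "<length>:<content>," encoding for names and string values, and raw-up-to-comma encoding for "I", "g" and "d" values, exactly as described above):

        using System;

        static class TargetObjectParser
        {
            // Reads one "<length>:<content>," token starting at pos.
            static string ReadToken(string s, ref int pos)
            {
                int colon = s.IndexOf(':', pos);
                int len = int.Parse(s.Substring(pos, colon - pos));
                string content = s.Substring(colon + 1, len);
                pos = colon + 1 + len + 1;     // skip the content and the trailing ','
                return content;
            }

            public static void Parse(string target)
            {
                int pos = 0;
                while (pos < target.Length)
                {
                    string level = ReadToken(target, ref pos);          // e.g. "Cluster"
                    int comma = target.IndexOf(',', pos);
                    int propCount = int.Parse(target.Substring(pos, comma - pos));
                    pos = comma + 1;
                    for (int i = 0; i < propCount; i++)
                    {
                        string name = ReadToken(target, ref pos);       // e.g. "Name"
                        char type = target[pos++];                      // 's', 'I', 'g' or 'd'
                        string value;
                        if (type == 's')
                        {
                            value = ReadToken(target, ref pos);         // length-prefixed string
                        }
                        else
                        {
                            int end = target.IndexOf(',', pos);         // non-string values run up to ','
                            value = target.Substring(pos, end - pos);
                            pos = end + 1;
                        }
                        Console.WriteLine("{0}.{1} = \"{2}\" ({3})", level, name, value, type);
                    }
                }
            }
        }

    Feeding it the Backup overdue example prints Cluster.Name = "granger", SqlServer.Name = "" and Database.Name = "SqlMonitorData":

        TargetObjectParser.Parse(
            "7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,");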

    Read the article

  • Transformation of Product Management in Telecommunications for Rapid Launch of Next Generation Products

    - by raul.goycoolea
    The Telecom industry continues to evolve through disruptive products, uncertain markets, shorter product lifecycles and convergence of technologies. Today’s market has moved from network centric to consumer centric and focuses primarily on the customer experience. This has resulted in several product management challenges, such as increased complexity and volume of offerings, creating product variants, accelerating time-to-market, the ability to provide multiple product views for varied stakeholders, leveraging OSS intelligence in the BSS layer, product co-creation, and increasing audit and security concerns for service providers. This document discusses how enterprise product management, enabled by PLM-based product catalogue solutions, helps to launch next generation products rapidly in the context of the Telecommunication Industry.

    1.0. Introduction

    Figure 1: Business Scenario

    Modern business demands the launch of complex products in a very short timeframe, and effecting changes in the price plan faster without IT intervention. One of the key transformation initiatives companies are focusing on is in the area of product management transformation and operational efficiency improvement. As part of these initiatives, companies are investing in best-in-class COTS-based Product Management solutions developed on industry-wide standards.

    The new COTS packages are planned to integrate with existing or new B/OSS systems to provide a strategic end-to-end agile solution for reduced time-to-market and order journey time. In addition, system rationalization is being undertaken to phase out legacy systems and migrate to strategic systems.

    2.0. An Overview of Product Management in Telecom

    Product data in telecom is multi-dimensional and difficult to manage. It has increased significantly due to the complexity of the product, product offerings on the converged network, the increased volume of offerings, bundled offering structures and ever increasing regulatory requirements.

    In addition, the shrinking product lifecycle in telecom makes it difficult to manage the dynamic product data. Mergers and acquisitions coupled with organic growth pose major challenges in product portfolio management. It is a roadblock in the journey towards becoming an agile organization.
    Figure 2: Complexity in Product Management

    'Network Technology' is the new dimension in telecom product management, where the same products are realized through different networks, i.e. from siloed networks to a converged network. Consequently, the product solution is different.

    Figure 3: Current Scenario - Pain Points in Product Management

    The major business implications arising out of the current scenario are slow time-to-market and an inefficient process that affects innovation.

    3.0. Transformation of Next Generation Product Management

    Companies must focus on their Product Management Transformation Journey in the areas of:

    - Management of a single truth of product information across the organization/geographies, which is currently managed in heterogeneous systems
    - Management of the Intellectual Property (IP) on the product concept, and partnership in the design of discrete components to integrate into the system
    - Leveraging structured and unstructured product data within the extended enterprise to extract consumer insights and drive innovation
    - Management of effective operational separation to comply with regulatory bodies
    - Reuse of existing designs, adding relevant features such as value-added services to enable effective product bundling

    Figure 4: Next generation needs

    PLM-based Enterprise Product Catalogue solutions efficiently address the above requirements and act as an enabler of product management transformation and rapid product launch.

    4.0. PLM-based Enterprise Product Management

    Figure 5: PLM-based Enterprise Product Mastering

    Enterprise Product Management (EPM) enables the business to manage complex product attributes and data in complex environments. Product Mastering helps create a 'single view' of the product by creating a business-driven, IT-supported environment where a global 'single truth record' is created, managed and reused.

    4.1 The Business Case for Telco PLM-based Solutions for Enterprise Product Management

    - Telco PLM-based Product Mastering solutions provide a centralized authoring environment for product definition and control of all product data and rules
    - PLM packages are designed to support multiple perspectives of product data (ordering perspective, billing perspective, provisioning perspective)
    - They maintain relationships/links between different elements of the entire product definition
    - Telco PLM packages are specialized in next generation lifecycle management requirements of products (such as revision and state management, test and release management, role management and impact analysis)
    - They take into consideration all aspects of OSS product requirements, compared to CRM product catalogue solutions where the product data managed is mostly order oriented and transactional
    - The new breed of Telco PLM packages is designed with 'open' standards such as SID and eTOM; they are interoperable and support integration frameworks such as subscription and notification
    - Telco PLM packages have developed good collaboration frameworks to integrate suppliers and partners into the product development value chain

    4.2 Various Architectures/Approaches for Product Mastering Using Telco PLM Systems

    4.2.a Single Central Product Management (Mastering) Approach

    Figure 6: Single Central Product Management (Master) Approach

    This approach is implemented across verticals such as aerospace and automotive.
    It focuses on a physically centralized product master on which other sources depend. The product definition data (product bundles, service bundles, price plans, offers and discounts, product configuration rules and market campaigns) is created and maintained physically in a centralized environment. In addition, the product definition/authoring environment is centralized. The existing legacy product definition data available in the CRM product catalogue, the billing catalogue and the legacy product catalogue is migrated to the centralized PLM-based Enterprise Product Management solution.

    Architectural changes must be made in the existing business landscape of applications to create and revise data, because the applications have to refer to the central repository for approvals and validation of product configurations. This is achieved by modifying how the applications write data, or by adapting the applications to use the rules to be managed and published.

    Complete product configuration validation is done in the enterprise/central product catalogue, and the final configuration is sent to the B/OSS systems through the SOA-compliant product distribution architecture. This approach/architecture enables greater control in terms of product data management and product data governance.

    4.2.b Federated Product Management (Mastering) Architecture

    Figure 7: Federated Product Management (Mastering) Architecture

    In the federated product mastering approach, the basic unique product definition data (product id, description, product hierarchy, basic price plans and simple product design rules) is centrally created and maintained, while the advanced product definition (product bundling, promotions, offers and discount plans) is created in the respective downstream OSS systems.

    For example, basic product definitions such as attributes, product hierarchy and basic price plans are created and maintained in the Enterprise/Central product reference catalogue and distributed to downstream OSS systems. The respective downstream OSS systems build product bundles, promotions and advanced price plans over the basic product definition, and master the advanced product definition. The central reference database accesses the other source product master data and assembles a point-in-time consolidated view of the product. This approach is typically adopted in some merger and acquisition scenarios where there is a low probability of a central physical authority managing the data. In addition, the migration effort in this case is minimal and there are no big architectural changes to the organization's application landscape. However, this approach will not result in better product data management and data governance.

    5.0 Customer Scenario – Before EPC Deployment

    A leading global telecommunications service provider wanted to launch quad play and triple play service offerings in the shortest possible lead time. The service provider was offering Broadband and VoIP services to customers. The company wanted to reuse a majority of the Broadband services and price plans and bundle them with new wireless and IPTV services for quad play and triple play.
    The challenges in launching the new service offerings were:

    Figure 8: Triple Play Plan

    - Broadband product data was stored in multiple product catalogues (CRM catalogue, billing catalogue, spreadsheets)
    - Product managers spent a lot of time performing tasks involving duplication or re-keying of data; manual effort caused errors, cost and time over-runs
    - There was no effective product and price data governance mechanism; price change issues arising from the lack of data consistency across systems resulted in leakage of customer value and revenue
    - Product data had re-usability issues and was not in a structured format, which resulted in uncontrolled product portfolio creation and product management issues
    - The lack of an enterprise product model resulted in product distribution challenges and thus delays in product launch
    - Designers were constrained by existing legacy product management solutions in modelling product/service requirements and product configuration rules such as upgrading, downgrading and cross selling

    5.1 Customer Scenario - After EPC Deployment

    Figure 9: SOA-based end-to-end EPC Solution

    The company deployed PLM-based Enterprise Product Catalogue solutions to launch the quad play service after evaluating various product catalogues. The broadband product offering, service and price data were migrated to the new system, and the product and price plan hierarchy for new offerings was created using the entities defined in the Enterprise Product Model. Supplier product catalogue data such as routers and set-top boxes was loaded onto the new solution through a SOA-based web service. Price plans and configuration rules were built in the new system. The validated final product configurations were extracted from the product catalogue in a SID format and distributed to the downstream B/OSS systems through exposed SOA-based web services. The transformations required for the B/OSS systems were handled using the transformation layer that is part of the solution.

    6.0 How PLM Enabled Product Management Transformation

    Figure 10: Product Management Transformation

    The PLM-based Product Catalogue Solution helped the customer reduce the product launch cycle time by 30% and enabled the transformation of Product Management for next generation services.

    7.0 Conclusion

    On the one hand, the telecom industry is undergoing changes due to disruptions, uncertain product markets and increased complexity of products. On the other hand, ARPU is decreasing year-on-year. Communications Service Providers are embarking on convergence, bundled service offerings, flexibility to cross-sell and up-sell, introducing new value-added services, and leveraging Web 2.0 concepts and network capabilities. Consequently, large scale IT transformation initiatives to improve ARPU, supporting network and business transformations, are a business imperative. Product Management has become a focus area. Companies are investing in best-in-class COTS solutions to reduce time-to-market, ensure rapid service delivery and improve operational efficiency. An efficient PLM-based enterprise product mastering solution plays a key role in achieving zero touch automation and rapid product launch.

    Read the article

  • Handling HumanTask attachments in Oracle BPM 11g PS4FP+ (II)

    - by ccasares
    Retrieving uploaded attachments - UCM -

    As stated in my previous blog entry, Oracle BPM 11g 11.1.1.5.1 (aka PS4FP) introduced a new cool feature whereby you can use Oracle WebCenter Content (previously known as Oracle UCM) as the repository for the human task attached documents. For more information about how to use or enable this feature, have a look here. The attachment scope (either TASK or PROCESS) also applies to UCM-attachments.

    But even with this other feature, one question might arise when using UCM attachments: how can I get them from within the process? The first answer would be to use the same getTaskAttachmentContents() XPath function already explained in my previous blog entry. In fact, that's the way it should be. But in Oracle BPM 11g 11.1.1.5.1 (PS4FP) and 11.1.1.6.0 (PS5) there's a bug that prevents you from doing that: if you invoke that function against a UCM-attachment, you'll get a null content response (bug#13907552), even if the attachment was correctly uploaded. While this bug gets fixed, next I will show a workaround that lets me retrieve the UCM-attached documents from within a BPM process. Besides, the sample will show how to interact with the WCC API from within a BPM process.

    Aside note: I suggest you read my previous blog entry about Human Task attachments, where I briefly describe some concepts that are used next, such as the execData/attachment[] structure.

    Sample Process

    I will be using the following sample process: a dummy UserTask using the "HumanTask2" Human Task, followed by an Embedded Subprocess that will retrieve the attachments payload. In this case, and here's the key point of the sample, we will retrieve such payload using the WebCenter Content WebService API (IDC), and once retrieved, we will write each attachment back to a file on the server using a File Adapter service.

    In detail: we will use the same attachmentCollection XSD structure and the same BusinessObject definition as in the previous blog entry. However, we create a separate variable, named attachmentsUCM, based on that BusinessObject. We will still need to keep a copy of the HumanTask output's execData structure, so we create a new variable of type TaskExecutionData (a different one than that used for non-UCM attachments). As in the non-UCM attachments flow, in the output tab of the UserTask mapping, we'll keep a copy of the execData structure.

    Now we get into the embedded subprocess that will retrieve the attachments' payload. First, using an XSLT transformation, we feed the attachmentsUCM variable with the following information:

    - The name of each attachment (from the execData/attachment/name element)
    - The WebCenter Content ID of the uploaded attachment. This info is stored in the execData/attachment/URI element with the format ecm://<id>. As we just want the numeric <id>, we need to get rid of the protocol prefix ("ecm://"). We do so with some XPath functions as detailed below.

    We, again, set the target payload element with an empty string, to get the <payload></payload> tag created. The complete XSLT transformation is shown below. Remember that we're using the XSLT for-each node to create as many target structures as necessary.

    Once we have fed the attachmentsUCM structure, so that it now contains the name of each of the attachments along with each WCC unique id (dID), it is time to iterate through it and get the payload.
    Therefore we will use a new embedded subprocess of type MultiInstance that will iterate over the attachmentsUCM/attachment[] element. In each iteration we will use a Service activity that invokes the WCC API through a WebService. Follow these steps to create and configure the Partner Link needed:

    - Log in to the WCC console with an administrator user (i.e. weblogic).
    - Go to the Administration menu and click on the "Soap Wsdls" link. We will use the GetFile service to retrieve a file based on its dID, so we'll need that service's WSDL definition, which can be downloaded by clicking the GetFile link.
    - Save the WSDL file in your JDev project folder.
    - In the BPM project's composite view, drag & drop a WebService adapter to create a new External Reference, based on the just-added GetFile.wsdl. Name it UCM_GetFile.
    - WCC services are secured through basic HTTP authentication, so we need to enable the just-created reference for that: right-click the reference and click on Configure WS Policies; under the Security section, click "+" to add the "oracle/wss_username_token_client_policy" policy.
    - The last step is to set the credentials for the security policy. For the sample we will use the admin user for WCC (weblogic/welcome1). Open the composite.xml file and select the Source view, search for the UCM_GetFile entry and add the following highlighted elements into it:

        <reference name="UCM_GetFile" ui:wsdlLocation="GetFile.wsdl">
          <interface.wsdl interface="http://www.stellent.com/GetFile/#wsdl.interface(GetFileSoap)"/>
          <binding.ws port="http://www.stellent.com/GetFile/#wsdl.endpoint(GetFile/GetFileSoap)"
                      location="GetFile.wsdl" soapVersion="1.1">
            <wsp:PolicyReference URI="oracle/wss_username_token_client_policy"
                                 orawsp:category="security" orawsp:status="enabled"/>
            <property name="weblogic.wsee.wsat.transaction.flowOption"
                      type="xs:string" many="false">WSDLDriven</property>
            <property name="oracle.webservices.auth.username"
                      type="xs:string">weblogic</property>
            <property name="oracle.webservices.auth.password"
                      type="xs:string">welcome1</property>
          </binding.ws>
        </reference>

    Now the new external reference is ready, and we should be able to use it from our BPM process. However, we find a problem here. The WCC GetFile service operation that we will use, GetFileByID, accepts as input a structure similar to this one, where all element tags are optional:

        <get:GetFileByID xmlns:get="http://www.stellent.com/GetFile/">
          <get:dID>?</get:dID>
          <get:rendition>?</get:rendition>
          <get:extraProps>
            <get:property>
              <get:name>?</get:name>
              <get:value>?</get:value>
            </get:property>
          </get:extraProps>
        </get:GetFileByID>

    and we need to fill in just the <get:dID> tag element. Due to some kind of restriction or bug in WCC, the rest of the tag elements must NOT be sent, not even empty (i.e.: <get:rendition></get:rendition> or <get:rendition/>).
    A sample request that performs the query just by the dID must be in the following format:

        <get:GetFileByID xmlns:get="http://www.stellent.com/GetFile/">
          <get:dID>12345</get:dID>
        </get:GetFileByID>

    The issue here is that the simple mapping in BPM does create empty tags, a sample result being as follows:

        <get:GetFileByID xmlns:get="http://www.stellent.com/GetFile/">
          <get:dID>12345</get:dID>
          <get:rendition/>
          <get:extraProps/>
        </get:GetFileByID>

    Although the above structure is perfectly valid, it is not accepted by WCC, so we need to bypass the problem. The workaround we use (many others are available) is to add a Mediator component between the BPM process and the Service that simply copies the input structure from BPM while getting rid of the empty tags. Follow these steps to configure the Mediator:

    - Drag & drop a new Mediator component into the composite. Uncheck the creation of the SOAP bindings, use the Interface Definition from WSDL template and select the existing GetFile.wsdl.
    - Double click the mediator to edit it. Add a static routing rule to the GetFileByID operation, of type Service, and select the References/UCM_GetFile/GetFileByID target service.
    - Create the request and reply XSLT mappers. Make sure you map only the dID element in the request, and do an auto-map for the whole response.

    Finally, we can now add and configure the Service activity in the BPM process. Drag & drop it to the embedded subprocess, select the NormalizedGetFile service and the getFileByID operation, and map both the input and the output.

    Once this embedded subprocess ends, we will have all attachments (name + payload) in the attachmentsUCM variable, which is the main goal of this sample. But in order to test that everything runs fine, we finish the sample by writing each attachment to a file. To that end we include a final embedded subprocess to concurrently iterate through each attachmentsUCM/attachment[] element. On each iteration we use a Service activity that invokes a File Adapter write service. Here we have two important parameters to set:

    - First, the payload itself. The file adapter expects binary data in base64 format (string). We have to map it using XPath (simple mapping doesn't recognize a String as a valid base64-binary target).
    - Second, we must set the target filename using the Service Properties dialog box.

    Again, note how we're making use of the loopCounter index variable to get the right element within the embedded subprocess iteration.

    The final blog entry about attachments will cover how to inject documents into Human Tasks from the BPM process and how to share attachments between different User Tasks. It will come soon. Once I finish all posts on this matter, I will upload the whole sample project to java.net.
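
    To make the accepted-versus-rejected request shapes above concrete, here is a small stand-alone C# sketch (LINQ to XML, illustration only; this is not how the SOA composite builds the message) that prints both payloads using the namespace from GetFile.wsdl:

        using System;
        using System.Xml.Linq;

        class GetFileByIdShapes
        {
            static void Main()
            {
                XNamespace get = "http://www.stellent.com/GetFile/";

                // The only shape WCC accepts: a single dID element.
                var accepted = new XElement(get + "GetFileByID",
                    new XElement(get + "dID", 12345));

                // The shape a naive mapping produces (empty optional tags),
                // which WCC rejects even though it is valid XML.
                var rejected = new XElement(get + "GetFileByID",
                    new XElement(get + "dID", 12345),
                    new XElement(get + "rendition"),
                    new XElement(get + "extraProps"));

                Console.WriteLine(accepted);
                Console.WriteLine(rejected);
            }
        }

    The Mediator's request XSLT effectively turns the second shape into the first by mapping only the dID element.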

    Read the article

  • WCF Error: the client and service bindings may be mismatched?

    - by Rev
    Hi. Let's see the server config and the client config, then help me find the difference between these configs!

    Client config:

        <system.serviceModel>
          <client>
            <endpoint address="http://localhost/admin2/AdminCentralService.svc" binding="wsHttpBinding"
                      bindingConfiguration="WSHttpBinding_Config" contract="TIR.ThreeTier.ICommandInvoker"
                      name="AdminCentralServiceConfig" />
            <endpoint binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_Config"
                      contract="TIR.ThreeTier.ICommandInvoker" name="CommandInvokerConfig" />
          </client>
          <bindings>
            <wsHttpBinding>
              <binding name="WSHttpBinding_Config" closeTimeout="00:10:00" openTimeout="00:10:00"
                       receiveTimeout="00:10:00" sendTimeout="00:10:00" bypassProxyOnLocal="false"
                       transactionFlow="false" hostNameComparisonMode="StrongWildcard"
                       maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647"
                       messageEncoding="Mtom" textEncoding="utf-8" useDefaultWebProxy="true"
                       allowCookies="false">
                <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647"
                              maxArrayLength="2147483647" maxBytesPerRead="2147483647"
                              maxNameTableCharCount="2147483647" />
                <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" />
                <security mode="Message">
                  <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" />
                  <message clientCredentialType="Windows" negotiateServiceCredential="true"
                           algorithmSuite="Default" establishSecurityContext="true" />
                </security>
              </binding>
            </wsHttpBinding>
          </bindings>

    Server config:

        <system.serviceModel>
          <behaviors>
            <serviceBehaviors>
              <behavior name="AdminCentral.Business.Web.Service1Behavior">
                <serviceMetadata httpGetEnabled="true" />
                <serviceDebug includeExceptionDetailInFaults="false" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <bindings>
            <wsHttpBinding>
              <binding name="WSHttpBinding_Config" closeTimeout="00:10:00" openTimeout="00:10:00"
                       receiveTimeout="00:10:00" sendTimeout="00:10:00" bypassProxyOnLocal="false"
                       transactionFlow="false" hostNameComparisonMode="StrongWildcard"
                       maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647"
                       messageEncoding="Mtom" textEncoding="utf-8" useDefaultWebProxy="true"
                       allowCookies="false">
                <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647"
                              maxArrayLength="2147483647" maxBytesPerRead="2147483647"
                              maxNameTableCharCount="2147483647"/>
                <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false"/>
                <security mode="Message">
                  <transport clientCredentialType="Windows" proxyCredentialType="None" realm=""/>
                  <message clientCredentialType="Windows" negotiateServiceCredential="true"
                           algorithmSuite="Default" establishSecurityContext="true"/>
                </security>
              </binding>
            </wsHttpBinding>
          </bindings>
          <services>
            <service behaviorConfiguration="AdminCentral.Business.Web.Service1Behavior"
                     name="AdminCentral.Business.Web.AdminCentralService">
              <endpoint address="" binding="wsHttpBinding"
                        contract="AdminCentral.Business.Web.ICommandInvoker">
                <identity>
                  <dns value="localhost" />
                </identity>
              </endpoint>
              <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
            </service>
          </services>

    Read the article

  • wcf error: The client and service bindings may be mismatched?

    - by Rev
    Hi. Let's see the server config and the client config, then help me find the difference between these configs!

    Client config:

        <system.serviceModel>
          <client>
            <endpoint address="http://localhost/admin2/AdminCentralService.svc" binding="wsHttpBinding"
                      bindingConfiguration="WSHttpBinding_Config" contract="TIR.ThreeTier.ICommandInvoker"
                      name="AdminCentralServiceConfig" />
            <endpoint binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_Config"
                      contract="TIR.ThreeTier.ICommandInvoker" name="CommandInvokerConfig" />
          </client>
          <bindings>
            <wsHttpBinding>
              <binding name="WSHttpBinding_Config" closeTimeout="00:10:00" openTimeout="00:10:00"
                       receiveTimeout="00:10:00" sendTimeout="00:10:00" bypassProxyOnLocal="false"
                       transactionFlow="false" hostNameComparisonMode="StrongWildcard"
                       maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647"
                       messageEncoding="Mtom" textEncoding="utf-8" useDefaultWebProxy="true"
                       allowCookies="false">
                <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647"
                              maxArrayLength="2147483647" maxBytesPerRead="2147483647"
                              maxNameTableCharCount="2147483647" />
                <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" />
                <security mode="Message">
                  <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" />
                  <message clientCredentialType="Windows" negotiateServiceCredential="true"
                           algorithmSuite="Default" establishSecurityContext="true" />
                </security>
              </binding>
            </wsHttpBinding>
          </bindings>

    Server config:

        <system.serviceModel>
          <behaviors>
            <serviceBehaviors>
              <behavior name="AdminCentral.Business.Web.Service1Behavior">
                <serviceMetadata httpGetEnabled="true" />
                <serviceDebug includeExceptionDetailInFaults="false" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <bindings>
            <wsHttpBinding>
              <binding name="WSHttpBinding_Config" closeTimeout="00:10:00" openTimeout="00:10:00"
                       receiveTimeout="00:10:00" sendTimeout="00:10:00" bypassProxyOnLocal="false"
                       transactionFlow="false" hostNameComparisonMode="StrongWildcard"
                       maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647"
                       messageEncoding="Mtom" textEncoding="utf-8" useDefaultWebProxy="true"
                       allowCookies="false">
                <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647"
                              maxArrayLength="2147483647" maxBytesPerRead="2147483647"
                              maxNameTableCharCount="2147483647"/>
                <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false"/>
                <security mode="Message">
                  <transport clientCredentialType="Windows" proxyCredentialType="None" realm=""/>
                  <message clientCredentialType="Windows" negotiateServiceCredential="true"
                           algorithmSuite="Default" establishSecurityContext="true"/>
                </security>
              </binding>
            </wsHttpBinding>
          </bindings>
          <services>
            <service behaviorConfiguration="AdminCentral.Business.Web.Service1Behavior"
                     name="AdminCentral.Business.Web.AdminCentralService">
              <endpoint address="" binding="wsHttpBinding"
                        contract="AdminCentral.Business.Web.ICommandInvoker">
                <identity>
                  <dns value="localhost" />
                </identity>
              </endpoint>
              <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
            </service>
          </services>

    Read the article

  • How to change webservice url endpoint ?

    - by EugeneP
    I generated a web-service client using JBoss utils (JAX-WS compatible) via Eclipse's 'web service client from a wsdl' wizard. So, the only thing I provided was a URL to the web-service. Now, the web service provider tells me to change the "client endpoint" of the web-service. What is it, and how do I change it?

    Read the article

  • JbossESB jmsProvider cannot convert IBMMQ JMS Message JMSTextMessage

    - by Himanshu
    I am trying to integrate IBM MQ v6.0.2 with JBossESB. We have a local Queue available on IBM MQ on one of our QA QUEUEMANAGERs. I am able to listen to the QUEUE using the JMSProvider of JBoss ESB. As soon as a message (of type jms_text) is dropped, the ESB listens to it and picks it up, and before it hits the next action it throws the following error message:

        ERROR [JmsComposer] Unsupported JMS message type: com.ibm.jms.JMSTextMessage

    Here are the steps I followed:

    - jboss-service.xml: defined the Connection Factory and QUEUE
    - added jars (com.ibm.mq.*) to ${jbossesb}/server/${mynode}/lib
    - added the JMS listener configuration to jboss-esb.xml

    Please guide me on what I'm missing here... Do I need to create a custom MessagePlugin?

    jboss-esb.xml looks like this:

        <jms-provider name="WSMQ" connection-factory="MQQueueConnectionFactory">
          <jms-bus busid="queuestartGwChannel">
            <jms-message-filter dest-type="QUEUE" dest-name="wsmq/SerivceOrderQueue"
                                acknowledge-mode="AUTO_ACKNOWLEDGE" />
          </jms-bus>
          <jms-bus busid="queuestartEsbChannel">
            <jms-message-filter dest-type="QUEUE" dest-name="wsmq/SerivceOrderQueue" />
          </jms-bus>
        </jms-provider>

    jboss-service.xml looks like this:

        <mbean code="jmx.service.wsmq.WSMQConnectionFactory"
               name="jmx.service.wsmq:service=MQQueueConnectionFactory">
          <attribute name="JndiName">MQQueueConnectionFactory</attribute>
          <attribute name="JMSStyle">Queue</attribute>
          <attribute name="IsXA">false</attribute>
          <attribute name="QueueManagerName">SQAT0083</attribute>
          <attribute name="HostName">111.111.111.111</attribute>
          <attribute name="Port">1415</attribute>
          <attribute name="Channel">MYCO.SVRCONN</attribute>
          <attribute name="TransportType">CLIENT</attribute>
          <depends>jboss:service=Naming</depends>
        </mbean>

        <mbean code="jmx.service.wsmq.WSMQDestination"
               name="jmx.service.wsmq:service=WSMQRequestQueue">
          <attribute name="JndiName">wsmq/SerivceOrderQueue</attribute>
          <attribute name="JMSStyle">Queue</attribute>
          <attribute name="QueueManagerName">SQAT0083</attribute>
          <attribute name="DestinationName">MYCO.SERVICEORDER.QA01.QL01</attribute>
          <attribute name="TargetClient">MQ</attribute>
          <depends>jboss:service=Naming</depends>
        </mbean>

    I am using jboss-eap-4.3. Really appreciate any help.

    Read the article

  • How to consume a web service (created in C#) using the HTTPS protocol

    - by Navaneeth A Krishnan
    I'm developing a small project: a C# web service. It works, but now I want to run the web service over HTTPS. For that I have installed a web authentication certificate on my system, and my IIS 5.1 server is running under the HTTPS protocol (I configured that under Directory Security). Now I want to invoke the web service using HTTPS. Somebody told me that I need to modify the WSDL file for the web service, but I don't know how to do it. Right now my service URL looks like this: http://localhost:2335/SWebService.asmx and I would like to use https instead of http.
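
    With a generated .asmx proxy it is usually enough to point the proxy's Url property at the HTTPS address rather than editing the WSDL by hand. A minimal C# sketch, assuming a generated proxy class named SWebService and a hypothetical HelloWorld web method; the certificate callback is only for self-signed development certificates:

        using System;
        using System.Net;

        class HttpsClientExample
        {
            static void Main()
            {
                // Development only: trust the local self-signed test certificate
                // so the TLS handshake succeeds. Never ship this to production.
                ServicePointManager.ServerCertificateValidationCallback =
                    (sender, cert, chain, errors) => true;

                var proxy = new SWebService();                         // generated proxy class (assumed name)
                proxy.Url = "https://localhost:2335/SWebService.asmx"; // switch the scheme to https
                Console.WriteLine(proxy.HelloWorld());                 // hypothetical web method
            }
        }

    If the proxy should always use HTTPS, the URL can instead be set once in the config file that Visual Studio generates alongside the proxy.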

    Read the article

  • django sort by manytomany relationship

    - by Marconi
    I have the following model:

        class Service(models.Model):
            ratings = models.ManyToManyField(User)

    Now, if I want to get all the services with their ratings sorted in descending order, I do something like:

        services_list = Service.objects.filter(ratings__gt=0).distinct()
        services_list = list(services_list)
        services_list.sort(key=lambda service: service.ratings.all().count(), reverse=True)

    As you can see, it's a three-step process, and I don't feel right about it. Does anybody know a better way to do this?

    Read the article

  • WCF proxy: Do I need to create a new and different proxy for each binding?

    - by WCFDeveloper
    Hi. Let's say that I have created a WCF proxy from a WCF service (which is configured with wsHttpBinding) using Add Service Reference in Visual Studio 2008. Later I want to use basicHttpBinding, so I'll go and change the WCF service to use basicHttpBinding. But what about the WCF proxy? Can I just change this via Web.config, or do I need to create the WCF proxy again from the WCF service via Add Service Reference? Thanks
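
    For what it's worth, as long as the service contract itself is unchanged, swapping wsHttpBinding for basicHttpBinding usually does not require regenerating the proxy: you can update the client endpoint's binding in Web.config, or construct the proxy with the new binding in code. A minimal C# sketch, assuming a generated client class named CalculatorClient with an Add operation (both names are illustrative):

        using System;
        using System.ServiceModel;

        class RebindExample
        {
            static void Main()
            {
                // Generated clients derive from ClientBase<TChannel>, which exposes a
                // (Binding, EndpointAddress) constructor, so the same proxy type can
                // target a basicHttpBinding endpoint without being regenerated.
                var client = new CalculatorClient(                    // assumed generated proxy
                    new BasicHttpBinding(),
                    new EndpointAddress("http://localhost/service.svc"));
                Console.WriteLine(client.Add(1, 2));                  // hypothetical operation
                client.Close();
            }
        }

    You would only need to run Add Service Reference again if the contract or its data types changed.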

    Read the article

  • Making a WCF call with AJAX

    - by DotnetDude
    1. Is it required to use a RESTful service to be able to make an AJAX call to a WCF service (for example, by using the WebInvoke attribute on operation contracts)?
    2. Once a service is made RESTful by adding a webHttp binding on the service host, can the host have other endpoints as well (wsHttp or netTcp)?
    3. Is it required that aspNetCompatibilityEnabled be set to true for a service that has a webHttp binding (and can this setting coexist with other endpoints)?
    4. I understand I can use both jQuery and ScriptManager for making WCF calls on the client. Why should I use one over the other?
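
    For the first two questions, the usual browser-friendly pattern is a webHttpBinding endpoint whose operations carry WebGet or WebInvoke attributes, and WCF can expose that endpoint alongside wsHttp or netTcp endpoints on the same host. A minimal C# sketch of such a contract, with illustrative names:

        using System.ServiceModel;
        using System.ServiceModel.Web;

        [ServiceContract]
        public interface IQuoteService                     // hypothetical contract
        {
            // POST with a JSON body and a JSON response, suitable for $.ajax
            // or an ASP.NET AJAX ScriptManager-generated proxy.
            [OperationContract]
            [WebInvoke(Method = "POST",
                       RequestFormat = WebMessageFormat.Json,
                       ResponseFormat = WebMessageFormat.Json,
                       BodyStyle = WebMessageBodyStyle.WrappedRequest)]
            string GetQuote(string symbol);
        }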

    Read the article

  • Is it possible in .NET to set the local endpoint (IP address) when using WebClient to consume a web service?

    - by Tom
    I'm wondering if it is possible, using .NET, to call a remote web service and in effect specify which local IP address the call is made from. I'm consuming a service that limits the number of calls I can make based on IP address. The paid service costs in the 20k range once the free limit is used up. I'm very close to having enough calls, but not quite there, using the free tier. My server has three IP addresses, so I could in effect triple the number of calls I could make to the remote service by changing the IP.
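
    One avenue worth noting: HttpWebRequest (which WebClient uses under the covers) lets you choose the outgoing local address through ServicePoint.BindIPEndPointDelegate. A minimal C# sketch, with placeholder URL and IP:

        using System;
        using System.IO;
        using System.Net;

        class LocalBindExample
        {
            static void Main()
            {
                var request = (HttpWebRequest)WebRequest.Create(
                    "http://example.com/service.asmx");                     // placeholder service URL

                // Bind the outgoing socket to one of the server's local
                // addresses; port 0 means "any free local port".
                request.ServicePoint.BindIPEndPointDelegate =
                    (servicePoint, remoteEndPoint, retryCount) =>
                        new IPEndPoint(IPAddress.Parse("203.0.113.10"), 0);  // placeholder local IP

                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    Console.WriteLine(reader.ReadToEnd());
                }
            }
        }

    For WebClient or a generated proxy, the same hook is reachable by deriving from the class and overriding GetWebRequest to set the delegate there.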

    Read the article

  • Windows Communication Foundation service showing plain text / code-behind

    - by Michel
    Hi, I have a WCF service which runs perfectly on my dev machine (VS2010, targeting .NET 3.5), but once deployed, it shows me the code-behind of the service (actually the plain text of the .svc file) and not the normal service page:

        <%@ ServiceHost Language="C#" Debug="true" Service="SilverlightPoc.Web.FinanceData" CodeBehind="FinanceData.svc.cs" %>

    Does anyone have any idea why the .svc file is rendered as plain text and not as a WCF service?
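
    A common cause of this symptom is that the .svc extension is not mapped to ASP.NET/WCF on the deployment server, so IIS serves the file as static text. Re-registering ASP.NET and the WCF script maps often fixes it; a sketch of the commands, assuming a 32-bit server with .NET 3.0/3.5 installed in the default locations:

        "%windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe" -i
        "%windir%\Microsoft.NET\Framework\v3.0\Windows Communication Foundation\ServiceModelReg.exe" -i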

    Read the article

  • Throwing an exception or returning a value

    - by user274737
    I am using another service in a service-oriented architecture. My service uses the other service to save data into the database. Is it good practice for me to rethrow the exception which I get from the save service, or should I catch the exception, encapsulate it in my result, and then just send the result back?
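
    A common middle ground in WCF-style SOA is to catch the downstream fault and translate it into a fault contract that your own service declares, so callers see your contract rather than the save service's internals. A minimal C# sketch, with illustrative type names:

        using System.Runtime.Serialization;
        using System.ServiceModel;

        [DataContract]
        public class SaveFault                          // hypothetical fault contract
        {
            [DataMember]
            public string Reason { get; set; }
        }

        public class MyService                          // hypothetical service implementation
        {
            public void Save(object data)
            {
                try
                {
                    // downstreamClient.Save(data);     // call into the save service
                }
                catch (FaultException ex)
                {
                    // Translate the downstream failure into this service's own
                    // declared fault instead of leaking the other service's details.
                    throw new FaultException<SaveFault>(
                        new SaveFault { Reason = "Save failed" }, ex.Reason);
                }
            }
        }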

    Read the article

  • Oracle Unveils Oracle Social Relationship Management Suite at Oracle OpenWorld

    - by Richard Lefebvre
    New Service Enables Companies to Listen, Engage, Create, Market and Analyze Interactions across Multiple Social Platforms in Real-Time

    During his keynote presentation, Oracle CEO Larry Ellison announced the Oracle Social Relationship Management (SRM) Suite. Oracle Social Relationship Management Suite is an integrated enterprise service that enables companies to listen, engage, create, market, and analyze interactions across multiple social platforms in real time, providing a holistic view of the consumer. The suite is integrated with Oracle's enterprise applications, including Oracle Fusion Marketing, Oracle Fusion Sales Catalog, Oracle ATG Web Commerce, and Oracle Enterprise Resource Planning (ERP), allowing organizations to use social to transform their corporate business processes and systems. Additionally, it is integrated with Oracle Platform Services, including Oracle Java Cloud Service and Oracle Database Cloud Service, enabling marketing teams to integrate social with their custom Web pages, landing pages and marketing tools.

    Unleashing the Power of Social

    Providing a holistic view of consumer interactions, Oracle Social Relationship Management Suite includes:

    • Oracle Social Network (OSN): provides a secure collaboration platform that supports real-time collaboration and networking for users inside and outside the organization.
    • Oracle Social Marketing: enables marketers to centrally create, publish, moderate, manage, measure and report across multiple social campaigns and platforms. It also helps marketers publish social content, engage fans and customize their brand's look and feel.
    • Oracle Social Engagement & Monitoring Cloud Service: enables organizations to analyze social media interactions while also empowering customer service and sales teams to effectively engage with customers and prospects. It gives organizations the tools they need to understand customers and take the appropriate actions by monitoring, listening, learning, and responding to signals and trends across the social web.
    • Oracle Social Sites: provides brands and agencies a powerful and rich editing experience that end users can leverage to dynamically develop and launch social sites.
    • Oracle Data and Insights: a service that caters to a growing enterprise need for external information by providing information, directory and insights about common business entities.

    Supporting quote: "By fundamentally changing the way organizations connect with their different stakeholders, social is changing the rules of business," said Thomas Kurian, executive vice president, Oracle Product Development. "With the Oracle Social Relationship Management Suite we are empowering our customers to embrace this change by integrating the tools required to listen, engage, create, market and analyze social interactions into existing applications and services."

    Read the article

  • Oracle collaborates with leading IT vendors on Cloud Management Standards

    - by Anand Akela
    During the last couple of days, two key specifications for cloud management standards have been announced, and Oracle collaborated with leading technology vendors from the IT industry on both of them. One specification focuses on the "Infrastructure as a Service" (IaaS) cloud service model, while the other, announced today, focuses on the "Platform as a Service" (PaaS) cloud service model. Please see The NIST Definition of Cloud Computing to learn more about IaaS and PaaS.

    Earlier today Oracle, CloudBees, Cloudsoft, Huawei, Rackspace, Red Hat, and Software AG announced the Cloud Application Management for Platforms (CAMP) specification, which will be submitted to the Organization for the Advancement of Structured Information Standards (OASIS) for development of an industry standard, in an effort to help ensure interoperability for deploying and managing applications across cloud environments.

    [Figure: Typical PaaS architecture. Source: CAMP specification]

    The CAMP specification defines the artifacts and APIs that need to be offered by a PaaS cloud to manage the building, running, administration, monitoring and patching of applications in the cloud. Its purpose is to enable interoperability among self-service interfaces to PaaS clouds by defining artifacts and formats that can be used with any conforming cloud, and to enable independent vendors to create tools and services that interact with any conforming cloud using the defined interfaces. Cloud vendors can use these interfaces to develop new PaaS offerings that will interact with independently developed tools and components.

    In a separate cloud standards announcement yesterday, the Distributed Management Task Force (DMTF), the organization bringing the IT industry together to collaborate on systems management standards development, validation, promotion and adoption, released the new Cloud Infrastructure Management Interface (CIMI) specification. Oracle collaborated with various technology vendors and industry organizations on this specification. CIMI standardizes interactions between cloud environments to achieve interoperable cloud infrastructure management between service providers and their consumers and developers, enabling users to manage their cloud infrastructure use easily and without complexity. DMTF developed CIMI as a self-service interface for infrastructure clouds (IaaS focus), allowing users to dynamically provision, configure and administer their cloud usage with a high-level interface that greatly simplifies cloud systems management.

    Mark Carlson, Principal Cloud Strategist at Oracle, provides more details about CAMP and CIMI on his blog.

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • WIF, ADFS 2 and WCF – Part 3: ADFS Setup

    - by Your DisplayName here!
    In part 1 of this series I briefly gave an overview of the ADFS / WS-Trust infrastructure. In part 2 we created a basic WCF service that uses ADFS for authentication. This part will walk you through the steps to register the service in ADFS 2. I could provide screenshots for all the wizard pages here, but since this is really easy, I'll just go through the necessary steps in textual form.

    Step 1 – Select Data Source
    Here you can decide if you want to import a federation metadata file that describes the service you want to register. In that case all necessary information is inside the metadata document and you are done. FedUtil (a tool that ships with WIF) can generate such metadata for the most simple cases. Another tool to create metadata can be found here. We choose 'Manual' here.

    Step 2 – Specify Display Name
    I guess that's self-explanatory.

    Step 3 – Choose Profile
    Choose 'ADFS 2 Profile' here.

    Step 4 – Configure Certificate
    Remember that we specified a certificate (or rather a private key) to be used for decrypting incoming tokens in the previous post. Here you specify the corresponding public key that ADFS 2 should use for encrypting the token.

    Step 5 – Configure URL
    This page is used to configure WS-Federation and SAML 2.0p support. Since we are using WS-Trust you can leave both boxes unchecked.

    Step 6 – Configure Identifier
    Here you specify the identifier (aka the realm, aka the appliesTo) that will be used to request tokens for the service. This value will be used in the token request and is used by ADFS 2 to make a connection to the relying party configuration and claim rules.

    Step 7 – Configure Issuance Authorization Rules
    Here you can configure who is allowed to request tokens for the service. I won't go into details here about how these rules exactly work – that's for a separate blog post. For now simply use the "Permit all users" option.

    OK – that's it. The service is now registered at ADFS 2. In the next part we will finally look at the service client. Stay tuned…

    Read the article

  • One Week To Go: OTN Architect Day: Cloud Computing

    - by Bob Rhubart
    One week remains until OTN Architect Day: Cloud Computing kicks off at the spectacular Oracle HQ campus in Redwood Shores, CA. The event is free, and there is still time to register.

    When: Tuesday, July 9, 2013, 8:30am - 12:30pm
    Where: Oracle Conference Center, 350 Oracle Pkwy, Redwood City, CA 94065

    Register now. It's free! Here's the latest update to the event agenda:

    8:30am - 9:00am: Registration and Continental Breakfast

    9:00am - 9:45am: Keynote - 21st Century IT | Dr. James Baty, VP, Global Enterprise Architecture Program, Oracle
    Imagine a time long, long ago. A time when servers were certified and dedicated to specific applications, when anything posted on an enterprise web site was from restricted, approved channels, and when we tried to limit the growth of 'dirty' data and storage. Today, applications are services running in the multi-tenant hybrid cloud. Companies beg their customers to tweet them, friend them, and publicly rate their products. And constantly analyzing a deluge of Internet, social and sensor data is the key to creating the next super-successful product, or capturing an evil terrorist. The old IT architecture was planned, dedicated, stable, controlled, with separate and well-defined roles. The new architecture is shared, dynamic, continuous, XaaS, DevOps. This keynote session describes the challenges and opportunities that the new business / IT paradigms present to the IT architecture and architects.

    9:45am - 10:30am: Technical Session - Oracle Cloud: A Case Study in Building a Cloud | Anbu Krishnaswami, Enterprise Architect, Oracle
    Building a Cloud can be challenging thanks to the complex requirements unique to Cloud computing and the massive scale typically associated with Cloud. Cloud providers can take an Infrastructure as a Service (IaaS) approach and build a cloud on virtualized commodity hardware, or they can take the Platform as a Service (PaaS) path, a service-oriented approach based on pre-configured, integrated, engineered systems. This presentation uses the Oracle Cloud itself as a case study in the use of engineered systems, demonstrating how the technical design of engineered systems is leveraged for building PaaS and SaaS Cloud services and a Cloud management infrastructure. The presentation will also explore the principles, patterns, best practices, and architecture views provided in Oracle's Cloud reference architecture.

    10:30am - 10:45am: Break

    10:45am - 11:30am: Technical Session - Database as a Service | Markus Michalewicz, Senior Principal Product Manager, Oracle Real Application Clusters (RAC)
    New applications are now commonly built in a Cloud model, where the database is consumed as a service, and many established business processes are beginning to migrate to database as a service (DBaaS). This adoption of DBaaS is made possible by the availability of new capabilities in the database that enable resource pooling, dynamic resource management, model-based provisioning, metered use, and effective quality-of-service controls. This session will examine the catalog of database services at a large commercial bank to understand how these capabilities are enabling DBaaS for a wide range of needs within the enterprise.

    11:30am - 12:00pm: Panel Q&A
    Dr. James Baty, Anbu Krishnaswami, and Markus Michalewicz respond to audience questions.

    Registration is free, but seating is limited, so register now.

    Read the article

  • Is this Ubuntu One DBus signal connection code correct?

    - by Chris Wilson
    This is my first time using DBus, so I'm not entirely sure if I'm going about this the right way. I'm attempting to connect to the Ubuntu One DBus service and obtain login credentials for my app; however, the slots I've connected to the DBus return signals detailed here never seem to fire, despite a positive result being returned during the connection. Before I start looking for errors in the details relating to this specific service, could someone please tell me if this code would even work in the first place, or if I've done something wrong here?

        int main()
        {
            UbuntuOneDBus *u1Dbus = new UbuntuOneDBus;

            if( u1Dbus->init() ){
                qDebug() << "Message queued";
            }
        }

        UbuntuOneDBus::UbuntuOneDBus()
        {
            service = "com.ubuntuone.Credentials";
            path = "/credentials";
            interface = "com.ubuntuone.CredentialsManagement";
            method = "register";
            signature = "a{ss} (Dict of {String, String})";

            connectReturnSignals();
        }

        bool UbuntuOneDBus::init()
        {
            QDBusMessage message = QDBusMessage::createMethodCall( service, path, interface, method );
            bool queued = QDBusConnection::sessionBus().send( message );
            return queued;
        }

        void UbuntuOneDBus::connectReturnSignals()
        {
            bool connectionSuccessful = false;

            connectionSuccessful = QDBusConnection::sessionBus().connect( service, path, interface,
                "CredentialsFound", "a{ss} (Dict of {String, String})", this, SLOT( credentialsFound() ) );
            if( ! connectionSuccessful )
                qDebug() << "Connection to DBus::CredentialsFound signal failed";

            connectionSuccessful = QDBusConnection::systemBus().connect( service, path, interface,
                "CredentialsNotFound", "(nothing)", this, SLOT( credentialsNotFound() ) );
            if( ! connectionSuccessful )
                qDebug() << "Connection to DBus::CredentialsNotFound signal failed";

            connectionSuccessful = QDBusConnection::systemBus().connect( service, path, interface,
                "CredentialsError", "a{ss} (Dict of {String, String})", this, SLOT( credentialsError() ) );
            if( ! connectionSuccessful )
                qDebug() << "Connection to DBus::CredentialsError signal failed";
        }

        void UbuntuOneDBus::credentialsFound()
        {
            std::cout << "Credentials found" << std::endl;
        }

        void UbuntuOneDBus::credentialsNotFound()
        {
            std::cout << "Credentials not found" << std::endl;
        }

        void UbuntuOneDBus::credentialsError()
        {
            std::cout << "Credentials error" << std::endl;
        }

    Read the article

  • Accessing SQL Server data from iOS apps

    - by RobertChipperfield
    Almost all mobile apps need access to external data to be valuable. With a huge amount of existing business data residing in Microsoft SQL Server databases, and an ever-increasing drive to make more and more available to mobile users, how do you marry the rather separate worlds of Microsoft's SQL Server and Apple's iOS devices?

    The classic answer: write a web service layer
    Look at any of the questions on this topic asked in Internet discussion forums, and you'll inevitably see the answer, "just write a web service and use that!". But what does this process gain? For a well-designed database with a solid security model, and business logic in the database, writing a custom web service on top of this just to access some of the data from a different platform seems inefficient and unnecessary. Desktop applications interact with the SQL Server directly - why should mobile apps be any different?

    The better answer: the iSql SDK
    Working along the lines of "if you do something more than once, make it shared," we set about coming up with a better solution for the general case. And so the iSql SDK was born: sitting between SQL Server and your iOS apps, it provides the simple API you're used to if you've been developing desktop apps using the Microsoft SQL Native Client. It turns out a web service remained a sensible idea: HTTP is much more suited to the Big Bad Internet than SQL Server's native TDS protocol, removing the need for complex configuration, firewall configuration, and the like. However, rather than writing a web service for every app that needs data access, we made the web service generic, serving only as a proxy between the SQL Server and a client library integrated into the iPhone or iPad app. This client library handles all the network communication, and provides a clean API.

    OSQL in 25 lines of code
    As an example of how to use the API, I put together a very simple app that allowed the user to enter one or more SQL statements, and displayed the results in a rather primitively formatted text field. The total amount of Objective-C code responsible for doing the work? About 25 lines. You can see this in action in the demo video.

    Beta out now - your chance to give us your suggestions!
    We've released the iSql SDK as a beta on the MobileFoo website: you're welcome to download a copy, have a play in your own apps, and let us know what we've missed using the Feedback button on the site. Software development should be fun and rewarding: no-one wants to spend their time writing boiler-plate code over and over again, so stop writing the same web service code, and start doing exciting things in the new world of mobile data!

    Read the article

  • Using chunked encoding in a POST request to an asmx web service on IIS 6 generates a 404

    - by user175869
    Hi, I'm using a CXF client to communicate with a .NET web service running on IIS 6. This request (anonymised):

        POST /EngineWebService_v1/EngineWebService_v1.asmx HTTP/1.1
        Content-Type: text/xml; charset=UTF-8
        SOAPAction: "http://.../Report"
        Accept: */*
        User-Agent: Apache CXF 2.2.5
        Cache-Control: no-cache
        Pragma: no-cache
        Host: uat9.gtios.net
        Connection: keep-alive
        Transfer-Encoding: chunked

    followed by 7 chunks of 4089 bytes and one of 369 bytes, generates the following output after the first chunk has been sent:

        HTTP/1.1 404 Not Found
        Content-Length: 103
        Date: Wed, 10 Feb 2010 13:00:08 GMT
        Connection: Keep-Alive
        Content-Type: text/html

    Anyone know how to get IIS to accept chunked input for a POST? Thanks
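
    Since IIS 6 frequently rejects chunked request bodies, one workaround is to stop the CXF client from chunking so it sends a Content-Length header instead. A sketch of the conduit configuration for the client's Spring config, assuming the standard http-conf namespace (narrow the conduit name to your specific endpoint if the wildcard is too broad):

        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:http-conf="http://cxf.apache.org/transports/http/configuration">
          <!-- Disable HTTP chunking so CXF buffers the body and sends Content-Length -->
          <http-conf:conduit name="*.http-conduit">
            <http-conf:client AllowChunking="false"/>
          </http-conf:conduit>
        </beans>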

    Read the article

< Previous Page | 182 183 184 185 186 187 188 189 190 191 192 193  | Next Page >