Search Results

Search found 13906 results on 557 pages for 'performancepoint services'.


  • Siemens AG, Sector Healthcare, Increases Transparency and Improves Customer Loyalty with Web Portal Solution

    - by Kellsey Ruppel
    CUSTOMER AND PARTNER INFORMATION
    Customer Name – Siemens AG, Sector Healthcare
    Customer Revenue – 73.515 billion euros (2011, Siemens AG total)
    Customer Quote – “The realization of our complex requirements within a very short amount of time was enabled through the competent implementation partner Sapient, who fully used the very broad scope of standard functionality provided in Oracle WebCenter Portal, and the management of customer services, who continuously supported the project setup.” – Joerg Modlmayr, Project Manager, Healthcare Customer Service Portal, Siemens AG

    The Siemens Healthcare Sector is one of the world's largest suppliers to the healthcare industry and a trendsetter in medical imaging, laboratory diagnostics, medical information technology, and hearing aids. Siemens offers its customers products and solutions for the entire range of patient care from a single source – from prevention and early detection to diagnosis, and on to treatment and aftercare. By optimizing clinical workflows for the most common diseases, Siemens also makes healthcare faster, better, and more cost-effective.

    To ensure greater transparency, increased efficiency, higher user acceptance, and additional services, Siemens AG, Sector Healthcare, replaced several existing legacy portal solutions that could not meet the company’s future needs with Oracle WebCenter Portal. The remaining portal solutions that cannot meet future demands will be successively replaced by the new central service portal, which will also allow for the efficient and intuitive implementation of new service concepts. With Oracle, doctors and hospitals using Siemens medical solutions now have access to a central information portal that provides important information and services at the push of a button.

    Customer Name – Siemens AG, Sector Healthcare
    Customer URL – www.siemens.com
    Customer Headquarters – Erlangen, Germany
    Industry – Industrial Manufacturing
    Employees – 360,000

    Challenges –
    Replace disparate medical service portals to meet future demands and eliminate the unnecessarily high level of administrative work caused by heterogeneous installations
    Ensure portals meet current user demands to improve user-acceptance rates and increase the number of total users
    Enable changes and expansion through standard functionality to eliminate reliance on IT and reduce administrative efforts and the associated high costs
    Ensure efficient and intuitive implementation of new service concepts for all devices and systems
    Enable hospitals and clinics to transparently monitor and measure the services rendered for their various medical devices and systems
    Increase electronic interaction and expand services to achieve a higher level of customer loyalty

    Solution –
    Deployed Oracle WebCenter Portal to ensure greater transparency and, as a result, a higher level of customer loyalty
    Provided a centralized platform for doctors and hospitals using Siemens’ medical technology solutions that delivers important information and services at the push of a button
    Significantly reduced the administrative workload by centralizing the solution in the new customer service portal
    Secured positive feedback from customers involved in the pilot program developed by design experts from Oracle partner Sapient; the interfaces were created with customer needs in mind, and the first survey, taken shortly after implementation, came back with 2.4 points on a scale of 0-3 in the category “customer service portal intuitiveness level”
    Met all requirements, including alignment with the Siemens Style Guide, without extensive programming
    Implemented additional services via the portal, such as benchmarking options, to ensure optimal use of the Customer Device Park
    Provided the option to document all services rendered in conjunction with the medical technology systems, so that the value of those services is transparent for the decision makers in the hospitals
    Saved and stored all machine data from approximately 100,000 remote systems in the central service and information platform
    Provided the option to register errors online and follow the call status in real time on the portal
    Made available at the push of a button all information on the medical technology devices used in hospitals or clinics – from security checks and maintenance activities to current device statuses
    Provided Service Performance Reports in PDF format that summarize information from periods ranging from previous weeks up to one year, meeting medical product law requirements

    Why Oracle – Siemens AG favored Oracle for many reasons; however, the company ultimately decided to go with Oracle due to the enormous range of functionality the solutions offered for the healthcare sector. “We are not programmers; we are service providers in the medical technology segment and focus on the contents of the portal. All the functionality necessary for internet-based customer interaction is already standard in Oracle WebCenter Portal, which is a huge plus for us. Having Oracle as our technology partner ensures that the product will continually evolve, providing a strong technology platform for our customer service portal well into the future,” said Joerg Modlmayr, project manager, Healthcare Customer Service Portal, Siemens AG.

    Partner Involvement – Siemens AG selected Oracle Partner Sapient because the company offered a service portfolio that perfectly met Siemens’ requirements and had a wealth of experience implementing Oracle WebCenter Portal. Additionally, Sapient had designers with a very high level of expertise in usability – an aspect that Siemens considered to be of vast importance for the project. “The Sapient team completely met all our expectations. Our tightly timed project was completed on schedule, and the positive feedback from our users proves that we set the right measures in terms of usability – all thanks to the folks at Sapient,” Modlmayr said.

    Partner Name – Sapient GmbH Deutschland
    Partner URL – www.sapient.com

    Read the article

  • Disabling CPU management

    - by Tiffany Walker
    If I add processor.max_cstate=0 to the kernel command line for boot up, does that disable all CPU power management and throttling? I also found: http://www.experts-exchange.com/OS/Linux/Administration/A_3492-Avoiding-CPU-speed-scaling-in-modern-Linux-distributions-Running-CPU-at-full-speed-Tips.html
    The link talks about changing the CPU governor from 'ondemand' to 'performance' for all CPUs/cores and disabling kondemand in the kernel. The server is used for web hosting.

    UPDATES:
    2.6.32-379.1.1.lve1.1.7.6.el6.x86_64 #1 SMP Sat Aug 4 09:56:37 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux

    # dmidecode 2.11
    SMBIOS 2.6 present.
    74 structures occupying 2878 bytes.
    Table at 0x0009F000.
    Handle 0x0000, DMI type 0, 24 bytes
    BIOS Information
      Vendor: American Megatrends Inc.
      Version: 1.0c
      Release Date: 05/27/2010
      Address: 0xF0000
      Runtime Size: 64 kB
      ROM Size: 4096 kB
      Characteristics: ISA is supported; PCI is supported; PNP is supported; BIOS is upgradeable; BIOS shadowing is allowed; ESCD support is available; Boot from CD is supported; Selectable boot is supported; BIOS ROM is socketed; EDD is supported; 5.25"/1.2 MB floppy services are supported (int 13h); 3.5"/720 kB floppy services are supported (int 13h); 3.5"/2.88 MB floppy services are supported (int 13h); Print screen service is supported (int 5h); 8042 keyboard services are supported (int 9h); Serial services are supported (int 14h); Printer services are supported (int 17h); CGA/mono video services are supported (int 10h); ACPI is supported; USB legacy is supported; LS-120 boot is supported; ATAPI Zip drive boot is supported; BIOS boot specification is supported; Targeted content distribution is supported
      BIOS Revision: 8.16
    Handle 0x0001, DMI type 1, 27 bytes
    System Information
      Manufacturer: Supermicro
      Product Name: X8SIE
      Version: 0123456789
      Serial Number: 0123456789
      UUID: 49434D53-0200-9033-2500-33902500D52C
      Wake-up Type: Power Switch
      SKU Number: To Be Filled By O.E.M.
      Family: To Be Filled By O.E.M.
    Handle 0x0002, DMI type 2, 15 bytes
    Base Board Information
      Manufacturer: Supermicro
      Product Name: X8SIE
      Version: 0123456789
      Serial Number: VM11S61561
      Asset Tag: To Be Filled By O.E.M.
      Features: Board is a hosting board; Board is replaceable
      Location In Chassis: To Be Filled By O.E.M.
      Chassis Handle: 0x0003
      Type: Motherboard
      Contained Object Handles: 0
    Handle 0x0003, DMI type 3, 21 bytes
    Chassis Information
      Manufacturer: Supermicro
      Type: Sealed-case PC
      Lock: Not Present
      Version: 0123456789
      Serial Number: 0123456789
      Asset Tag: To Be Filled By O.E.M.
      Boot-up State: Safe
      Power Supply State: Safe
      Thermal State: Safe
      Security Status: None
      OEM Information: 0x00000000
      Height: Unspecified
      Number Of Power Cords: 1
      Contained Elements: 0

    Read the article

  • Windows Server (SBS) 2008 - Telephony service won't start (missing permissions)

    - by Uri
    I am running an SBS 2008 server. It's set up as the domain controller for the network. After a reboot, the Telephony service (and all services that depend on it) refuses to start under the Network Service account. The error given is:
    Error 1297: A privilege that the service requires to function properly does not exist in the service account configuration. You may use the Services Microsoft Management Console (MMC) snap-in (services.msc) and the Local Security Settings MMC snap-in (secpol.msc) to view the service configuration and the account configuration.
    This has made all the network services inaccessible, e.g. Terminal Services, VPN (RRAS), and the SQL Server instances. The SSH daemon I have running on the box will accept connections only from localhost, but won't respond on the network. After searching around, the only advice I could find was to grant the Network Service account these permissions:
    Adjust memory quotas for a process
    Replace a process level token
    I set those permissions on both the Default Domain Policy and the Default Domain Controller Policy, but it seemingly had no effect. Most of the services will start if I change them to run under the Local System account, but that didn't make them accessible on the network. I even tried removing the Routing and Remote Access Services feature, rebooting, and reinstalling it, but the issue remains. Any ideas?

    Read the article

  • Setting up HTTPS across multiple servers

    - by JohnyD
    I'm looking to offer our online services over HTTPS, and I'm having a couple of problems understanding how to accomplish this. To access our services you must pass through our ISA firewall to a Windows 2000 server running IIS 6. About half our services are located there, and the other half take you to a Windows 2003 server, also running IIS 6. So, in order to achieve this, must each server have the proper certificate installed - ISA, IIS6_1 and IIS6_2? Is there a separate configuration that must be made to our ISA firewall? The other problem is with the CA and knowing how many certificates I need. It's important to note that the domain name for our services on IIS6_1 is www.domainname.com, but the domain name on IIS6_2 is services.domainname.com. I believe this will require me to purchase more than one certificate. It looks as though we will be going with Thawte's SSL123, as it's a good name and it's fast to get. Will I need to purchase two certificates (one for www that will be installed on our ISA firewall as well as IIS6_1, and one for services.domainname.com on IIS6_2)? Or will I need to purchase three, the extra one being used on our firewall server? Another side question is about SANs (subject alternative names). Is this basically adding sub-domains to your cert? So could I purchase one cert with one SAN to cover both www. and services.? Thanks a lot for your help! Please let me know if I can provide any further information.

    Read the article

  • JSON and jQuery.ajax

    - by Andreas
    Hello, I'm trying to use the jQuery UI autocomplete to communicate with a web service whose response format is JSON, but I am unable to do so. My web service is not even executed; the path should be correct, since the error message does not complain about it. What strikes me is the headers: the response is SOAP but the request is JSON - is it supposed to be like this?

    Response Headers
    Content-Type: application/soap+xml; charset=utf-8
    Request Headers
    Accept: application/json, text/javascript, */*
    Content-Type: application/json; charset=utf-8

    The error message I get is as follows (sorry for the huge message, but it might be of importance):

    soap:Receiver System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.Xml.XmlException: Data at the root level is invalid. Line 1, position 1.
    at System.Xml.XmlTextReaderImpl.Throw(Exception e)
    at System.Xml.XmlTextReaderImpl.Throw(String res, String arg)
    at System.Xml.XmlTextReaderImpl.ParseRootLevelWhitespace()
    at System.Xml.XmlTextReaderImpl.ParseDocumentContent()
    at System.Xml.XmlTextReaderImpl.Read()
    at System.Xml.XmlTextReader.Read()
    at System.Web.Services.Protocols.SoapServerProtocol.SoapEnvelopeReader.Read()
    at System.Xml.XmlReader.MoveToContent()
    at System.Web.Services.Protocols.SoapServerProtocol.SoapEnvelopeReader.MoveToContent()
    at System.Web.Services.Protocols.SoapServerProtocolHelper.GetRequestElement()
    at System.Web.Services.Protocols.Soap12ServerProtocolHelper.RouteRequest()
    at System.Web.Services.Protocols.SoapServerProtocol.RouteRequest(SoapServerMessage message)
    at System.Web.Services.Protocols.SoapServerProtocol.Initialize()
    at System.Web.Services.Protocols.ServerProtocolFactory.Create(Type type, HttpContext context, HttpRequest request, HttpResponse response, Boolean& abortProcessing)
    --- End of inner exception stack trace ---

    This is my code:

    $('selector').autocomplete({
        source: function(request, response) {
            $.ajax({
                url: "../WebService/Member.asmx",
                dataType: "json",
                contentType: "application/json; charset=utf-8",
                type: "POST",
                data: JSON.stringify({prefixText: request.term}),
                success: function(data) {
                    alert('success');
                },
                error: function(XMLHttpRequest, textStatus, errorThrown){
                    alert('error');
                }
            })
        },
        minLength: 1,
        select: function(event, ui) {
        }
    });

    And my web service looks like this:

    [WebService(Namespace = "http://tempuri.org/")]
    [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
    [System.ComponentModel.ToolboxItem(false)]
    [ScriptService]
    public class Member : WebService
    {
        [WebMethod(EnableSession = true)]
        [ScriptMethod(ResponseFormat = ResponseFormat.Json)]
        public string[] GetMembers(string prefixText)
        {
            // code code code
        }
    }

    What am I doing wrong? Thanks in advance :)

    Read the article

  • singleton pattern in Windows Activation Service

    - by Joshua
    Hello. I have a few WCF services that are currently being self-hosted in a very basic NT service. I want to expand my application to add provisioning of WCF services and updates, as well as isolation (I want each WCF service to be in its own AppDomain). These WCF services contain logic that needs to run on a regular basis - pinging the database and getting information from external devices - so that when a request comes in, the data is readily available. I'm thinking about trying out Windows Activation Service, because I really like the provisioning and isolation that come with a managed services infrastructure. If I didn't use WAS I would essentially have to write the same code myself. From what I understand, though, WAS does not really support the model of having a service that is running before someone actually calls a method on the service. The article I read (MSDN Article Link) states: "That means in essence that out-of-the-box WAS hosting is not something that is really suited for sessionful or singleton services. It is more suitable for stateless per-call services." It does say "out of the box", so I'm wondering if anyone has used WAS to host a WCF service that really behaves more like an NT service (starting and stopping independently of having a method called upon it). Any other ideas would be great too. I was planning on writing this infrastructure myself - hosting WCF services in a custom ServiceHost, putting their execution in a separate AppDomain, and allowing for provisioning of these services after the initial installation, along with updates. However, I would MUCH MUCH MUCH rather not own that code if I don't have to. Thanks, Joshua
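    For reference, a minimal sketch of the self-hosted pattern described above: an NT service that starts a singleton WCF ServiceHost whose background timer keeps data warm before any client call arrives. All of the type names (IDeviceMonitor, DeviceMonitor, MonitorWindowsService) are hypothetical, and the endpoints are assumed to come from app.config.

    using System;
    using System.ServiceModel;
    using System.ServiceProcess;
    using System.Threading;

    [ServiceContract]
    public interface IDeviceMonitor
    {
        [OperationContract]
        string GetLatestStatus();
    }

    // InstanceContextMode.Single keeps one instance alive for all calls, so state
    // gathered by the background timer is shared with every incoming request.
    [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
    public class DeviceMonitor : IDeviceMonitor
    {
        private string _latestStatus = "not polled yet";
        private Timer _timer;

        public void StartPolling()
        {
            // Poll the database / external devices every 30 seconds.
            _timer = new Timer(_ => _latestStatus = DateTime.UtcNow.ToString("o"),
                               null, TimeSpan.Zero, TimeSpan.FromSeconds(30));
        }

        public string GetLatestStatus()
        {
            return _latestStatus;
        }
    }

    public class MonitorWindowsService : ServiceBase
    {
        private ServiceHost _host;

        protected override void OnStart(string[] args)
        {
            var singleton = new DeviceMonitor();
            singleton.StartPolling();              // runs before any client ever calls in
            _host = new ServiceHost(singleton);    // endpoints defined in app.config
            _host.Open();
        }

        protected override void OnStop()
        {
            if (_host != null) _host.Close();
        }
    }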

    Read the article

  • Call WCF Service Through Javascript, AJAX, or JQuery

    - by obautista
    I created a number of standard WCF services (the service contract and the host (.svc) are in separate assemblies). I fired up a web site in IIS to host the services (i.e., the address is http://services:1000/wcfservices.svc). Then in my web site project I added the reference, and I am able to call the services normally. Now I need to call some of the services client side. I am not sure whether I should be looking at articles on calling WCF services through AJAX, jQuery, or JSON-enabled WCF services. Can anyone provide any thoughts or experience with configuring this? Some of the changes I made were adding the following to the operation contract:

    [OperationContract]
    [WebInvoke(Method = "POST", UriTemplate = "SetFoo")]
    void SetFoo(string Id);

    Then this above the implementation of the interface:

    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]

    Then in the service web.config I have this:

    <serviceHostingEnvironment aspNetCompatibilityEnabled="true">
      <baseAddressPrefixFilters>
        <add prefix="http://services:1000/wcfservices.svc/"/>
      </baseAddressPrefixFilters>
    </serviceHostingEnvironment>
    <serviceHostingEnvironment multipleSiteBindingsEnabled="false" />

    Then on the client side I attempted this:

    <asp:ScriptManagerProxy ID="ScriptManagerProxy1" runat="server">
      <CompositeScript>
        <Scripts>
          <asp:ScriptReference Path="http://Flixsit:1000/FlixsitWebServices.svc" />
        </Scripts>
      </CompositeScript>
    </asp:ScriptManagerProxy>

    I am attempting to call the service like this in JavaScript:

    wcfservices.SetFoo(string Id);

    Nothing is working. If a better solution is to call JSON-enabled services, jQuery, etc., I am willing to make any changes. Thanks for any suggestions/tips provided.
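    For comparison, a minimal sketch (not the poster's actual service) of a JSON-friendly WCF contract. On top of this, the endpoint would need webHttpBinding with an enableWebScript (or webHttp) endpoint behavior in web.config before a ScriptManager or jQuery client can reach it - that binding/behavior detail is an assumption about the intended setup, not something shown in the question.

    using System.ServiceModel;
    using System.ServiceModel.Activation;
    using System.ServiceModel.Web;

    [ServiceContract]
    public interface IFooService
    {
        [OperationContract]
        [WebInvoke(Method = "POST",
                   UriTemplate = "SetFoo",
                   RequestFormat = WebMessageFormat.Json,
                   ResponseFormat = WebMessageFormat.Json,
                   BodyStyle = WebMessageBodyStyle.WrappedRequest)]
        void SetFoo(string id);
    }

    [AspNetCompatibilityRequirements(
        RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
    public class FooService : IFooService
    {
        public void SetFoo(string id)
        {
            // hand the id to the domain/service layer here
        }
    }

    With a plain webHttp endpoint, a jQuery $.ajax call would then POST JSON to http://services:1000/wcfservices.svc/SetFoo (an assumption based on the UriTemplate above).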

    Read the article

  • SQL 2008 Querying Soap XML

    - by Vince
    I have been trying to process this SOAP XML return using SQL but all I get is NULL or nothing at all. I have tried different ways and pasted them all below.

    Declare @xmlMsg xml;
    Set @xmlMsg = '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <soap:Body>
        <SendWarrantyEmailResponse xmlns="http://Web.Services.Warranty/">
          <SendWarrantyEmailResult xmlns="http://Web.Services.SendWarrantyResult">
            <WarrantyNumber>120405000000015</WarrantyNumber>
            <Result>Cannot Send Email to anonymous account!</Result>
            <HasError>true</HasError>
            <MsgUtcTime>2012-06-07T01:11:36.8665126Z</MsgUtcTime>
          </SendWarrantyEmailResult>
        </SendWarrantyEmailResponse>
      </soap:Body>
    </soap:Envelope>';

    declare @table table (data xml);
    insert into @table values (@xmlMsg);
    select data from @table;

    WITH xmlnamespaces ('http://schemas.xmlsoap.org/soap/envelope/' as [soap],
                        'http://Web.Services.Warranty' as SendWarrantyEmailResponse,
                        'http://Web.Services.SendWarrantyResult' as SendWarrantyEmailResult)
    SELECT Data.value('(/soap:Envelope[1]/soap:Body[1]/SendWarrantyEmailResponse[1]/SendWarrantyEmailResult[1]/WarrantyNumber[1])[1]','VARCHAR(500)') AS WarrantyNumber
    FROM @Table;

    ;with xmlnamespaces('http://schemas.xmlsoap.org/soap/envelope/' as [soap],
                        'http://Web.Services.Warranty' as SendWarrantyEmailResponse,
                        'http://Web.Services.SendWarrantyResult' as SendWarrantyEmailResult)
    --select @xmlMsg.value('(/soap:Envelope/soap:Body/SendWarrantyEmailResponse/SendWarrantyEmailResult/WarrantyNumber)[0]', 'nvarchar(max)')
    --select T.N.value('.', 'nvarchar(max)') from @xmlMsg.nodes('/soap:Envelope/soap:Body/SendWarrantyEmailResponse/SendWarrantyEmailResult') as T(N)
    select @xmlMsg.value('(/soap:Envelope/soap:Body/SendWarrantyEmailResponse/SendWarrantyEmailResult/HasError)[1]','bit') as test

    ;with xmlnamespaces('http://schemas.xmlsoap.org/soap/envelope/' as [soap],
                        'http://Web.Services.Warranty' as [SendWarrantyEmailResponse],
                        'http://Web.Services.SendWarrantyResult' as [SendWarrantyEmailResult])
    SELECT SendWarrantyEmailResult.value('WarrantyNumber[1]','varchar(max)') AS WarrantyNumber,
           SendWarrantyEmailResult.value('Result[1]','varchar(max)') AS Result,
           SendWarrantyEmailResult.value('HasError[1]','bit') AS HasError,
           SendWarrantyEmailResult.value('MsgUtcTime[1]','datetime') AS MsgUtcTime
    FROM @xmlMsg.nodes('/soap:Envelope/soap:Body/SendWarrantyEmailResponse/SendWarrantyEmailResult') SendWarrantyEmailResults(SendWarrantyEmailResult)

    Read the article

  • Constructor on type: "Namespace.type" not found.

    - by Nick
    Hello, I am using Castle Windsor as an IoC container, and I am trying to resolve a service type in the constructor of an HTTP handler. I keep receiving this error: "Constructor on type: 'Namespace.type' not found." My configuration has the following entries for the service type IDocumentDirectory:

    <component id="restricted.content.directory"
               service="org.healthwise.foundations.services.content.IDocumentDirectory, org.healthwise.foundations.services"
               type="org.healthwise.foundations.services.content.RestrictedLocalizationDocumentDirectory, org.healthwise.foundations.services">
      <parameters>
        <contentDirectory>${content.directory}</contentDirectory>
        <localizations>
          <array>
            <item>en-us</item>
            <item>es-us</item>
          </array>
        </localizations>
      </parameters>
    </component>

    <component id="content.directory"
               service="org.healthwise.foundations.services.content.IDocumentDirectory, org.healthwise.foundations.services"
               type="org.healthwise.foundations.services.web.client.WebServiceDocumentDirectory, org.healthwise.foundations.services.web.client">
      <parameters>
        <webServiceURL>#{contentDirectoryWebsiteUrl}</webServiceURL>
      </parameters>
    </component>

    In my new handler the constructor looks like this:

    public HeartBeatHttpHandler(IDocumentDirectory contentDirectory)
    {
        _contentDirectory = contentDirectory;
    }

    I have never received this error using Castle Windsor before. Can someone explain? Thanks!
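    One common cause of that Activator-style "constructor not found" message is that the handler is being created by ASP.NET itself, which only knows how to call a parameterless constructor, rather than being resolved through Windsor. That is an assumption about this particular setup, but the usual workaround looks roughly like the sketch below, where ContainerAccessor is a hypothetical static holder for the IWindsorContainer created at application startup.

    using System.Web;
    using Castle.Windsor;
    using org.healthwise.foundations.services.content;

    // Hypothetical accessor for the container built in Global.asax.
    public static class ContainerAccessor
    {
        public static IWindsorContainer Container { get; set; }
    }

    public class HeartBeatHttpHandler : IHttpHandler
    {
        private readonly IDocumentDirectory _contentDirectory;

        // ASP.NET instantiates the handler via this parameterless constructor,
        // which pulls the dependency from the container and chains onward.
        public HeartBeatHttpHandler()
            : this(ContainerAccessor.Container.Resolve<IDocumentDirectory>())
        {
        }

        public HeartBeatHttpHandler(IDocumentDirectory contentDirectory)
        {
            _contentDirectory = contentDirectory;
        }

        public bool IsReusable
        {
            get { return false; }
        }

        public void ProcessRequest(HttpContext context)
        {
            // use _contentDirectory here
        }
    }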

    Read the article

  • Active Directory Services: PrincipalContext -- What is the DN of a "container" object?

    - by Ranger Pretzel
    I'm currently trying to authenticate via Active Directory Services using the PrincipalContext class. I would like to have my application authenticate to the domain using sealed and SSL contexts. In order to do this, I have to use the following constructor of PrincipalContext (link to MSDN page):

    public PrincipalContext(
        ContextType contextType,
        string name,
        string container,
        ContextOptions options
    )

    Specifically, I'm using the constructor like so:

    PrincipalContext domainContext = new PrincipalContext(
        ContextType.Domain,
        domain,
        container,
        ContextOptions.Sealing | ContextOptions.SecureSocketLayer);

    MSDN says about "container": "The container on the store to use as the root of the context. All queries are performed under this root, and all inserts are performed into this container. For Domain and ApplicationDirectory context types, this parameter is the distinguished name (DN) of a container object."

    What is the DN of a container object? How do I find out what my container object is? Can I query the Active Directory (or LDAP) server for this?
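    A short sketch of what that DN typically looks like in practice; the domain and container values below (example.com, CN=Users,DC=example,DC=com) are made-up placeholders rather than anything from the original post.

    using System.DirectoryServices.AccountManagement;

    class DnExample
    {
        static void Main()
        {
            // The container DN is an LDAP distinguished name under the domain's
            // root, e.g. the built-in Users container or a specific OU such as
            // "OU=Staff,DC=example,DC=com".
            using (var domainContext = new PrincipalContext(
                ContextType.Domain,
                "example.com",
                "CN=Users,DC=example,DC=com",
                ContextOptions.Sealing | ContextOptions.SecureSocketLayer))
            {
                // Authenticate a user located underneath that container.
                bool ok = domainContext.ValidateCredentials("someUser", "somePassword");
            }
        }
    }

    The DC=... portion can usually be discovered by reading the defaultNamingContext attribute of the directory's RootDSE entry (for example via new DirectoryEntry("LDAP://RootDSE")), so yes, the server itself can be queried for it.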

    Read the article

  • Is it possible to force an error in an Integration Services data flow to demonstrate its rollback?

    - by Matt
    I have been tasked with demoing how Integration Services handles an error during a data flow to show that no data makes it into the destination. This is an existing package and I want to limit the code changes to the package as much as possible (since this is most likely a one time deal). The scenario that is trying to be understood is a "systemic" failure - the source file disappears midstream, or the file server loses power, etc. I know I can make this happen by having the Error Output of the source set to Failure and introducing bad data but I would like to do something lighter than that. I suppose I could add a Script Transform task and look for a certain value and throw an error but I was hoping someone has come up with something easier / more elegant. Thanks, Matt
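    For what it's worth, a minimal sketch of the Script Transformation idea mentioned above: a script component in the data flow that raises an error, and therefore fails the data flow, when it sees an agreed-upon sentinel value. The column name TriggerColumn is hypothetical, and the snippet only compiles inside the SSIS-generated script component project, where UserComponent and Input0Buffer are auto-generated.

    // SSIS Script Component (Transformation): ScriptMain derives from the
    // auto-generated UserComponent class in the component's project.
    public class ScriptMain : UserComponent
    {
        public override void Input0_ProcessInputRow(Input0Buffer Row)
        {
            // Fail the data flow when the sentinel value shows up mid-stream.
            if (!Row.TriggerColumn_IsNull && Row.TriggerColumn == "FAIL_NOW")
            {
                bool cancel;
                ComponentMetaData.FireError(0, "ScriptMain",
                    "Simulated systemic failure for the rollback demo.",
                    string.Empty, 0, out cancel);
                throw new System.Exception("Forcing the data flow to fail.");
            }
        }
    }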

    Read the article

  • Where to find great proxy servers for testing GeoIP services?

    - by Andreas
    We would like to test a GeoIP service. To do that, we need to hit the site with an IP address from another country. There are a lot of free proxy lists, like http://nntime.com/proxy-country/. The problem with them is that only the CoDeeN proxies are working, but with CoDeeN you can't select your country of origin (the same as with Tor) - you get redirected to a random proxy in the network. Where can I find good proxy servers for testing GeoIP services? Free proxy servers would be great, but if they cost something small, that doesn't matter.

    Read the article

  • WCF Rest services for use with the repository pattern?

    - by mark smith
    Hi there, I am considering moving my service layer and my data layer (repository pattern) to a WCF REST service. So basically I would have my software installed locally (a WPF client), which would call the service layer exposed as a REST service. The service layer would then call my data layer, either via another WCF REST service or simply via the DLL assembly. I was hoping to understand what the performance would be like. Currently I have my data layer and service layer installed locally as DLL assemblies on the PC. Also, I presume WCF REST services won't support method overloading, i.e. the same method name with a different signature? I would really appreciate any feedback anyone can give. Thanks
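    A small sketch of how overloads are usually handled in a REST-style WCF contract: either distinct operation names, or CLR overloads disambiguated with the Name property and separate UriTemplates. The contract and type names below are made up for illustration.

    using System.Runtime.Serialization;
    using System.ServiceModel;
    using System.ServiceModel.Web;

    [DataContract]
    public class Customer
    {
        [DataMember] public string Id { get; set; }
        [DataMember] public string Name { get; set; }
    }

    [ServiceContract]
    public interface ICustomerResource
    {
        // CLR overloads are only allowed when each operation gets a unique Name,
        // and in REST terms each also needs its own UriTemplate.
        [OperationContract(Name = "GetCustomerById")]
        [WebGet(UriTemplate = "customers/{id}")]
        Customer Get(string id);

        [OperationContract(Name = "GetCustomerByName")]
        [WebGet(UriTemplate = "customers?first={first}&last={last}")]
        Customer Get(string first, string last);
    }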

    Read the article

  • How can I upload a file to a Sharepoint Document Library using Silverlight and client web-services?

    - by pclem12
    Most of the solutions I've come across for Sharepoint doc library uploads use the HTTP "PUT" method, but I'm having trouble finding a way to do this in Silverlight because it has restrictions on the HTTP Methods. I visited this http://msdn.microsoft.com/en-us/library/dd920295(VS.95).aspx to see how to allow PUT in my code, but I can't find how that helps you use an HTTP "PUT". I am using client web-services, so that limits some of the Sharepoint functions available. That leaves me with these questions: Can I do an http PUT in Silverlight? If I can't or there is another better way to upload a file, what is it? Thanks

    Read the article

  • SQL Server Reporting Services 2008: How to set the credentials property properly?

    - by wgpubs
    No matter how I configure the Credentials property, I get a 401 exception when I try to render the report. Here is my (latest) code:

    var rs = new ReportExecutionService();
    rs.Url = "https://myserver/reportserver/reportexecution2005.asmx";
    var myCache = new System.Net.CredentialCache();
    myCache.Add(new Uri(rs.Url), "kerberos",
        new System.Net.NetworkCredential("username", "password", "Domain"));
    rs.Credentials = myCache;

    The URL and credentials are all correct, but I am still getting a 401 when I call rs.Render(...). The Reporting Services install is sitting on a Windows Server 2008 box and requires integrated authentication. Thanks
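    A couple of commonly tried variations for 401s against the ReportExecution2005 endpoint are sketched below: passing the NetworkCredential directly, or registering it under the Negotiate scheme rather than the literal "kerberos" string. Whether either matches the server's configured authentication is an assumption to verify against the actual environment.

    var rs = new ReportExecutionService();
    rs.Url = "https://myserver/reportserver/reportexecution2005.asmx";

    // Option 1: hand the account straight to the proxy.
    rs.Credentials = new System.Net.NetworkCredential("username", "password", "DOMAIN");

    // Option 2: a CredentialCache entry keyed to the negotiated scheme.
    var cache = new System.Net.CredentialCache();
    cache.Add(new Uri(rs.Url), "Negotiate",
              new System.Net.NetworkCredential("username", "password", "DOMAIN"));
    rs.Credentials = cache;
    rs.PreAuthenticate = true;

    rs.Credentials = System.Net.CredentialCache.DefaultCredentials is the other usual variant when the calling Windows identity itself should be used.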

    Read the article

  • How to create Web services and link it to Smart device Application?

    - by Crimsonland
    I am a fresh graduate hired to develop smart device applications. We use Datalogic Memor handhelds running Windows CE 5.0. Even though I have only novice skills in VB.NET programming, I just finished my project and applications for the Datalogic Memor, in which data is saved to a text file or a SQL Server Compact database on the handheld device, and an ActiveSync connection is used to pass data from the handheld to the desktop PC and vice versa. I have now shifted to C#, started learning it, and successfully ported my smart device project from VB to C#. Now I want to start a project linking the device to a server using web services, but I don't have any idea how to begin.
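    For orientation, a minimal sketch of the usual pattern on the .NET Compact Framework: add a Web Reference to an ASMX service hosted on the server and call the generated proxy over the device's network connection. The proxy and method names below (ServerServices.InventoryService, SaveRecord) are hypothetical placeholders for whatever Visual Studio generates from the actual service.

    using System;

    public class SyncToServer
    {
        public void Upload(string itemCode, int quantity)
        {
            // "InventoryService" stands in for the proxy class generated when a
            // Web Reference named ServerServices is added to the server's .asmx URL.
            var service = new ServerServices.InventoryService();
            service.Url = "http://server/InventoryService.asmx";

            // A plain SOAP call over the network replaces the old
            // ActiveSync text-file transfer.
            bool saved = service.SaveRecord(itemCode, quantity, DateTime.Now);
        }
    }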

    Read the article

  • Quality Design for Asynchronous WCF Services Calls in a Middle-Tier and Returning Data to UI Tier

    - by Perplexed
    I have a WPF application with a group of asynchronous WCF service calls all mashed into the code behind, complete with event handlers and everything, that I have to refactor to productionize and maintain. I want to separate concerns here for maintainability and all the other good reasons to do this, but I'm not sure exactly how to achieve this. Anybody have any good ideas on how to do this, or at least some links to put me in the right direction? My thinking: Create an "infrastructure" layer and reference the services there. Move the asynchronous event handlers into this layer. When an update is called, I will bubble up my own event with my own derivation of the EventArgs class that contains the data the UI will need. I'll have a fairly coupled hooking of the UI to the infrastructure layer as it will consume events I fire off upon completion of an asynchronous data call.
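    A rough sketch of the infrastructure-layer idea described above: a gateway class owns the generated async proxy, handles its Completed event, and re-raises a simpler event with custom EventArgs for the UI layer to consume. The proxy types here are hypothetical stand-ins for whatever the real service reference generates.

    using System;
    using System.Collections.Generic;

    // Stand-ins for what the generated WCF async proxy would provide; in the real
    // project these come from the service reference (names are hypothetical).
    public class GetOrdersCompletedEventArgs : EventArgs
    {
        public GetOrdersCompletedEventArgs(IList<string> result, Exception error)
        {
            Result = result;
            Error = error;
        }
        public IList<string> Result { get; private set; }
        public Exception Error { get; private set; }
    }

    public class OrderServiceClient
    {
        public event EventHandler<GetOrdersCompletedEventArgs> GetOrdersCompleted;

        public void GetOrdersAsync()
        {
            // The real proxy would call the service asynchronously.
            var handler = GetOrdersCompleted;
            if (handler != null)
                handler(this, new GetOrdersCompletedEventArgs(new List<string> { "Order-1001" }, null));
        }
    }

    // What the UI layer actually consumes.
    public class OrdersLoadedEventArgs : EventArgs
    {
        public OrdersLoadedEventArgs(IList<string> orders) { Orders = orders; }
        public IList<string> Orders { get; private set; }
    }

    // Infrastructure layer: owns the proxy, hides its event plumbing, and bubbles
    // up a simpler event carrying only what the WPF view needs.
    public class OrderServiceGateway
    {
        private readonly OrderServiceClient _client = new OrderServiceClient();

        public event EventHandler<OrdersLoadedEventArgs> OrdersLoaded;

        public OrderServiceGateway()
        {
            _client.GetOrdersCompleted += OnGetOrdersCompleted;
        }

        public void BeginLoadOrders()
        {
            _client.GetOrdersAsync();
        }

        private void OnGetOrdersCompleted(object sender, GetOrdersCompletedEventArgs e)
        {
            if (e.Error != null) return; // report failures through a separate event in real code

            var handler = OrdersLoaded;
            if (handler != null)
                handler(this, new OrdersLoadedEventArgs(e.Result));
        }
    }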

    Read the article

  • Mac: How to see list of running network services?

    - by Jordan
    I am writing an application that needs to connect with a running network service on a Mac. Problem is, I have no idea what the service is called or even what port it uses. Is there a way to browse all running network services on my Mac? More info: I am connecting to a MIDI network session (found under 'Audio MIDI settings', present on all OSX installs). Am I correct in thinking this is a network service? I am planning to use NSNetServiceBrowser to locate all local computers running this service. (is this the best way to go about it?) Any help is much appreciated - thanks!

    Read the article

  • iPhone / Android: what protocol stacks do apps use for connecting to centralised services?

    - by Richard
    Hi all, apologies for the ignorant question - I have no experience with app development on any mobile platform. Basically, what I want to know is: what communication protocols do apps typically use for accessing/querying centralised services? E.g., if I port a web app/service to iPhone/Android, how would I typically access/query this web service in my app? Is it over HTTP, or are there other protocols? Also, presumably the GUI of an app is constructed with Apple/Android GUI libraries (in Java? Cocoa?). Can an app GUI be defined with HTML/JavaScript like a web page? Sorry again for the pure noob questions. Thanks

    Read the article

  • Can SQL Server 2008 Express be used for offering commercial web hosting services?

    - by Tony_Henrich
    I read that SQL Server 2008 Express R2 database size limit has increased to 10G. That's good news. Can I use the Express edition of SQL Server to offer web hosting services to the public? Microsoft should be best in answering this but I can't find a clear answer on their site. I am also seeing several Windows web hosting plans include SQL Server as a total package for less than $5/month. I am wondering how they can afford to offer this.

    Read the article

  • Is there a (C#) library that will create feeds for Amazon Marketplace Web Services?

    - by Josh Kodroff
    Does anyone know of a library out there (preferably in C#) that will take classes and generate XML or flat files suitable for feeds to Amazon Marketplace Web Services? In other words, I'd like to do something like this:

    var feed = new AmazonProductFeed();
    var list = new AmazonProductList();

    var product1 = new AmazonProduct();
    product1.Name = "Product 1";
    list.Add(product1);

    var product2 = new AmazonProduct();
    product2.Name = "Product 2";
    list.Add(product2);

    feed.Products = list;

    // spits out XML compliant with Amazon's schema
    Console.Write(feed.ToXml());

    It looks like the only code Amazon provides is wrappers for the web service itself and the directory-based transport utility (AMTU).
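    Absent such a library, the envelope itself is simple enough to emit with LINQ to XML. Below is a hedged sketch that produces a roughly Product-feed-shaped document: the element set is simplified, the merchant identifier is a placeholder, and a real feed would still need to validate against Amazon's amzn-envelope.xsd.

    using System;
    using System.Linq;
    using System.Xml.Linq;

    class FeedBuilder
    {
        static void Main()
        {
            var skus = new[] { "SKU-001", "SKU-002" };

            // Simplified AmazonEnvelope for a Product feed; real feeds carry many
            // more fields and must validate against Amazon's XSDs.
            var envelope = new XElement("AmazonEnvelope",
                new XElement("Header",
                    new XElement("DocumentVersion", "1.01"),
                    new XElement("MerchantIdentifier", "YOUR_MERCHANT_ID")),   // placeholder
                new XElement("MessageType", "Product"),
                skus.Select((sku, i) =>
                    new XElement("Message",
                        new XElement("MessageID", i + 1),
                        new XElement("OperationType", "Update"),
                        new XElement("Product",
                            new XElement("SKU", sku),
                            new XElement("DescriptionData",
                                new XElement("Title", "Product " + (i + 1)))))));

            Console.Write(envelope.ToString());
        }
    }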

    Read the article

  • Five Ways Enterprise 2.0 Can Transform Your Business - Q&A from the Webcast

    - by [email protected]
    A few weeks ago, Vince Casarez and I presented with KMWorld on the Five Ways Enterprise 2.0 Can Transform Your Business. It was an enjoyable, interactive webcast in which Vince and I discussed the ways Enterprise 2.0 can transform your business and more importantly, highlighted key customer examples of how to do so. If you missed the webcast, you can catch a replay here. We had a lot of audience participation in some of the polls we conducted and in the Q&A session. We weren't able to address all of the questions during the broadcast, so we attempted to answer them here: Q: Which area within your firm focuses on Web 2.0? Meaning, do you find new departments developing just to manage the web 2.0 (Twitter, Facebook, etc.) user experience or are you structuring current departments? A: There are three distinct efforts within Oracle. The first is around delivery of these Web 2.0 services for enterprise deployments. This is the focus of the WebCenter team. The second effort is injecting these Web 2.0 services into use cases that drive the different enterprise applications. This effort is focused on how to manage these external services and bring them into a cohesive flow for marketing programs, customer care, and purchasing. The third effort is how we consume these services internally to enhance Oracle's business delivery. It leverages the technologies and use cases of the first two but also pushes the envelope with regards to future directions of these other two areas. Q: In a business, Web 2.0 is mostly like action logs. How can we leverage the official process practice versus the logs of a recent action? Example: a system configuration modified last night on a call out versus the official practice that everybody would use in the morning.A: The key thing to remember is that most Web 2.0 actions / activity streams today are based on collaboration and communication type actions. At least with public social sites like Facebook and Twitter. What we're delivering as part of the WebCenter Suite are not just these types of activities but also enterprise application activities. These enterprise application activities come from different application modules: purchasing, HR, order entry, sales opportunity, etc. The actions within these systems are normally tied to a business object or process: purchase order/customer, employee or department, customer and supplier, customer and product, respectively. Therefore, the activities or "logs" as you name them are able to be "typed" so that as a viewer, you can filter or decide to see only certain types of information. In your example, you could have a view that only showed you recent "configuration" changes and this could be right next to a view that showed off the items to be watched every morning. Q: It's great to hear about customers using the software but is there any plan for future webinars to show what the products/installs look like? That would be very helpful.A: We don't have a webinar planned to show off the install process. However, we have a viewlet that's posted on Oracle Technology Network. 
You can see it here: http://www.oracle.com/technetwork/testcontent/wcs-install-098014.html
    And we've got excellent documentation that walks you through the steps here: http://download.oracle.com/docs/cd/E14571_01/install.1111/e12001/install.htm
    And there's a whole set of demos and examples of what WebCenter can do at this URL: http://www.oracle.com/technetwork/middleware/webcenter/release11-demos-097468.html

    Q: How do you anticipate managing metadata across the enterprise to make content findable?
    A: We need to first make sure we are all talking about the same thing when we use a word like "metadata". Here's why... For a developer, metadata means information that describes key elements of the portal or application and what the portal or application can do. For content systems, metadata means key terms that provide a taxonomy or folksonomy about the information that is being indexed, ordered, and managed. For business intelligence systems, metadata means key terms that provide labels to groups of data that most non-mathematicians need to understand. And for SOA, metadata means labels for parts of the processes that business owners should understand that connect development terminology. There are also additional requirements for metadata to be available to the team building these new solutions as well as requirements to make this metadata available to the running system. These requirements are often separated by "design time" and "run time" respectively. So clearly, a general goal of managing metadata across the enterprise is very challenging. We've invested a huge amount of resources around Oracle Metadata Services (MDS) to be able to provide a more generic system for all of these elements. No other vendor has anything like this technology foundation in their products. This provides a huge benefit to our customers as they will now be able to find content, processes, people, and information from a common set of search interfaces with consistent enterprise wide results.

    Q: Can you give your definition of terms as to document and content, please?
    A: Content applies to a broad category of information from Word documents, presentations and reports through attachments to invoices and/or purchase orders. Content is essentially any type of digital asset including images, video, and voice. A document is just one type of content.

    Q: Do you have special integration tools to realize an interaction between UCM and WebCenter Spaces/Services?
    A: Yes, we've dedicated a whole team of engineers to exploit the key features of Oracle UCM within WebCenter. While ensuring that WebCenter can connect to other non-Oracle systems, we've made sure that with the combined set of Oracle technology, no other solution can match the combined power and integration. This is part of the Oracle Fusion Middleware strategy, which is to provide best in class capabilities for Content and Portals. When combined together, the synergy between the two products enables users to quickly add capabilities when they are needed. For example, simple document sharing is part of the combined product offering, but if legal discovery or archiving is required, Oracle UCM includes these capabilities that can be quickly added. There's no need to move content around or add another system to support this; it's just a feature that gets turned on within Oracle UCM.
Q: All customers have some interaction with their applications and have many older versions, how do you see some of these new Enterprise 2.0 capabilities adding value to existing enterprise application deployments?A: Just as Service Oriented Architectures allowed for connecting the processes of different applications systems to work together, there's a need for a similar approach with regards to these enterprise 2.0 capabilities. Oracle WebCenter is built on a core architecture that allows for SOA of these Enterprise 2.0 services so that one set of scalable services can be used and integrated directly into any type of application. In this way, users can get immediate value out of the Enterprise 2.0 capabilities without having to wait for the next major release or upgrade. These centrally managed WebCenter services expose a set of standard interfaces that make it extremely easy to add them into existing applications no matter what technology the application has been implemented. Q: We've heard about Oracle Next Generation applications called "Fusion Applications", can you tell me how all this works together?A: Oracle WebCenter powers the core collaboration and social computing services found within Fusion Applications. It is the core user experience technology for how all the application screens have been implemented. And the core concept of task flows allows for all the Fusion Applications modules to be adaptable and composable by business users and IT without needing to be a professional developer. Oracle WebCenter is at the heart of the new Fusion Applications. In addition, the same patterns and technologies are now being added to the existing applications including JD Edwards, Siebel, Peoplesoft, and eBusiness Suite. The core technology enables all these customers to have a much smoother upgrade path to Fusion Applications. They get immediate benefits of injecting new user interactions into their existing applications without having to completely move to Fusion Applications. And then when the time comes, their users will already be well versed in how the new capabilities work. Q: Does any of this work with non Oracle software? Other databases? Other application servers? etc.A: We have made sure that Oracle WebCenter delivers the broadest set of development choices so that no matter what technology you developers are using, WebCenter capabilities can be quickly and easily added to the site or application. In addition, we have certified Oracle WebCenter to run against non-Oracle databases like DB2 and SQLServer. We have stated plans for certification against MySQL as well. Later in CY 2011, Oracle will provide certification on non-Oracle application servers such as WebSphere and JBoss. Q: How do we balance User and IT requirements in regards to Enterprise 2.0 technologies?A: Wrong decisions are often made because employee knowledge is not tapped efficiently and opportunities to innovate are often missed because the right people do not work together. Collaboration amongst workers in the right business context is critical for success. While standalone Enterprise 2.0 technologies can improve collaboration for collaboration's sake, using social collaboration tools in the context of business applications and processes will improve business responsiveness and lead companies to a more competitive position. As these systems become more mission critical it is essential that they maintain the highest level of performance and availability while scaling to support larger communities. 
Q: What are the ways in which Enterprise 2.0 can improve business responsiveness?A: With a wide range of Enterprise 2.0 tools in the marketplace, CIOs need to deploy solutions that will meet the requirements from users as well as address the requirements from IT. Workers want a next-generation user experience that is personalized and aggregates their daily tools and tasks, while IT needs to ensure the solution is secure, scalable, flexible, reliable and easily integrated with existing systems. An open and integrated approach to deploying portals, content management, and collaboration can enhance your business by addressing both the needs of knowledge workers for better information and the IT mandate to conserve resources by simplifying, consolidating and centralizing infrastructure and administration.  

    Read the article

  • Windows Azure – Write, Run or Use Software

    - by BuckWoody
    Windows Azure is a platform that has you covered, whether you need to write software, run software that is already written, or Install and use “canned” software whether you or someone else wrote it. Like any platform, it’s a set of tools you can use where it makes sense to solve a problem. The primary location for Windows Azure information is located at http://windowsazure.com. You can find everything there from the development kits for writing software to pricing, licensing and tutorials on all of that. I have a few links here for learning to use Windows Azure – although it’s best if you focus not on the tools, but what you want to solve. I’ve got it broken down here into various sections, so you can quickly locate things you want to know. I’ll include resources here from Microsoft and elsewhere – I use these same resources in the Architectural Design Sessions (ADS) I do with my clients worldwide. Write Software Also called “Platform as a Service” (PaaS), Windows Azure has lots of components you can use together or separately that allow you to write software in .NET or various Open Source languages to work completely online, or in partnership with code you have on-premises or both – even if you’re using other cloud providers. Keep in mind that all of the features you see here can be used together, or independently. For instance, you might only use a Web Site, or use Storage, but you can use both together. You can access all of these components through standard REST API calls, or using our Software Development Kit’s API’s, which are a lot easier. In any case, you simply use Visual Studio, Eclipse, Cloud9 IDE, or even a text editor to write your code from a Mac, PC or Linux.  Components you can use: Azure Web Sites: Windows Azure Web Sites allow you to quickly write an deploy websites, without setting a Virtual Machine, installing a web server or configuring complex settings. They work alone, with other Windows Azure Web Sites, or with other parts of Windows Azure. Web and Worker Roles: Windows Azure Web Roles give you a full stateless computing instance with Internet Information Services (IIS) installed and configured. Windows Azure Worker Roles give you a full stateless computing instance without Information Services (IIS) installed, often used in a "Services" mode. Scale-out is achieved either manually or programmatically under your control. Storage: Windows Azure Storage types include Blobs to store raw binary data, Tables to use key/value pair data (like NoSQL data structures), Queues that allow interaction between stateless roles, and a relational SQL Server database. Other Services: Windows Azure has many other services such as a security mechanism, a Cache (memcacheD compliant), a Service Bus, a Traffic Manager and more. Once again, these features can be used with a Windows Azure project, or alone based on your needs. Various Languages: Windows Azure supports the .NET stack of languages, as well as many Open-Source languages like Java, Python, PHP, Ruby, NodeJS, C++ and more.   Use Software Also called “Software as a Service” (SaaS) this often means consumer or business-level software like Hotmail or Office 365. In other words, you simply log on, use the software, and log off – there’s nothing to install, and little to even configure. For the Information Technology professional, however, It’s not quite the same. We want software that provides services, but in a platform. That means we want things like Hadoop or other software we don’t want to have to install and configure.  
Components you can use: Kits: Various software “kits” or packages are supported with just a few clicks, such as Umbraco, Wordpress, and others. Windows Azure Media Services: Windows Azure Media Services is a suite of services that allows you to upload media for encoding, processing and even streaming – or even one or more of those functions. We can add DRM and even commercials to your media if you like. Windows Azure Media Services is used to stream large events all the way down to small training videos. High Performance Computing and “Big Data”: Windows Azure allows you to scale to huge workloads using a few clicks to deploy Hadoop Clusters or the High Performance Computing (HPC) nodes, accepting HPC Jobs, Pig and Hive Jobs, and even interfacing with Microsoft Excel. Windows Azure Marketplace: Windows Azure Marketplace offers data and programs you can quickly implement and use – some free, some for-fee.   Run Software Also known as “Infrastructure as a Service” (IaaS), this offering allows you to build or simply choose a Virtual Machine to run server-based software.  Components you can use: Persistent Virtual Machines: You can choose to install Windows Server, Windows Server with Active Directory, with SQL Server, or even SharePoint from a pre-configured gallery. You can configure your own server images with standard Hyper-V technology and load them yourselves – and even bring them back when you’re done. As a new offering, we also even allow you to select various distributions of Linux – a first for Microsoft. Windows Azure Connect: You can connect your on-premises networks to Windows Azure Instances. Storage: Windows Azure Storage can be used as a remote backup, a hybrid storage location and more using software or even hardware appliances.   Decision Matrix With all of these options, you can use Windows Azure to solve just about any computing problem. It’s often hard to know when to use something on-premises, in the cloud, and what kind of service to use. I’ve used a decision matrix in the last couple of years to take a particular problem and choose the proper technology to solve it. It’s all about options – there is no “silver bullet”, whether that’s Windows Azure or any other set of functions. I take the problem, decide which particular component I want to own and control – and choose the column that has that box darkened. For instance, if I have to control the wiring for a solution (a requirement in some military and government installations), that means the “Networking” component needs to be dark, and so I select the “On Premises” column for that particular solution. If I just need the solution provided and I want no control at all, I can look as “Software as a Service” solutions. Security, Pricing, and Other Info  Security: Security is one of the first questions you should ask in any distributed computing environment. We have certification info, coding guidelines and more, even a general “Request for Information” RFI Response already created for you.   Pricing: Are there licenses? How much does this cost? Is there a way to estimate the costs in this new environment? New Features: Many new features were added to Windows Azure - a good roundup of those changes can be found here. Support: Software Support on Virtual Machines, general support.    

    Read the article

  • Howto WCF Service HTTPS Binding and Endpoint Configuration in IIS with Load Balancer?

    - by Mike G
    We have a WCF service that is being hosted on a set of 12 machines. There is a load balancer that is a gateway to these machines. The site is set up for SSL, as in a user accesses it through an https URL. I know this much: the URL that addresses the site is https, but none of the servers has an https binding or is set up to require SSL. This leads me to believe that the load balancer handles the https and the connections from the balancer to the servers are unencrypted (this takes place behind the firewall, so no biggie there). The problem we're having is that when a Silverlight client tries to access a WCF service, it gets a "Not Found" error. I've set up a test site along with our developer machines and have made sure that the bindings and endpoints in the web.config work with the client. It is only in the production environment that we get this error. Is there anything wrong with the following web.config? Should we be setting up how https is handled in a different manner? We're at a loss on this currently, since I've tried every programmatic solution with endpoints and bindings, and none of the solutions I have found deal with a load balancer in the manner we're dealing with.

    Web.config service model info:

    <system.serviceModel>
      <behaviors>
        <serviceBehaviors>
          <behavior name="TradePMR.OMS.Framework.Services.CRM.CRMServiceBehavior">
            <serviceMetadata httpsGetEnabled="true" />
            <serviceDebug includeExceptionDetailInFaults="false" />
          </behavior>
          <behavior name="TradePMR.OMS.Framework.Services.AccountAggregation.AccountAggregationBehavior">
            <serviceMetadata httpsGetEnabled="true" />
            <serviceDebug includeExceptionDetailInFaults="false" />
          </behavior>
        </serviceBehaviors>
      </behaviors>
      <bindings>
        <customBinding>
          <binding name="SecureCRMCustomBinding">
            <binaryMessageEncoding />
            <httpsTransport />
          </binding>
          <binding name="SecureAACustomBinding">
            <binaryMessageEncoding />
            <httpsTransport />
          </binding>
        </customBinding>
        <mexHttpsBinding>
          <binding name="SecureMex" />
        </mexHttpsBinding>
      </bindings>
      <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
      <!--Defines the services to be used in the application-->
      <services>
        <service behaviorConfiguration="TradePMR.OMS.Framework.Services.CRM.CRMServiceBehavior"
                 name="TradePMR.OMS.Framework.Services.CRM.CRMService">
          <endpoint address="" binding="customBinding" bindingConfiguration="SecureCRMCustomBinding"
                    contract="TradePMR.OMS.Framework.Services.CRM.CRMService" name="SecureCRMEndpoint" />
          <!--This is required in order to be able to use the "Update Service Reference" in the Silverlight application-->
          <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
        </service>
        <service behaviorConfiguration="TradePMR.OMS.Framework.Services.AccountAggregation.AccountAggregationBehavior"
                 name="TradePMR.OMS.Framework.Services.AccountAggregation.AccountAggregation">
          <endpoint address="" binding="customBinding" bindingConfiguration="SecureAACustomBinding"
                    contract="TradePMR.OMS.Framework.Services.AccountAggregation.AccountAggregation" name="SecureAAEndpoint" />
          <!--This is required in order to be able to use the "Update Service Reference" in the Silverlight application-->
          <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
        </service>
      </services>
    </system.serviceModel>

    The ServiceReferences.ClientConfig looks like this:

    <configuration>
      <system.serviceModel>
        <bindings>
          <customBinding>
            <binding name="StandardAAEndpoint">
              <binaryMessageEncoding />
              <httpTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" />
            </binding>
            <binding name="SecureAAEndpoint">
              <binaryMessageEncoding />
              <httpsTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" />
            </binding>
            <binding name="StandardCRMEndpoint">
              <binaryMessageEncoding />
              <httpTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" />
            </binding>
            <binding name="SecureCRMEndpoint">
              <binaryMessageEncoding />
              <httpsTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" />
            </binding>
          </customBinding>
        </bindings>
        <client>
          <endpoint address="https://Service2.svc" binding="customBinding" bindingConfiguration="SecureAAEndpoint"
                    contract="AccountAggregationService.AccountAggregation" name="SecureAAEndpoint" />
          <endpoint address="https://Service1.svc" binding="customBinding" bindingConfiguration="SecureCRMEndpoint"
                    contract="CRMService.CRMService" name="SecureCRMEndpoint" />
        </client>
      </system.serviceModel>
    </configuration>

    (The addresses are of no consequence, since those are dynamically built so that they will point to a dev's machine or to the production server.)
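    One pattern that comes up for SSL-terminating load balancers is to keep httpsTransport only in the client configuration and let the nodes behind the balancer listen over plain httpTransport with binary encoding, since traffic from the balancer onward is unencrypted. Purely as an illustration of that idea (not a verified fix for this environment), the server-side binding expressed in code would look roughly like this, with hypothetical contract and address names:

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Channels;

    class HostSketch
    {
        static void Main()
        {
            // Behind the balancer the node listens over plain HTTP; the balancer
            // owns the https:// address that the Silverlight clients actually see.
            var host = new ServiceHost(typeof(CRMService),
                new Uri("http://internal-node:8000/wcfservices"));

            var binding = new CustomBinding(
                new BinaryMessageEncodingBindingElement(),
                new HttpTransportBindingElement
                {
                    MaxReceivedMessageSize = 2147483647,
                    MaxBufferSize = 2147483647
                });

            host.AddServiceEndpoint(typeof(ICRMService), binding, "");
            host.Open();
            Console.ReadLine();
            host.Close();
        }
    }

    // Hypothetical stand-ins for the real CRM contract and implementation.
    [ServiceContract]
    interface ICRMService { [OperationContract] string Ping(); }
    class CRMService : ICRMService { public string Ping() { return "pong"; } }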

    Read the article

  • Queued Loadtest to remove Concurrency issues using Shared Data Service in OpenScript

    - by stefan.thieme(at)oracle.com
    Queued Processing to remove Concurrency issues in Loadtest ScriptsSome scripts act on information returned by the server, e.g. act on first item in the returned list of pending tasks/actions. This may lead to concurrency issues if the virtual users simulated in a load test scenario are not synchronized in some way.As the load test cases should be carried out in a comparable and straight forward manner simply cancel a transaction in case a collision occurs is clearly not an option. In case you increase the number of virtual users this approach would lead to a high number of requests for the early steps in your transaction (e.g. login, retrieve list of action points, assign an action point to the virtual user) but later steps would be rarely visited successfully or at all, depending on the application logic.A way to tackle this problem is to enqueue the virtual users in a Shared Data Service queue. Only the first virtual user in this queue will be allowed to carry out the critical steps (retrieve list of action points, assign an action point to the virtual user) in your transaction at any one time.Once a virtual user has passed the critical path it will dequeue himself from the head of the queue and continue with his actions. This does theoretically allow virtual users to run in parallel all steps of the transaction which are not part of the critical path.In practice it has been seen this is rarely the case, though it does not allow adding more than N users to perform a transaction without causing delays due to virtual users waiting in the queue. N being the time of the total transaction divided by the sum of the time of all critical steps in this transaction.While this problem can be circumvented by allowing multiple queues to act on individual segments of the list of actions, e.g. per country filter, ends with 0..9 filter, etc.This would require additional handling of these additional queues of slots for the virtual users at the head of the queue in order to maintain the mutually exclusive access to the first element in the list returned by the server at any one time of the load test. Such an improved handling of multiple queues and/or multiple slots is above the subject of this paper.Shared Data Services Pre-RequisitesStart WebLogic Server to host Shared Data ServicesYou will have to make sure that your WebLogic server is installed and started. Shared Data Services may not work if you installed only the minimal installation package for OpenScript. If however you installed the default package including OLT and OTM, you may follow the instructions below to start and verify WebLogic installation.To start the WebLogic Server deployed underneath of Oracle Load Testing and/or Oracle Test Manager you can go to your Start menu, Oracle Application Testing Suite and select the Restart Oracle Application Testing Suite Application Service entry from the Tools submenu.To verify the service has been started you can run the Microsoft Management Console for Services by Selecting Run from the Start Menu and entering services.msc. Look for the entry that reads Oracle Application Testing Suite Application Service, once it has changed it status from Starting to Started you can proceed to verify the login. Please note that this may take several minutes, I would say up to 10 minutes depending on the strength of your CPU horse-power.Verify WebLogic Server user credentialsYou will have to make sure that your WebLogic Server is installed and started. 
Verify WebLogic Server user credentials

Next, open the Oracle WebLogic Server Administration Console at http://localhost:8088/console. It may take a while until the application is deployed and started; the page may show a deployment message until the Administration Console has been deployed on the fly. Afterwards you can log in with the username oats and the password that you selected at install time for administrative purposes of your Application Testing Suite.

This brings up the Home page of your WebLogic Server, and by reaching it you have already verified that you can log in with these credentials. If you want to check the details, navigate to Security Realms, myrealm, Users and Groups tab. Here you could add users to your WebLogic Server to be used in the later steps. The groups required for such a custom user to work are beyond this quick overview; consult the WebLogic Server Administration Console documentation listed in the references.

Shared Data Services pre-requisites for load testing

For load testing with Shared Data Services, the OpenScript preferences have to be set to enable encryption and to provide a default Shared Data Service connection for playback. Please note that using the Connection Parameters (the individual directive in the script) for Shared Data Services did not play back reliably in the current version 9.20.0370 of Oracle Load Testing (OLT), and encryption of credentials still appeared to be mandatory as well.

General Encryption settings

Select OpenScript Preferences from the View menu and navigate to the General, Encryption entry in the tree on the left. Select the Encrypt script data option from the list and enter the same password that you used for securing your WebLogic Server Administration Console.

Enable global shared data access credentials

Select OpenScript Preferences from the View menu and navigate to the Playback, Shared Data entry in the tree on the left. Enable the global shared data access credentials and enter the Address, User name and Password determined for your WebLogic Server hosting Shared Data Services. Please note that you may want to replace localhost in the Address with the host's real name if you plan to run load tests with load test agents on remote systems.

Queued Processing of Transactions

Enable Shared Data Services Module in Script Properties

The Shared Data Services module has to be enabled for each script that wants to use the Shared Data Service queue functionality in OpenScript. It can be enabled by selecting Script Properties from the Script menu. On the Script Properties dialog, select the Modules section and check Shared Data to enable the Shared Data Service module for your script. Checking this option effectively adds a line to your script code that adds the sharedData ScriptService to your script class of IteratingVUserScript:

    @ScriptService oracle.oats.scripting.modules.sharedData.api.SharedDataService sharedData;
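For orientation, below is a minimal sketch of how the generated script class might look once the module is enabled. Only the @ScriptService line and the base class name IteratingVUserScript are taken from the text above; the import, class name and empty method bodies are assumptions based on a typical generated OpenScript script and may differ in your version.

    import oracle.oats.scripting.modules.basic.api.*;

    // Minimal OpenScript virtual-user script skeleton with the Shared Data module enabled.
    public class script extends IteratingVUserScript {

        // Added by OpenScript when "Shared Data" is checked in Script Properties.
        @ScriptService oracle.oats.scripting.modules.sharedData.api.SharedDataService sharedData;

        public void initialize() throws Exception {
            // one-time setup per virtual user, e.g. creating the shared queue
        }

        public void run() throws Exception {
            // recorded steps plus the queue handling described in the next sections
        }

        public void finish() throws Exception {
            // optional cleanup, e.g. clearing and destroying the queue
        }
    }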
Record your script

Record your script as usual, then add the queue handling described below in the initialize code block, before the first step and after the last step of your critical path, and in the finalize code block. The Java code to be added at the individual locations is explained in the following sections in detail.

Create a Shared Data Queue in Initialize

To create a Shared Data Queue, go to the Java view of your script and add the following statements to the initialize() code block:

    info("Create queueA with life time of 120 minutes");
    sharedData.createQueue("queueA", 120);

This creates an instance of the Shared Data Queue named queueA which is maintained for up to 120 minutes. If you want to use the code in multiple scripts, make sure to use a different queue name for each script, here and in the subsequent steps. You may even consider a dynamic queue name based on the filters of the result list that is being accessed concurrently.

Prepare a unique id for each Iteration

In order to keep track of individual virtual users in the queue, we create a unique identifier from the virtual user id and the username in use, right after retrieving the next record from the databank file:

    getDatabank("Usernames").getNextDatabankRecord();
    getVariables().set("usernameValue1",
        "VU_{{@vuid}}_{{@iterationnum}}_{{db.Usernames.Username}}_{{@timestamp}}_{{@random(10000)}}");
    String usernameValue = getVariables().get("usernameValue1");
    info("Now running virtual user " + usernameValue);

As you can see from the code block above, the OpenScript variable usernameValue1 is set to a concatenation of the virtual user id and the iteration number for general uniqueness, plus the username from the databank, the timestamp and a random number to make it further unique and to make errors easier to spot. Not all of these fields are strictly required for uniqueness; adding the queue name may also be considered to help troubleshoot multiple queues. The value is then retrieved with the getVariables().get() method call and assigned to the usernameValue String used throughout the script.

Please note that moving the getDatabank("Usernames").getNextDatabankRecord(); call to the initialize block was later considered in order to remove the concurrency of multiple virtual users running with the same user id and therefore accessing the same "My Inbox" in step 6. This effectively gives each virtual user its own user id from the databank file; make sure you have enough user ids to remove this second hurdle.
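Putting the pieces of this section together, the initialize() block might look roughly like the following sketch. It only combines statements already shown above; the placement of the databank call follows the note about moving it into initialize, and the method wrapper itself is an assumption.

    public void initialize() throws Exception {
        // Create (or attach to) the shared queue, kept alive for up to 120 minutes.
        info("Create queueA with life time of 120 minutes");
        sharedData.createQueue("queueA", 120);

        // Give every virtual user its own databank record to avoid the
        // "same user id, same My Inbox" collisions mentioned above.
        getDatabank("Usernames").getNextDatabankRecord();

        // Build the unique identifier that is read again in run() when enqueueing.
        getVariables().set("usernameValue1",
            "VU_{{@vuid}}_{{@iterationnum}}_{{db.Usernames.Username}}_{{@timestamp}}_{{@random(10000)}}");
    }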
Enqueue and attend Queue before Critical Path

To maintain the correct order of virtual users being allowed into the critical path of the transaction, the following pseudo step has to be added in front of the first critical step. In this example that is right in front of the step where we retrieve the list of actions from which we select the first one to be assigned to us.

    beginStep("[0] Waiting in the Queue", 0);
    {
        info("Enqueued virtual user " + usernameValue + " at the end of queueA");
        sharedData.offerLast("queueA", usernameValue);
        info("Wait until the user is the first in queueA");
        String queueValue1 = null;
        do {
            // we wait for at least 0.7 seconds before we check the head of the
            // queue. This is the time it takes one user to move through the
            // critical path, i.e. pass steps [5] Enter country and [6] Assign
            // to me
            Thread.sleep(700);
            queueValue1 = (String) sharedData.peekFirst("queueA");
            info("The first user in queueA is currently: '" + queueValue1 + "' "
                + queueValue1.getClass() + " length " + queueValue1.length());
            info("The current user is '" + usernameValue + "' " + usernameValue.getClass()
                + " length " + usernameValue.length()
                + ": indexOf " + usernameValue.indexOf(queueValue1)
                + " equals " + usernameValue.equals(queueValue1));
        } while (queueValue1.indexOf(usernameValue) < 0);
        info("Now the user is the first in queueA");
    }
    endStep();

This enqueues the username at the tail of the queue. The loop waits for at least 700 milliseconds, the time it takes one user to leave the critical path, and then compares the head of the queue with the current username. This is repeated while the two are not equal (indexOf less than zero). Once they are equal, indexOf yields a value of zero or larger and the virtual user proceeds with the critical steps.

Dequeue after Critical Path

After the virtual user has left the critical path and completed its last critical step, the following code dequeues the virtual user. In this example that is right after the action has actually been assigned to the virtual user. This allows the next virtual user to retrieve the list of actions still available and, in turn, make its own selection/assignment.

    info("Get and remove the current user from the head of queueA");
    String pollValue1 = (String) sharedData.pollFirst("queueA");

The current user is removed from the head of the queue. The next one is now able to match its username against the head of the queue.

Clear and Destroy Queue for Finish

When the script has completed, it should clear and destroy the queue. This code block can be put in the finish block of your script and/or in a separate script, in order to clear and remove the queue in case you have spotted an error or want to reset the queue for some other reason.

    info("Clear queueA");
    sharedData.clearQueue("queueA");
    info("Destroy queueA");
    sharedData.destroyQueue("queueA");

The users waiting in queueA are cleared and the queue is destroyed. If any scripts are still executing, they will be caught in a loop.

I found it better to maintain a separate Reset Queue script which contains only the following code in the initialize() block. I call this script to make sure the queue is cleared between multiple load test runs. It could also be added as the first script in a larger scenario, so that it executes only once at the very start of the load test and ensures the queues do not contain any stale entries.

    info("Create queueA with life time of 120 minutes");
    sharedData.createQueue("queueA", 120);
    info("Clear queueA");
    sharedData.clearQueue("queueA");

This creates a Shared Data Queue instance of queueA and clears all entries from it.
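To summarise the enqueue/dequeue pattern, the run() block can be organised roughly as sketched below. The queue calls are the ones shown above; the placeholder comments for the recorded steps are assumptions, and a null check on peekFirst() has been added so the wait loop does not fail with a NullPointerException if the queue is cleared while users are still waiting.

    public void run() throws Exception {
        String usernameValue = getVariables().get("usernameValue1");

        // ... non-critical recorded steps (e.g. login) run in parallel for all users ...

        beginStep("[0] Waiting in the Queue", 0);
        {
            // Enqueue this virtual user and wait until it reaches the head of the queue.
            sharedData.offerLast("queueA", usernameValue);
            String queueValue1 = null;
            do {
                Thread.sleep(700); // roughly the time one user needs for the critical path
                queueValue1 = (String) sharedData.peekFirst("queueA");
            } while (queueValue1 == null || queueValue1.indexOf(usernameValue) < 0);
        }
        endStep();

        // ... critical steps: retrieve the list of actions and assign the first one ...

        // Leave the critical path: remove this user from the head of the queue
        // so the next waiting virtual user may proceed.
        sharedData.pollFirst("queueA");

        // ... remaining non-critical steps ...
    }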
Monitoring Queue

While creating the scripts it was useful to monitor the contents of the queue, i.e. the current first user in the queue. The following code in the initialize() block makes sure the Shared Data Queue is accessible:

    info("Create queueA with life time of 120 minutes");
    sharedData.createQueue("queueA", 120);

In the run() block, the following code continuously monitors the first element of the queue and writes an informational message with the current username value to the Result window:

    info("Monitor the first users in queueA");
    String queueValue1 = null;
    do {
        queueValue1 = (String) sharedData.peekFirst("queueA");
        if (queueValue1 != null)
            info("The first user in queueA is currently: '" + queueValue1 + "' "
                + queueValue1.getClass() + " length " + queueValue1.length());
    } while (true);

This script can be run from OpenScript in parallel to a load test performed by Oracle Load Testing. However, it is not recommended to run it in a production load test, as the performance impact is unknown (a lower-frequency variant is sketched below, after the Debugging Queue section). Accessing the queue's head with the peekFirst() method has been observed to take about 2 seconds in both OpenScript and OLT. It is advised to log a Service Request to see whether this could be lowered in future releases of the Application Testing Suite, as pollFirst() and even offerLast(), which writes to the tail of the queue, usually returned after 0.1 seconds on average.

Debugging Queue

While debugging the scripts, the following was useful to remove single entries from the head of the queue, i.e. the current first user. Again, the following code in the initialize() block makes sure the Shared Data Queue is accessible:

    info("Create queueA with life time of 120 minutes");
    sharedData.createQueue("queueA", 120);

In the run() block, the following code removes the first element of the queue and writes an informational message with the removed username value to the Result window:

    info("Get and remove the current user from the head of queueA");
    String pollValue1 = (String) sharedData.pollFirst("queueA");
    info("The first user in queueA was currently: '" + pollValue1 + "' "
        + pollValue1.getClass() + " length " + pollValue1.length());
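If you want to keep the monitoring script shown above running for a longer time, a slightly gentler variant is to pause between polls, as sketched below. This is only a suggestion derived from the peekFirst() response times reported above, not something taken from the product documentation; adjust the sleep interval to your needs.

    // run() block of a stand-alone monitor script (queueA is created in initialize() as above)
    String queueValue1 = null;
    do {
        queueValue1 = (String) sharedData.peekFirst("queueA");
        if (queueValue1 != null)
            info("The first user in queueA is currently: '" + queueValue1 + "'");
        // Pause between polls so the Shared Data Service is not queried continuously.
        Thread.sleep(5000);
    } while (true);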
References

Oracle Functional Testing OpenScript User's Guide, Version 9.20 [E15488-05], Chapter 17 "Using the Shared Data Module", http://download.oracle.com/otn/nt/apptesting/oats-docs-9.21.0030.zip

Oracle Fusion Middleware Oracle WebLogic Server Administration Console Online Help, 11g Release 1 (10.3.4) [E13952-04], "Manage users and groups", http://download.oracle.com/docs/cd/E17904_01/apirefs.1111/e13952/taskhelp/security/ManageUsersAndGroups.htm