Search Results

Search found 22912 results on 917 pages for 'hosted service'.

Page 14/917

  • Choosing between .NET Service Bus Queues vs Azure Queue Service

    - by ChrisV
    Just a quick question regarding an Azure application. If I have a number of Web and Worker roles that need to communicate, the documentation says to use the Azure Queue Service. However, I've just read that the new .NET Service Bus now also offers queues. These look to be more powerful, as they appear to offer a much more detailed API. Whilst the .NSB looks more interesting, it has a couple of issues that make me wary of using it in a distributed application (for example, queue expiration: if I cannot guarantee that a queue will be renewed on time, I may lose it all!). Has anyone had any experience using either of these two technologies and could give any advice on when to choose one over the other? I suspect that whilst the Service Bus looks more powerful, as my use case is really just enabling Web/Worker roles to communicate with each other, the Azure Queue Service is what I'm after. But I'm really just looking for confirmation of that before programming myself into a corner :-) Thanks in advance.

    UPDATE: I have read up about the two systems over the break. It definitely looks like the .NET Service Bus is designed more specifically for integrating systems than for providing a general-purpose reliable messaging system. Azure Queues are distributed, and therefore reliable and scalable in a way that .NSB queues are not, and so more suitable for code hosted within Azure itself. Thanks for the responses.
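
    For illustration, here is a minimal sketch of the Azure Queue Service usage pattern from a Web or Worker role, using the classic Microsoft.WindowsAzure.StorageClient library; the queue name, message content and use of development storage are assumptions, not part of the original question.

        using System;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        class QueueSketch
        {
            static void Main()
            {
                // Both Web and Worker roles talk to the same named queue in the
                // storage account; development storage is used here for brevity.
                var account = CloudStorageAccount.DevelopmentStorageAccount;
                var queue = account.CreateCloudQueueClient().GetQueueReference("workitems");
                queue.CreateIfNotExist();

                // Web role side: enqueue a work item.
                queue.AddMessage(new CloudQueueMessage("process-order:42"));

                // Worker role side: dequeue, process, then delete.
                CloudQueueMessage msg = queue.GetMessage(TimeSpan.FromMinutes(1)); // hidden for 1 minute
                if (msg != null)
                {
                    // ... do the work ...
                    queue.DeleteMessage(msg); // if the worker dies before this, the message reappears
                }
            }
        }

    The visibility-timeout/delete cycle is what gives Azure Queues their at-least-once delivery behaviour, which is usually all that simple Web/Worker role communication needs.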

    Read the article

  • Following the Thread in OSB

    - by Antony Reynolds
    Threading in OSB

    The Scenario
    I recently led an OSB POC where we needed to get high throughput from an OSB pipeline that had the following logic:
    1. Receive Request
    2. Send Request to External System
    3. If Response has a particular value
       3.1 Modify Request
       3.2 Resend Request to External System
    4. Send Response back to Requestor
    It all looks very straightforward, with no nasty wrinkles along the way. The flow was implemented in OSB as follows (see diagram for more details):
    - Proxy Service to receive the request and send the response
    - Request Pipeline: copies the original request for use in step 3
    - Route Node: sends the request to the external system, exposed as a Business Service
    - Response Pipeline: checks the response to see if the request needs to be resubmitted, modifies the request, and makes a callout to the external system (same Business Service as the Route Node)
    The Proxy and the Business Service were each assigned their own Work Manager, effectively giving each of them their own thread pool.

    The Surprise
    Imagine our surprise when, on stressing the system, we saw it lock up, with large numbers of blocked threads. The lock-up is due to some subtleties in the OSB thread model, which is the topic of this post.

    Basic Thread Model
    OSB goes to great lengths to avoid holding on to threads. Let's start by looking at how OSB deals with a simple request/response routing to a business service in a route node. Most Business Services are implemented by OSB in two parts. The first part uses the request thread to send the request to the target. In the diagram this is represented by the thread T1. After sending the request to the target (the Business Service in our diagram) the request thread is released back to whatever pool it came from. A multiplexor (muxer) is used to wait for the response. When the response is received, the muxer hands off the response to a new thread that is used to execute the response pipeline; this is represented in the diagram by T2. OSB allows you to assign different Work Managers, and hence different thread pools, to each Proxy Service and Business Service. In our example we have the "Proxy Service Work Manager" assigned to the Proxy Service and the "Business Service Work Manager" assigned to the Business Service. Note that the Business Service Work Manager is only used to assign the thread to process the response; it is never used to process the request. This architecture means that while waiting for a response from a business service there are no threads in use, which makes for better scalability in terms of thread usage.

    First Wrinkle
    Note that if the Proxy and the Business Service both use the same Work Manager then there is potential for starvation. For example:
    - The Request Pipeline makes a blocking callout, say to perform a database read.
    - The Business Service response tries to allocate a thread from the thread pool, but all threads are blocked in the database read.
    - New requests arrive and contend with arriving responses for the available threads.
    Similar problems can occur if the response pipeline blocks for some reason, perhaps a database update for example.

    Solution
    The solution to this is to make sure that the Proxy and Business Service use different Work Managers so that they do not contend with each other for threads.

    Do Nothing Route Thread Model
    So what happens if there is no route node? In this case OSB just echoes the Request message as a Response message, but what happens to the threads? OSB still uses a separate thread for the response, but in this case the Work Manager used is the Default Work Manager. So this is really a special case of the Basic Thread Model discussed above, except that the response pipeline will always execute on the Default Work Manager.

    Proxy Chaining Thread Model
    So what happens when the route node is actually calling a Proxy Service rather than a Business Service? Does the second Proxy Service use its own thread, or does it re-use the thread of the original Request Pipeline? As you can see from the diagram, when a route node calls another proxy service, the original Work Manager is used for both request pipelines. Similarly, the response pipeline uses the Work Manager associated with the ultimate Business Service invoked via a Route Node. This fits with the earlier description I gave of Business Services, and by extension Route Nodes: they "... use the request thread to send the request to the target".

    Call Out Threading Model
    So what happens when you make a Service Callout to a Business Service from within a pipeline? The documentation says that "The pipeline processor will block the thread until the response arrives asynchronously" when using a Service Callout. What this means is that the target Business Service is called using the pipeline thread, but the response is also handled by the pipeline thread. This implies that the pipeline thread blocks waiting for a response. It is the handling of this response that behaves in an unexpected way. When a Business Service is called via a Service Callout, the calling thread is suspended after sending the request, but unlike the Route Node case the thread is not released; it waits for the response. The muxer uses the Business Service Work Manager to allocate a thread to process the response, but in this case processing the response means getting the response and notifying the blocked pipeline thread that the response is available. The original pipeline thread can then continue to process the response.

    Second Wrinkle
    This leads to an unfortunate wrinkle. If the Business Service is using the same Work Manager as the pipeline then it is possible for starvation or a deadlock to occur. The scenario is as follows:
    - A pipeline makes a Callout and its thread is suspended but still allocated.
    - Multiple pipeline instances using the same Work Manager are in this state (common for a system under load).
    - A response comes back, but all Work Manager threads are allocated to blocked pipelines.
    - The response cannot be processed and so the pipeline threads never unblock – deadlock!

    Solution
    The solution to this is to make sure that any Business Services used by a Callout in a pipeline use a different Work Manager to the pipeline itself.

    The Solution to My Problem
    Looking back at my original workflow, we see that the same Business Service is called twice, once in a Routing Node and once in a Response Pipeline Callout. This was what was causing my problem: the response pipeline was using the Business Service Work Manager, but the Service Callout wanted to use the same Work Manager to handle its responses, and so eventually my Response Pipeline hogged all the available threads and no responses could be processed. The solution was to create a second Business Service pointing to the same location as the original Business Service; the only difference was to assign a different Work Manager to this Business Service. This ensured that when the Service Callout completed there were always threads available to process the response, because the response processing from the Service Callout had its own dedicated Work Manager.

    Summary
    - Request Pipeline: executes on the Proxy Work Manager (WM) thread, so it is limited by the setting of that WM. If no WM is specified then the WLS default WM is used.
    - Route Node: the request is sent using the Proxy WM thread. The Proxy WM thread is released before getting the response. A muxer is used to handle the response and hands it off to the Business Service (BS) WM.
    - Response Pipeline: executes on the routed Business Service WM thread, so it is limited by the setting of that WM. If no WM is specified then the WLS default WM is used.
    - No Route Node (echo functionality): the Proxy WM thread is released; a new thread from the default WM is used for the response pipeline.
    - Service Callout: the request is sent using the proxy pipeline thread. The proxy thread is suspended (not released) until the response comes back. Notification of the response is handled by a BS WM thread, so it is limited by the setting of that WM (if no WM is specified then the WLS default WM is used). Note this is a very short-lived use of the thread. After notification by the callout BS WM thread, that thread is released and execution continues on the original pipeline thread.
    - Route/Callout to Proxy Service: the Request Pipeline of the callee executes on the requestor thread. The Response Pipeline of the caller executes on the response thread of the requested proxy.
    - Throttling: the request message may be queued if the limit is reached. The requesting thread is released (route node) or suspended (callout).
    So what this means is that you may get deadlocks caused by thread starvation if you use the same thread pool for the business service in a route node and the business service in a callout from the response pipeline, because the callout will need a notification thread from the same thread pool as the response pipeline. This was the problem we were having. You get a similar problem if you use the same work manager for the proxy request pipeline and a business service callout from that request pipeline. It also means you may want to have different work managers for the proxy and the business service in the route node. Basically you need to think carefully about how threading impacts your proxy services.

    References
    Thanks to Jay Kasi, Gerald Nunn and Deb Ayers for helping to explain this to me. Any errors are my own and not theirs. Also thanks to my colleagues Milind Pandit and Prasad Bopardikar who travelled this road with me.
    - OSB Thread Model
    - Great Blog Post on Thread Usage in OSB

    Read the article

  • Error installing a .NET Windows service with InstallUtil

    - by norlando
    I keep getting the error below whenever I try to use InstallUtil to install my .NET service. I put "installutil myservice.exe" into the command prompt and then get the error. Any idea what the problem is? Do I need to add another parameter?

    An exception occurred during the Install phase.
    System.Security.SecurityException: The source was not found, but some or all event logs could not be searched. Inaccessible logs: Security.
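
    For context, this particular SecurityException is commonly raised by the event log source check that the default service installer performs during installation; a minimal sketch of that check is shown below (the source name is a placeholder), which is why running InstallUtil from an elevated (administrator) prompt is usually significant.

        using System.Diagnostics;

        class EventSourceSketch
        {
            static void Main()
            {
                // Probing for a source enumerates the event logs, and a
                // non-elevated account cannot read the Security log - hence the
                // "Inaccessible logs: Security" part of the exception.
                const string source = "MyService";   // hypothetical source name
                if (!EventLog.SourceExists(source))
                {
                    EventLog.CreateEventSource(source, "Application");
                }
            }
        }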

    Read the article

  • Windows service application runs fine on Windows XP but crashes on Windows 7

    - by Abbas Siddiqui
    I am sorry if my question has been asked before; I searched extensively but didn't find it. If it has, please post the link to that question. I have developed a Windows service that works fine on Windows XP. When I installed it on Windows 7 it installed and worked fine for a few minutes, after which it crashes and gives the error message "... has stopped working. Windows is checking for a solution to the problem." The log entry is as follows:

    Fault bucket 1155193276, type 5
    Event Name: CLR20r3
    Response: Not available
    Cab Id: 0
    Problem signature:
    P1: windowsserviceapp.exe
    P2: 1.0.0.0
    P3: 4bf29a85
    P4: System.Windows.Forms
    P5: 2.0.0.0
    P6: 4a275ebd
    P7: 16cf
    P8: 159
    P9: System.ComponentModel.Win32
    P10:
    Attached files: C:\Users\DELL\AppData\Local\Temp\WERF98D.tmp.WERInternalMetadata.xml
    These files may be available here: C:\Users\DELL\AppData\Local\Microsoft\Windows\WER\ReportArchive\AppCrash_windowsserviceap_89ea5da5168ff1535681aa613b5f7bf2b1636dc_111d24f1
    Analysis symbol:
    Rechecking for solution: 0
    Report Id: 24dc8c83-62a1-11df-b1ee-00271352d813
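
    As a diagnostic aid, here is a rough sketch of hooking unhandled exceptions in the service's entry point so that the real stack trace behind the CLR20r3 bucket (whose signature points at System.Windows.Forms and a Win32 error) ends up in the Application event log; the "WindowsServiceApp" source and the MyService class are placeholders for the asker's own code.

        using System;
        using System.Diagnostics;
        using System.ServiceProcess;

        static class Program
        {
            static void Main()
            {
                // Log the real exception before Windows Error Reporting kills the process.
                AppDomain.CurrentDomain.UnhandledException += (s, e) =>
                    EventLog.WriteEntry("WindowsServiceApp",
                                        e.ExceptionObject.ToString(),
                                        EventLogEntryType.Error);

                ServiceBase.Run(new MyService());   // the existing service class
            }
        }

        class MyService : ServiceBase { /* existing OnStart/OnStop logic */ }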

    Read the article

  • Process Oracle OER Events using a simple Web Service

    - by Bob Webster
    This post provides an example of a simple web service that processes Oracle Enterprise Repository (OER) events. The service receives events from OER and uses the OER REX API to implement simple OER automations for selected event types. The web service example implements the following:
    - When a new Asset is Submitted to OER: the Asset is automatically Accepted by a defined user.
    - When an Asset is Accepted: the Asset is automatically assigned to a defined user for review. If the accepted asset is of type Service, the Version metadata attribute is set based on the version id contained in the suffix of the Service Namespace.
    - When an Asset is Registered: if the registered Asset is of type Service, the related Assets (Interface and Endpoint) are also automatically registered.
    The sample web service is not intended to replace the out-of-the-box OER BPM-based workflows, but it can be used in cases where only simple automation is required and the developer has a Java skill set. The service is a lightweight web application that can be easily deployed to the same server as OER or to a different server. Read the complete post here.

    Read the article

  • windows service log on as user a/c on different PC on same workgroup

    - by maruti
    Trying to run a service from PC1 with log on as admin@PC2 fails when both are in a workgroup. Why could this happen? The OS is Windows 2003. Please let me know if any Windows remote services have to be turned on, or any firewall configuration made. Does having the PCs in the same workgroup help? Let me clarify the question: I am unable to see other computers from the service's Log On tab "Select User" dialog. The only object types available are "Users, Built-in security principals", and the only location is the local computer. But this is available from the MMC console (Add Snap-in). How can this be made available in the Services control panel?

    Read the article

  • Does anyone provide a Skype connection service?

    - by Runc
    Is there a way of offering Skype access to incoming calls while keeping all telephony traffic over our chosen business telephony provision? Is there a 3rd party who can route incoming Skype calls to our telephone system? The business has had requests from contacts wanting to call us via Skype, but we want to keep all telephony via our PBX and phone lines as our geographic location limits our available internet bandwidth. We also prevent installation of non-standard applications on desktops and do not want to add Skype to our build. I was wondering if there were any 3rd parties that provide a connection service that would allow our contacts to call via Skype and us receive the calls via our phone system.

    Read the article

  • installing a script as startup service in ubuntu

    - by Jibin
    I have a script openerp-server.py in ~/openerp/stable6/server/bin/. I want it to be run at startup (as a service or not - I don't know the difference). These are the steps I followed:

    1. Created a script 'openerp-server' in /etc/init.d/ with the following lines:
       #!/bin/sh
       cd ~/openerp/stable6/server/bin/
       exec /usr/bin/python ./openerp-server.py $@
    2. Made the script executable with the following command:
       sudo chmod +x /etc/init.d/openerp-server
    3. Made the link run on startup with the following command:
       sudo update-rc.d openerp-server

    I checked using sysv-rc-conf, and openerp-server was selected for run levels 2, 3, 4 and 5. Now after restarting I checked whether openerp-server.py was running; it was not. Please help.

    Read the article

  • OSB, Service Callouts and OQL

    - by Sabha
    Oracle Fusion Middleware customers use Oracle Service Bus (OSB) for virtualizing service endpoints and implementing stateless service orchestrations. Behind the performance and speed of OSB, there are a couple of key design implementations that can affect application performance and behavior under heavy load. One of the heavily used features in OSB is the Service Callout pipeline action, used for message enrichment and for invoking multiple services as part of one single orchestration. Overuse of this feature, without understanding its internal implementation, can lead to serious problems. This series will delve into OSB internals, the problem associated with use of Service Callout under high load, diagnosing it via thread dump and heap dump analysis using tools like ThreadLogic and OQL (Object Query Language), and resolving it. The first section in the series will mainly cover the threading model used internally by OSB for implementing Route vs. Service Callout actions. The second section of the "OSB, Service Callouts and OQL" blog posting will delve into thread dump analysis of an OSB server, detecting threading issues relating to Service Callout, and using a heap dump and OQL to identify the related proxies and business services involved. The final section of the series will focus on the corrective action to avoid Service Callout related OSB server hangs. Before we dive into the solution, we need to briefly discuss Work Managers in WLS. Please refer to the blog posting for more details.

    Read the article

  • Cannot start `Routing and Remote Access Service` and its dependencies

    - by ahmadali shafiee
    I tried to start the Routing and Remote Access service, but I got an error saying the dependency service or group failed to start. Then I tried to start Remote Access Connection Manager (one of RRAS's dependencies) and the error was the same. Then I tried to start the Secure Socket Tunneling Protocol Service, but there was an error saying that the service started and then stopped! The errors from the event log are:

    The Remote Access Connection Manager service depends on the Secure Socket Tunneling Protocol Service service which failed to start because of the following error: The operation completed successfully.
    The Secure Socket Tunneling Protocol Service service entered the stopped state.
    The Routing and Remote Access service depends on the Remote Access Connection Manager service which failed to start because of the following error: The dependency service or group failed to start.

    Does anyone know how I can resolve the problem?

    Read the article

  • SQL SERVER – SQL Server 2008 with Service Pack 2

    - by pinaldave
    SQL Server 2008 Service Pack 2 has already been released. I suggest that all of you who are running SQL Server 2008 update to SQL Server 2008 Service Pack 2. Download SQL Server 2008 Service Pack 2 from here. Please note, this is not SQL Server 2008 R2; it is SQL Server 2008 Service Pack 2. A Test Lab Guide for SQL Server 2008 with Service Pack 2 has also been released by Microsoft. This document contains an introduction to SQL Server 2008 with Service Pack 2 and step-by-step instructions for extending the Base Configuration test lab to include a SQL Server 2008 with Service Pack 2 server. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • WCF Windows Service TimeOut

    - by rmdussa
    I have a client application developed in .NET sending a request to a WCF service, which is supposed to send a response. If the execution time is within 1 minute there is no error; if it exceeds 1 minute the error is:

    Inner exception: This request operation sent to net.tcp://localhost:18001/PitToPort/2008/01/30/StockpileService/tcp did not receive a reply within the configured timeout (00:01:00). The time allotted to this operation may have been a portion of a longer timeout. This may be because the service is still processing the operation or because the service was unable to send a reply message. Please consider increasing the operation timeout (by casting the channel/proxy to IContextChannel and setting the OperationTimeout property) and ensure that the service is able to connect to the client.

    How do I increase the timeout, and what is the best solution?
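
    For illustration, here is a minimal sketch of the approach the error message itself suggests - casting the proxy's channel to IContextChannel and raising OperationTimeout; the StockpileServiceClient proxy type and the 10-minute value are assumptions based on the question.

        using System;
        using System.ServiceModel;

        class TimeoutSketch
        {
            static void CallService()
            {
                // Assumes a generated ClientBase<T> proxy (StockpileServiceClient)
                // for the service named in the error; InnerChannel implements
                // IContextChannel, exactly as the exception text suggests.
                var client = new StockpileServiceClient();
                ((IContextChannel)client.InnerChannel).OperationTimeout = TimeSpan.FromMinutes(10);

                // ... call the long-running operation as usual ...
            }
        }

    The binding-level sendTimeout/receiveTimeout values (the 00:01:00 shown in the error is the default sendTimeout) can also be raised on the netTcpBinding, either in code or in the client's app.config; and if the operation routinely takes more than a minute it is often better to make the call asynchronous rather than simply lengthening the timeout.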

    Read the article

  • Running msiexec from a service (Local System account)

    - by Jarrod
    We are working on an update system for our software. The updater should run in the background as a service, and when an update is available, download and install it. We need the service to install the update since the MSI requires elevation to run, but some of our clients will be restricted users. The MSI is a WiX MSI and does a Major Upgrade when run. The problem is, the update does not seem to work when run from our service. I can see msiexec run, and it returns successfully, but it seems to make no changes to the system. The same command, when run from my user account, works as expected. Is there some caveat to running msiexec from a Local System service? We are simply doing: System.Diagnostics.Process.Start("msiexec.exe", arguments);
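
    To make a run under the Local System account easier to diagnose, one option is to launch msiexec with quiet mode and a verbose log and then inspect the exit code; a rough sketch is shown below, with placeholder paths (/i, /qn and /L*v are standard msiexec switches).

        using System.Diagnostics;

        class UpdaterSketch
        {
            static int RunInstaller()
            {
                var psi = new ProcessStartInfo
                {
                    FileName = "msiexec.exe",
                    Arguments = "/i \"C:\\Updates\\MyProduct.msi\" /qn /L*v \"C:\\Updates\\install.log\"",
                    UseShellExecute = false,
                    CreateNoWindow = true
                };

                using (var p = Process.Start(psi))
                {
                    p.WaitForExit();
                    // 0 = success, 3010 = success but reboot required;
                    // anything else points at the verbose log for the reason.
                    return p.ExitCode;
                }
            }
        }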

    Read the article

  • Updating AjaxToolKit Breaks WCF Service for AutoCompleteExtender

    - by Rob Jones
    I had a perfectly working WCF service that I used for a custom AutoCompleteExtender, but when I updated the AjaxControlToolKit.dll the service no longer works. I changed nothing else besides removing the old DLL and adding the new one. Is there something else I need to update that I am missing? I can access the service, so I know it's working properly, but the methods are just not getting called by the AutoCompleteExtender.

    Read the article

  • The IOC "child" container / Service Locator

    - by Mystagogue
    DISCLAIMER: I know there is debate between DI and service locator patterns. I have a question that is intended to avoid the debate. This question is for the service locator fans, who happen to think like Fowler "DI...is hard to understand...on the whole I prefer to avoid it unless I need it." For the purposes of my question, I must avoid DI (reasons intentionally not given), so I'm not trying to spark a debate unrelated to my question.

    QUESTION: The only issue I might see with keeping my IOC container in a singleton (remember my disclaimer above) is with the use of child containers. Presumably the child containers would not themselves be singletons. At first I thought that poses a real problem. But as I thought about it, I began to think that is precisely the behavior I want (the child containers are not singletons, and can be Disposed() at will). Then my thoughts went further into a philosophical realm. Because I'm a service locator fan, I'm wondering just how necessary the notion of a child container is in the first place. In a small set of cases where I've seen the usefulness, it has either been to satisfy DI (which I'm mostly avoiding anyway), or the issue was solvable without recourse to the IOC container. My thoughts were partly inspired by the IServiceLocator interface, which doesn't even bother to list a "GetChildContainer" method. So my question is just that: if you are a service locator fan, have you found that child containers are usually moot? Otherwise, when have they been essential?

    Extra credit: If there are other philosophical issues with service locator in a singleton (aside from those posed by DI advocates), what are they?
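
    For readers less familiar with the terminology, here is a purely illustrative sketch (not any particular container's API) of the shape being discussed: a singleton locator whose child containers are ordinary disposable objects rather than singletons.

        using System;
        using System.Collections.Generic;

        public sealed class Locator : IDisposable
        {
            // The root locator lives as a singleton for the life of the process.
            public static readonly Locator Root = new Locator(null);

            private readonly Dictionary<Type, Func<object>> _registrations =
                new Dictionary<Type, Func<object>>();
            private readonly Locator _parent;

            private Locator(Locator parent) { _parent = parent; }

            public void Register<T>(Func<T> factory) where T : class
            {
                _registrations[typeof(T)] = () => factory();
            }

            public T Resolve<T>() where T : class
            {
                Func<object> factory;
                if (_registrations.TryGetValue(typeof(T), out factory)) return (T)factory();
                if (_parent != null) return _parent.Resolve<T>();
                throw new InvalidOperationException("No registration for " + typeof(T));
            }

            // Child containers fall back to the parent for lookups, but can be
            // created and Disposed() at will, independently of the root.
            public Locator CreateChild() { return new Locator(this); }

            public void Dispose() { /* release any scoped instances here */ }
        }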

    Read the article

  • Hosting a WCF service in a Windows service. Works, but I can't reach it from Silverlight.

    - by BaBu
    I've made an application-hosted WCF service for my Silverlight application to consume. When I host the WCF service in a Forms application everything works fine. But when I host my WCF service in a Windows service (as explained brilliantly here) I get the dreaded 'NotFound' WebException when I call it from Silverlight. The 'WCF Test Client' tool does not complain. I'm exposing a decent clientaccesspolicy.xml at root. Yet Silverlight won't have it. I'm stumped. Anybody have an idea about what could be going on here?
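
    For context, when a service is self-hosted there is no web server to hand out the cross-domain policy, so one common pattern is to serve clientaccesspolicy.xml from the root of the same scheme and port the Silverlight client calls; a rough sketch assuming an HTTP binding is shown below (type names, base address and file location are illustrative, not taken from the question).

        using System;
        using System.IO;
        using System.ServiceModel;
        using System.ServiceModel.Web;

        [ServiceContract]
        public interface IPolicyProvider
        {
            // Silverlight requests http://host:port/clientaccesspolicy.xml before
            // making the actual service call.
            [OperationContract]
            [WebGet(UriTemplate = "/clientaccesspolicy.xml")]
            Stream GetPolicy();
        }

        public class PolicyProvider : IPolicyProvider
        {
            public Stream GetPolicy()
            {
                WebOperationContext.Current.OutgoingResponse.ContentType = "application/xml";
                return File.OpenRead("clientaccesspolicy.xml");  // deployed next to the service exe
            }
        }

        // In the Windows service's OnStart, alongside the real service host:
        // var policyHost = new WebServiceHost(typeof(PolicyProvider),
        //                                     new Uri("http://localhost:8000/"));
        // policyHost.Open();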

    Read the article

  • Multiple Denial of Service vulnerabilities in libpng

    - by chandan
    CVE / Description / CVSSv2 Base Score:
    CVE-2007-5266 - Denial of Service (DoS) vulnerability - 4.3
    CVE-2007-5267 - Denial of Service (DoS) vulnerability - 4.3
    CVE-2007-5268 - Denial of Service (DoS) vulnerability - 4.3
    CVE-2007-5269 - Denial of Service (DoS) vulnerability - 5.0
    CVE-2008-1382 - Denial of Service (DoS) vulnerability - 7.5
    CVE-2008-3964 - Denial of Service (DoS) vulnerability - 4.3
    CVE-2009-0040 - Denial of Service (DoS) vulnerability - 6.8

    Component: PNG reference library (libpng)

    Product and Resolution:
    Solaris 10 - SPARC: 137080-03, X86: 137081-03
    Solaris 9 - SPARC: 139382-02 114822-06, X86: 139383-02
    Solaris 8 - SPARC: 114816-04, X86: 114817-04

    This notification describes vulnerabilities fixed in third-party components that are included in Sun's product distribution. Information about vulnerabilities affecting Oracle Sun products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • Creating a service (SERVICE_ACCEPT_SESSIONCHANGE)

    - by Ron
    Hi there, I am trying to create a service following the example documented in the link below: http://msdn.microsoft.com/en-us/library/bb540475(v=VS.85).aspx What I am interested in is being able to catch user "lock" and "unlock" workstation events. Using the code from the example provided, I modified the following:

    Line 15:
    Original:  VOID WINAPI SvcCtrlHandler( DWORD );
    Modified:  DWORD WINAPI SvcCtrlHandler( DWORD, DWORD, LPVOID, LPVOID );

    Line 141:
    Original:  gSvcStatusHandle = RegisterServiceCtrlHandler( SVCNAME, SvcCtrlHandler);
    Modified:  gSvcStatusHandle = RegisterServiceCtrlHandlerEx( SVCNAME, SvcCtrlHandler, NULL);

    Line 244:
    Original:  SvcStatus.dwControlsAccepted = SERVICE_ACCEPT_STOP;
    Modified:  gSvcStatus.dwControlsAccepted = SERVICE_ACCEPT_STOP|SERVICE_ACCEPT_SESSIONCHANGE;

    Line 266:
    Original:
        VOID WINAPI SvcCtrlHandler( DWORD dwCtrl )
        {
            // Handle the requested control code.
            switch(dwCtrl)
            {
                case SERVICE_CONTROL_STOP:
                    ReportSvcStatus(SERVICE_STOP_PENDING, NO_ERROR, 0);
                    // Signal the service to stop.
                    SetEvent(ghSvcStopEvent);
                    ReportSvcStatus(gSvcStatus.dwCurrentState, NO_ERROR, 0);
                    return;
                case SERVICE_CONTROL_INTERROGATE:
                    break;
                default:
                    break;
            }
        }
    Modified:
        DWORD WINAPI SvcCtrlHandler( DWORD dwControl, DWORD dwEventType, LPVOID lpEventData, LPVOID lpContext )
        {
            DWORD dwErrorCode = NO_ERROR;
            switch(dwControl)
            {
                case SERVICE_CONTROL_STOP:
                    ReportSvcStatus(SERVICE_STOP_PENDING, NO_ERROR, 0);
                    // Signal the service to stop.
                    SetEvent(ghSvcStopEvent);
                    ReportSvcStatus(gSvcStatus.dwCurrentState, NO_ERROR, 0);
                    break;
                case SERVICE_CONTROL_INTERROGATE:
                    break;
                case SERVICE_CONTROL_SESSIONCHANGE:
                    ReportSvcStatus(gSvcStatus.dwCurrentState, NO_ERROR, 0);
                    break;
                default:
                    break;
            }
            return dwErrorCode;
        }

    With the changes above, my service compiled and installed fine. But when I try to start my service on my Windows 2000 machine, it does not start properly (it gets stuck in the "starting" status). Can anyone please advise? Thank you in advance, Ron

    Read the article

  • Flex - Increase timeout on a PHP service function call

    - by Travesty3
    I'm using Flash Builder 4 Beta 2. I have it connecting to a PHP service. The way I set this up was using the wizard, so I didn't actually write the code to connect to it. The service looks like this:

        package services.flash
        {
            import mx.rpc.AsyncToken;
            import com.adobe.fiber.core.model_internal;
            import mx.rpc.AbstractOperation;
            import valueObjects.CustomDatatype8;
            import valueObjects.NewUsageData;
            import mx.collections.ItemResponder;
            import mx.rpc.remoting.RemoteObject;
            import mx.rpc.remoting.Operation;
            import com.adobe.fiber.services.wrapper.RemoteObjectServiceWrapper;
            import com.adobe.fiber.valueobjects.AvailablePropertyIterator;
            import com.adobe.serializers.utility.TypeUtility;

            [ExcludeClass]
            internal class _Super_FLASH extends RemoteObjectServiceWrapper
            {
                // Constructor
                public function _Super_FLASH()
                {
                    // initialize service control
                    _serviceControl = new RemoteObject();

                    var operations:Object = new Object();
                    var operation:Operation;

                    operation = new Operation(null, "sendCommand");
                    operation.resultType = Object;
                    operations["sendCommand"] = operation;
                    ...
                }
            }

    One of the functions that I'm calling fetches users from a MySQL database. There are about 30,000 users right now. The service seems to time out when fetching more than around 22,000 rows; I get the "Channel Disconnected before an acknowledgement was received" error. If I call the PHP script from a browser, however, it fetches them all with no problems at all. I have tried increasing the timeout in the PHP script (which didn't work), but obviously this isn't the problem since the browser is able to pull them up with no problems. Is there a way to increase the timeout of the PHP service in Flash Builder? I'm a bit of a noob when it comes to Flash, so please be descriptive. Thanks in advance!

    Read the article

  • Creating a service that takes parameters with WMI

    - by Johan_B
    I want to create a service that takes some parameters. How do I do this with WMI? On the command line it was:

        sc create ServiceName binPath= "\"C:\Path\Service.exe\" /Service somefile1.xml somefile2.xml" start=auto DisplayName="DisplayName"

    I have tried to use:

        ManagementBaseObject inParams = managementClass.GetMethodParameters("create");
        inParams["CommandLine"] = "\"C:\Path\Service.exe\" somefile1.xml somefile2.xml";
        inParams["StartMode"] = StartMode.Auto;
        inParams["DisplayName"] = "displayName";

    but I do not know how to pass the two XML files. Any help will be greatly appreciated. Thanks! JB
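
    For illustration, here is a rough sketch of doing the same thing through the WMI Win32_Service class; with Win32_Service.Create the executable path and its arguments travel together in the PathName parameter, so the two XML files can simply be appended there. The service name, display name and paths below are placeholders taken from the question.

        using System.Management;

        class CreateServiceSketch
        {
            static uint CreateService()
            {
                var serviceClass = new ManagementClass("Win32_Service");
                ManagementBaseObject inParams = serviceClass.GetMethodParameters("Create");

                inParams["Name"] = "ServiceName";
                inParams["DisplayName"] = "DisplayName";
                // Quoted exe path followed by its arguments, just like sc's binPath.
                inParams["PathName"] =
                    "\"C:\\Path\\Service.exe\" /Service somefile1.xml somefile2.xml";
                inParams["StartMode"] = "Automatic";
                inParams["ServiceType"] = (byte)16;   // own process
                inParams["ErrorControl"] = (byte)1;   // normal
                inParams["DesktopInteract"] = false;

                ManagementBaseObject outParams =
                    serviceClass.InvokeMethod("Create", inParams, null);
                return (uint)outParams["ReturnValue"];   // 0 means the service was created
            }
        }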

    Read the article

  • Extending Database-as-a-Service to Provision Databases with Application Data

    - by Nilesh A
    Oracle Enterprise Manager 12c Database as a Service (DBaaS) empowers Self Service/SSA users to rapidly spawn databases on demand in the cloud. The configuration and structure of provisioned databases depend on the service template selected by the Self Service user when requesting the database. In EM12c, the DBaaS Self Service/SSA Administrator has the option of hosting various service templates in the service catalog, based on underlying DBCA templates. Many times provisioned databases require production-scale data, whether for UAT, testing or development purposes, and managing DBCA templates with data can be unwieldy. So we need to populate the database using the post-deployment script option, without any additional work for the SSA users. The SSA Administrator can automate this task in a few easy steps. For details on how to set up the DBaaS Self Service Portal, refer to the DBaaS Cookbook.

    In this article, I will list the steps required to enable EM 12c DBaaS to provision databases with application data in two distinct ways: 1) Data Pump, and 2) transportable tablespaces (TTS). The steps listed below are just examples of how to extend EM 12c DBaaS; you can even have your own method plugged in as part of the post-deployment script option.

    Using Data Pump to populate databases
    These are the steps to follow to extend DBaaS using the Data Pump methodology:
    1. The production DBA should run a Data Pump export on the production database and make the dump file available to all the servers participating in the database zone [sample shown in Fig. 1]:

       -- Full export
       expdp FULL=y DUMPFILE=data_pump_dir:dpfull1%U.dmp, data_pump_dir:dpfull2%U.dmp PARALLEL=4 LOGFILE=data_pump_dir:dpexpfull.log JOB_NAME=dpexpfull

       Figure-1: Full export of database using Data Pump

    2. Create a post-deployment SQL script [sample shown in Fig. 2]; this script can either be uploaded into the software library by the SSA Administrator or made available on a shared location accessible from the servers where databases are likely to be provisioned:

       -- Full import
       declare
           h1 NUMBER;
       begin
           -- Creating the directory object where the source database dump is backed up.
           execute immediate 'create directory DEST_LOC as ''/scratch/nagrawal/OracleHomes/oradata/INITCHNG/datafile''';
           -- Running import
           h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'FULL', job_name => 'DB_IMPORT10');
           dbms_datapump.set_parallel(handle => h1, degree => 1);
           dbms_datapump.add_file(handle => h1, filename => 'IMP_GRIDDB_FULL.LOG', directory => 'DATA_PUMP_DIR', filetype => 3);
           dbms_datapump.add_file(handle => h1, filename => 'EXP_GRIDDB_FULL_%U.DMP', directory => 'DEST_LOC', filetype => 1);
           dbms_datapump.start_job(handle => h1);
           dbms_datapump.detach(handle => h1);
       end;
       /

       Figure-2: Importing using Data Pump PL/SQL procedures

    3. Using DBCA, create a template for the production database – include all the init.ora parameters, tablespaces, datafiles and their sizes.
    4. The SSA Administrator should customize the "Create Database Deployment Procedure" and provide the DBCA template created in the previous step. In the "Additional Configuration Options" step of the Customize "Create Database Deployment Procedure" flow, provide the name of the SQL script in the Custom Script section and lock the input (shown in Fig. 3). Continue saving the deployment procedure.

       Figure-3: Using the Custom Script option for calling the import SQL

    Now an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post-deployment step.
    Using transportable tablespaces to populate databases
    A copy of all user/application tablespaces enables this method of populating databases. These are the steps required to extend DBaaS using transportable tablespaces:
    1. The production DBA needs to create a backup of the tablespaces. Datafiles may need conversion [such as from big endian to little endian or vice versa] based on the platforms of production and of the destination where DBaaS created the test database. Here is a sample backup script that shows how to find out whether any conversion is required, and describes the steps required to convert the datafiles and back up the tablespaces.
    2. The SSA Administrator should copy the database (tablespace) backup datafiles and export dumps to a backup location accessible from the hosts participating in the database zone(s).
    3. Create a post-deployment SQL script; this script can either be uploaded into the software library by the SSA Administrator or made available on a shared location accessible from the servers where databases are likely to be provisioned. Here is a sample post-deployment SQL script using transportable tablespaces.
    4. Using DBCA, create a template for the production database – all the init.ora parameters should be included. NOTE: DO NOT choose to bring tablespace data into this template, as they will be created.
    5. The SSA Administrator should customize the "Create Database Deployment Procedure" and provide the DBCA template created in the previous step. In the "Additional Configuration Options" step of the flow, provide the name of the SQL script in the Custom Script section and lock the input. Continue saving the deployment procedure.
    Now an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post-deployment step.

    More Information:
    - Database-as-a-Service on Exadata Cloud
    - Podcast on Database as a Service using Oracle Enterprise Manager 12c
    - Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide
    - DBaaS Cookbook
    - Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal
    - Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal

    Read the article

  • Java service wrapper and additional application command line parameters

    - by Jake
    I'm currently using the Java Service Wrapper to wrap a Java application that I've developed. I need the ability to pass additional command line parameters to my application through the Java Service Wrapper. Pretend my app is called myapp and I've set up the Java Service Wrapper so that the script I run to start it is also called myapp. I'd like to be able to do something like this: ./myapp start Parameter1 parameter2 and have those additional parameters passed into my application. Any ideas how to do this? I'm finding that googling and looking at the documentation only pulls up how to use command line arguments to set up the Java Service Wrapper. I've had difficulty finding anything about passing command line arguments to your application, except for having them hard-coded in your wrapper.conf file. Right now I feel like my only option is to take the additional command line parameters, set them as environment variables, and have those hard-coded in the wrapper.conf. I'd prefer not to go down that road though, and am hoping I've overlooked something.

    Read the article

  • Android service killed

    - by Erdal
    I have a Service running in the same process as my Application. Sometimes the Android OS decides to kill my service (probably due to low memory). My question is: does my Application get killed along with the Service, or how does it work exactly? Thanks!

    Read the article

  • Understanding Async Methods in Web Service.

    - by Polaris
    Hello. I consume a Java service in my .NET application. I have one question: when the service proxy is generated in .NET, it also creates async methods. For example, for the GetPersons() method the generated proxy also creates GetPersonsAsync(), which returns void. In which cases can I use these async methods, and what is the purpose of these methods?
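
    For illustration, here is a rough sketch of how such generated async methods are typically used with the event-based pattern that "Add Service Reference" produces; the PersonServiceClient and GetPersons names are assumptions based on the question.

        using System;

        class AsyncUsageSketch
        {
            static void Fetch()
            {
                // GetPersonsAsync() returns void because the result arrives later via
                // the matching ...Completed event, so a UI (or any caller) does not
                // block while the remote Java service is working.
                var client = new PersonServiceClient();   // hypothetical generated proxy

                client.GetPersonsCompleted += (sender, e) =>
                {
                    if (e.Error != null)
                    {
                        Console.WriteLine("Call failed: " + e.Error.Message);
                        return;
                    }
                    var persons = e.Result;                // typed result from the service
                    Console.WriteLine("Received {0} persons", persons.Length);
                };

                client.GetPersonsAsync();                  // returns immediately
            }
        }

    The synchronous GetPersons() remains available for cases where blocking the calling thread is acceptable, so the async variants are mainly useful in UI clients or anywhere a long-running call must not freeze the caller.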

    Read the article
