Search Results

Search found 3222 results on 129 pages for 'mercurial queue'.


  • Recommendations for Continuous integration for Mercurial/Kiln + MSBuild + MSTest

    - by TDD
    We have our source code stored in Kiln/Mercurial repositories; we use MSBuild to build our product and we have unit tests that use MSTest (Visual Studio Unit Tests). What solutions exist to implement a continuous integration machine (i.e. a build machine)? The requirements are:

    - A build should be kicked off when necessary (i.e. code has changed in the repositories we care about).
    - Before the actual build, the latest version of the source code must be acquired from the repository we are building from.
    - The build must build the entire product.
    - The build must build all unit tests.
    - The build must execute all unit tests.
    - A summary of success/failure must be sent out after the build has finished; this must include information about the build itself, but also about which unit tests failed and which ones succeeded.
    - The summary must state which changesets were in this build that were not yet in the previous successful (!) build.
    - The system must be configurable so that it can build from multiple branches/repositories.

    Ideally, this system would run on a single box (our product isn't that big) without any server components. What solutions are currently available? What are their pros/cons? From the list above, what can be done and what cannot be done? Thanks
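
    Whatever CI tool ends up being chosen, the build box essentially automates a short command sequence. The following C# console sketch only illustrates that sequence; the working directory, solution name and test assembly are placeholders invented for the example, and a real CI server would add triggering, build history and notification on top of this.

        using System;
        using System.Diagnostics;

        class BuildRunner
        {
            // Runs one external tool and returns its exit code.
            static int Run(string fileName, string arguments)
            {
                var psi = new ProcessStartInfo(fileName, arguments)
                {
                    UseShellExecute = false,
                    WorkingDirectory = @"C:\ci\working-copy" // placeholder path
                };
                using (var process = Process.Start(psi))
                {
                    process.WaitForExit();
                    return process.ExitCode;
                }
            }

            static void Main()
            {
                // 1. Pull and update to the latest changesets from the repository being built.
                if (Run("hg", "pull -u") != 0) return;

                // 2. Build the entire product, including the unit test projects.
                if (Run("msbuild", "Product.sln /p:Configuration=Release") != 0) return;

                // 3. Execute all unit tests and write a results file for the summary e-mail.
                Run("mstest", @"/testcontainer:UnitTests\bin\Release\UnitTests.dll /resultsfile:results.trx");
            }
        }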

    Read the article

  • GAE Task Queue oddness

    - by b3nw
    I have been testing the task queue with mixed success. Currently I am using the default queue with default settings, etc. I have a test URL set up which inserts about 8 tasks into the queue. In short order, all 8 are completed properly. So far so good. The problem comes up when I reload that URL twice within, say, a minute. Watching the task queue, all the tasks are added properly, but it seems only the first batch executes, even though the "Run in Last Minute" count shows the right number of tasks being run. The request logs tell a different story: they show only the first set of 8 running, but all the task-creation URLs completing successfully. The odd part is that if I wait a minute or so between the task-creation requests, it works fine. Changing the bucket_size or execution rate does not seem to help; only the first batch is executed. I have also reduced the number of tasks all the way down to 2, and still found that only the first 2 execute. Any others added show the same behaviour as above. Any suggestions? Thanks

    Read the article

  • Polymorphic Queue

    - by metdos
    Hello everyone, I'm trying to implement a polymorphic queue. Here is my attempt:

        QQueue<Request *> requests;
        while (...) {
            QString line = QString::fromUtf8(client->readLine()).trimmed();
            if (...) {
                Request *request = new Request();
                request->tcpMessage = line.toUtf8();
                request->decodeFromTcpMessage(); // this initializes variables in request using tcpMessage
                if (request->requestType == REQUEST_LOGIN) {
                    LoginRequest loginRequest;
                    request = &loginRequest;
                    request->tcpMessage = line.toUtf8();
                    request->decodeFromTcpMessage();
                    requests.enqueue(request);
                }
                // Here the pointers in "requests" do not point to the objects I created above,
                // and I noticed that their destructors are also called.
                LoginRequest *loginRequest2 = dynamic_cast<LoginRequest *>(requests.dequeue());
                loginRequest2->decodeFromTcpMessage();
            }
        }

    Unfortunately, I could not get the polymorphic queue to work with this code, for the reason mentioned in the second comment. I guess I need to use smart pointers, but how? I'm open to any improvement of my code or a new implementation of a polymorphic queue. Thanks.

    Read the article

  • Advice: Python Framework Server/Worker Queue management (not Website)

    - by Muppet Geoff
    I am looking for advice/opinions on which Python framework to use to implement multiple 'worker' PCs coordinated from a central queue manager. For completeness, the worker PCs will be running audio-conversion routines (which I do not need advice on, and for which I have standalone code that works). The audio conversion takes a long time, and I need to coordinate an arbitrary number of workers from a central location, handing them conversion tasks (such as where to get the source files, or where to ask for the job configuration), with them reporting back some additional info, such as the runtime of the converted audio etc. At present, I have a script that makes a web-service call to get the 'configuration' for a conversion task, based on source files already located on the worker (we manually copy the source files to the worker, and that triggers a conversion routine). I want to change this so that we can distribute conversion tasks ("Oy you, process this: xxx") based on availability and, in an ideal world, based on pending tasks too. There is a chance that workers can go offline mid-conversion (but this is not likely). All the workers are Windows based; the coordinator can be Windows or Linux. In my initial searches I have come across the following - and I know that some are cross-dependent:

    - Celery (with RabbitMQ)
    - Twisted
    - Django

    Using a framework, rather than home-brewing, seems to make more sense to me right now, as I have a limited timeframe in which to develop this functional extension. An additional consideration would be a framework that is compatible with PyQt/PySide, so that I can write a simple UI to display queue status etc. I appreciate that the specifics above are a little vague, and I hope that someone can offer me a pointer or two. Again: I am looking for general advice on which Python framework to investigate further for developing a server/worker 'queue management' solution for non-web activities (this is why Django didn't seem the right fit).

    Read the article

  • How do you handle files that can't support concurrent edits in Mercurial?

    - by Scott Whitlock
    I'm using Mercurial with TortoiseHg. Each developer has their own repositories, and there's one central repository on the server for synchronizing our changes. (This will sound lame, but we're using it to manage the source for a legacy VB6 project. Nothing we can do about that...) As has been pointed out elsewhere, there is a big problem in VB6 with merging the .frx (form resources) files. So code changes seem to merge fine, but if two developers both make changes at the same time in the form design view, we can't merge. I'm ok with disallowing concurrent edits, but of course the whole point of Mercurial is that it's distributed so there is no option to force a file to be locked before editing. I don't believe there's a Mercurial solution for this, so I'm wondering: other developers who are using Mercurial for version control, do you have some 3rd party tool that assists with locking files for editing in the cases where it's necessary? Did we make a mistake using Mercurial instead of something like SVN?

    Read the article

  • Best way to process a queue in C# (PDF treatment)

    - by Bartdude
    First of all, let me explain what I would like to do: I have a long-running web app developed in ASP.NET (C#) 2.0. In this app, users can upload standard PDF files (text + pics). The app is running in production on Windows Server 2003 and has a dedicated database server (SQL Server 2008), also running Windows Server 2003. I myself am a fairly experienced web developer, but I have never actually programmed anything non-web (or at least nothing serious). I plan on adding functionality to the web app for which I would need a JPG snapshot of each page of the PDF. Creating these "thumbnails" isn't the big deal as such; I already do it inside my web app using Ghostscript. I've only done it on one-page documents so far though, and the new functionality will need to process bigger documents. In order for this process to be transparent for the admins as well as the end users, I would like to implement some kind of queue to defer the processing of the thumbnails. There again, no problem creating the queue: it will consist of records in a table, with enough info to find the PDF document again. Then I will need to process this queue, and that's where my questions start. Obviously the best solution to process it isn't an ASP script or the like, so I will have to step out of my familiar environment. No problem, but I have no idea which direction to go. Therefore, a few questions:

    - What should I develop? I presumably need something that stands by on the server, runs when needed, then returns to an idle state until further notice. Should I be looking into a Windows service? Is there another, more appropriate type of project?
    - Depending on the first answer, what will be the approach? Should I somehow have SQL Server "tell" the program/service/... to process the queue, or should I have that program/service/... periodically check the state of the queue and process new items? In both cases, which functionality can I use? We're not talking about hundreds of PDFs a day (50 max, maybe), so I can totally afford to process the queue one item at a time.
    - Can you confirm I don't have to look much further into threads and the like? (I found a lot of answers talking about threads in queue processing, but it looks like overkill for my needs.)
    - Maybe linked to the previous question: what about concurrent calls to the program, whatever it is? Suppose it is currently running and a new record comes into the queue; what should the behaviour be?

    I don't need very detailed answers and would already be happy with answers like "You can do the processing with a service, and yes it's possible to have SQL Server on machine A trigger a service start on machine B" or "You have to develop xxx and then use the scheduler to run it every xxx minutes". I don't mind reading articles and so on, but I can hardly afford to spend too much time learning stuff only to realize I went the wrong way for this project, so basically I'm trying to narrow down the scope of what I need to investigate. Thanks for reading me, I hope I'll find some helping hands on here :-)
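
    For illustration only, here is a minimal sketch of the "periodically check the queue" approach as a console worker (a Windows service would host the same loop in OnStart). The table name, column names and connection string are placeholders invented for the example, not taken from the question.

        using System;
        using System.Data.SqlClient;
        using System.Threading;

        class ThumbnailQueueWorker
        {
            // Placeholder connection string.
            const string ConnectionString = @"Server=dbserver;Database=MyApp;Integrated Security=true";

            static void Main()
            {
                while (true)
                {
                    ProcessNextItem();
                    Thread.Sleep(TimeSpan.FromSeconds(30)); // polling interval
                }
            }

            static void ProcessNextItem()
            {
                using (var connection = new SqlConnection(ConnectionString))
                {
                    connection.Open();

                    // Take the oldest pending item; one at a time is plenty at ~50 PDFs/day.
                    var select = new SqlCommand(
                        "SELECT TOP 1 Id, PdfPath FROM ThumbnailQueue WHERE Processed = 0 ORDER BY Id",
                        connection);

                    int id;
                    string pdfPath;
                    using (var reader = select.ExecuteReader())
                    {
                        if (!reader.Read()) return; // queue is empty
                        id = reader.GetInt32(0);
                        pdfPath = reader.GetString(1);
                    }

                    GenerateThumbnails(pdfPath); // the existing Ghostscript logic goes here

                    var update = new SqlCommand(
                        "UPDATE ThumbnailQueue SET Processed = 1 WHERE Id = @id", connection);
                    update.Parameters.AddWithValue("@id", id);
                    update.ExecuteNonQuery();
                }
            }

            static void GenerateThumbnails(string pdfPath)
            {
                // Call Ghostscript per page, as the web app already does for single-page documents.
            }
        }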

    Read the article

  • What's the best way of accessing a DRb object (e.g. Ruby Queue) from Scala (and Java)?

    - by Tom Morris
    I have built a variety of little scripts using Ruby's very simple Queue class, and I share the Queue between Ruby and JRuby processes using DRb. It would be nice to be able to access these from Scala (and maybe Java) using JRuby. I've put together something using Scala and the JSR-223 interface to access jruby-complete.jar:

        import javax.script._

        class DRbQueue(host: String, port: Int) {
          private var engine = DRbQueue.factory.getEngineByName("jruby")
          private var invoker = engine.asInstanceOf[Invocable]
          engine.eval("require \"drb\" ")
          private var queue = engine.eval("DRbObject.new(nil, \"druby://" + host + ":" + port.toString + "\")")

          def isEmpty(): Boolean = invoker.invokeMethod(this.queue, "empty?").asInstanceOf[Boolean]
          def size(): Long = invoker.invokeMethod(this.queue, "length").asInstanceOf[Long]
          def threadsWaiting: Long = invoker.invokeMethod(this.queue, "num_waiting").asInstanceOf[Long]
          def offer(obj: Any) = invoker.invokeMethod(this.queue, "push", obj.asInstanceOf[java.lang.Object])
          def poll(): Any = invoker.invokeMethod(this.queue, "pop")
          def clear(): Unit = {
            invoker.invokeMethod(this.queue, "clear")
          }
        }

        object DRbQueue {
          var factory = new ScriptEngineManager()
        }

    (It conforms roughly to the java.util.Queue interface, but I haven't declared the interface because it doesn't implement the element and peek methods, since the Ruby class doesn't offer them.) The problem with this is the type conversion. JRuby is fine with Scala's Strings, because they are Java strings, but not with a Scala Int or Long, one of the other Scala types (List, Set, RichString, Array, Symbol) or some other custom type. This seems unnecessarily hacky: surely there has got to be a better way of doing RMI/DRb interop without having to use the JSR-223 API. I could make it so that the offer method serializes the object to, say, a JSON string and takes a structural type of only objects that have a toJson method. I could then write a Ruby wrapper class (or just monkeypatch Queue) that would parse the JSON. Is there any point in carrying on with trying to access DRb from Java/Scala? Might it just be easier to install a real message queue? (If so, any suggestions for a lightweight JVM-based MQ?)

    Read the article

  • Corrupt mercurial repository - cannot update

    - by richzilla
    Hi all, I think I've managed to corrupt one of my Mercurial repositories. Is there any way to recover from the damage? The message I get when attempting to update the repo to my default branch is:

        PS C:\wco\projects\ims\code> hg up -C default
        abort: data/Trunk/application/models/priority_model.php.i@83dbdfb60981: no match found!

    When I run hg verify, I get the following report:

        PS C:\wco\projects\ims\code> hg verify
        checking changesets
        checking manifests
        crosschecking files in changesets and manifests
        checking files
        data/.buildpath.i@0: missing revlog!
        0: empty or missing .buildpath
        .buildpath@0: daf15acb5273 in manifests not found
        data/.project.i@0: missing revlog!
        0: empty or missing .project
        .project@0: e8b25997d6d1 in manifests not found
        Rawlings IMS/index.html@3: empty or missing copy source revlog index.html:b8b2b36a6eea
        Rawlings IMS/index.php@3: empty or missing copy source revlog index.php:a5f76bf232e9
        Rawlings IMS/index_backup.php@3: empty or missing copy source revlog index_backup.php:a5f76bf232e9
        Rawlings IMS/index_this_hold.html@3: empty or missing copy source revlog index_this_hold.html:b8b2b36a6eea
        Rawlings IMS/testera.php@3: empty or missing copy source revlog testera.php:6e9e3666c1bc
        Trunk/application/models/priority_model.php@32: 83dbdfb60981 in manifests not found
        data/index.html.i@0: missing revlog!
        0: empty or missing index.html
        index.html@0: b8b2b36a6eea in manifests not found
        data/index.php.i@0: missing revlog!
        0: empty or missing index.php
        index.php@0: a5f76bf232e9 in manifests not found
        data/index_backup.php.i@0: missing revlog!
        0: empty or missing index_backup.php
        index_backup.php@0: a5f76bf232e9 in manifests not found
        data/index_this_hold.html.i@0: missing revlog!
        0: empty or missing index_this_hold.html
        index_this_hold.html@0: b8b2b36a6eea in manifests not found
        data/nbproject/project.properties.i@14: missing revlog!
        14: empty or missing nbproject/project.properties
        nbproject/project.properties@14: 1efa3074378b in manifests not found
        data/nbproject/project.xml.i@14: missing revlog!
        14: empty or missing nbproject/project.xml
        nbproject/project.xml@14: 52689bf2b35a in manifests not found
        data/rcrawley/.buildpath.i@9: missing revlog!
        9: empty or missing rcrawley/.buildpath
        rcrawley/.buildpath@9: 2c6411d4fcff in manifests not found
        data/rcrawley/.project.i@9: missing revlog!
        9: empty or missing rcrawley/.project
        rcrawley/.project@9: cb4b1495acab in manifests not found
        data/rcrawley/assets/images/ajax-loader.gif.i@9: missing revlog!
        9: empty or missing rcrawley/assets/images/ajax-loader.gif
        rcrawley/assets/images/ajax-loader.gif@9: 8ef26516c7da in manifests not found
        data/rcrawley/index.html.i@9: missing revlog!
        9: empty or missing rcrawley/index.html
        rcrawley/index.html@9: 545246299dee in manifests not found
        data/rcrawley/index.php.i@9: missing revlog!
        9: empty or missing rcrawley/index.php
        rcrawley/index.php@9: 66eac798ed6d in manifests not found
        data/rcrawley/index_backup.php.i@9: missing revlog!
        9: empty or missing rcrawley/index_backup.php
        rcrawley/index_backup.php@9: 93971228fbf5 in manifests not found
        data/rcrawley/system/index.html.i@9: missing revlog!
        9: empty or missing rcrawley/system/index.html
        rcrawley/system/index.html@9: 17360d3b566e in manifests not found
        data/rcrawley/testera.php.i@9: missing revlog!
        9: empty or missing rcrawley/testera.php
        rcrawley/testera.php@9: 7d9bd5196740 in manifests not found
        data/testera.php.i@0: missing revlog!
        0: empty or missing testera.php
        testera.php@0: 6e9e3666c1bc in manifests not found
        1646 files, 59 changesets, 1897 total revisions
        57 integrity errors encountered!
        (first damaged changeset appears to be 0)

    Since the first damaged changeset is 0, this would appear to be pretty terminal. Is there any way around this? Any help would be appreciated.

    Read the article

  • JMS Step 5 - How to Create an 11g BPEL Process Which Reads a Message Based on an XML Schema from a JMS Queue

    - by John-Brown.Evans
    Welcome to another post in the series of blogs which demonstrates how to use JMS queues in a SOA context. The previous posts were:

    - JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g
    - JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue
    - JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue
    - JMS Step 4 - How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue

    Today we will create a BPEL process which will read (dequeue) the message from the JMS queue, which we enqueued in the last example. The JMS adapter will dequeue the full XML payload from the queue.

    1. Recap and Prerequisites

    In the previous examples, we created a JMS Queue, a Connection Factory and a Connection Pool in the WebLogic Server Console. Then we designed and deployed a BPEL composite, which took a simple XML payload and enqueued it to the JMS queue. In this example, we will read that same message from the queue, using a JMS adapter and a BPEL process.
    As many of the configuration steps required to read from that queue were done in the previous samples, this one will concentrate on the new steps. A summary of the required objects is listed below. To find out how to create them, please see the previous samples; they also include instructions on how to verify the objects are set up correctly.

    WebLogic Server Objects

        Object Name             Type                 JNDI Name
        TestConnectionFactory   Connection Factory   jms/TestConnectionFactory
        TestJMSQueue            JMS Queue            jms/TestJMSQueue
        eis/wls/TestQueue       Connection Pool      eis/wls/TestQueue

    Schema XSD File

    The following XSD file is used for the message format. It was created in the previous example and will be copied to the new process.

    stringPayload.xsd

        <?xml version="1.0" encoding="windows-1252" ?>
        <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                    xmlns="http://www.example.org"
                    targetNamespace="http://www.example.org"
                    elementFormDefault="qualified">
          <xsd:element name="exampleElement" type="xsd:string">
          </xsd:element>
        </xsd:schema>

    JMS Message

    After executing the previous samples, the following XML message should be in the JMS queue located at jms/TestJMSQueue:

        <?xml version="1.0" encoding="UTF-8" ?><exampleElement xmlns="http://www.example.org">Test Message</exampleElement>

    JDeveloper Connection

    You will need a valid Application Server Connection in JDeveloper pointing to the SOA server which the process will be deployed to.

    2. Create a BPEL Composite with a JMS Adapter Partner Link

    In the previous example, we created a composite in JDeveloper called JmsAdapterWriteSchema. In this one, we will create a new composite called JmsAdapterReadSchema. There are probably many ways of incorporating a JMS adapter into a SOA composite for incoming messages. One way is to design the process in such a way that the adapter polls for new messages and, when it dequeues one, initiates a SOA or BPEL instance. This is possibly the most common use case. Other use cases include mid-flow adapters, which are activated from within the BPEL process. In this example we will use a polling adapter, because it is the simplest to set up and demonstrate. But it has one disadvantage as a demonstrative model: when a polling adapter is active, it will dequeue all messages as soon as they reach the queue. This makes it difficult to monitor messages we are writing to the queue, because they will disappear from the queue as soon as they have been enqueued. To work around this, we will shut down the composite after deploying it and restart it as required. (Another solution for this would be to pause consumption for the queue and resume consumption again when needed. This can be done in the WLS console under JMS Modules -> queue -> Control -> Consumption -> Pause/Resume.) We will model the composite as a one-way incoming process. Usually, a BPEL process will do something useful with the message after receiving it, such as passing it to a database or file adapter, a human workflow or an external web service. But we only want to demonstrate how to dequeue a JMS message using BPEL and a JMS adapter, so we won't complicate the design with further activities. However, we do want to be able to verify that we have read the message correctly, so the BPEL process will include a small piece of embedded Java code, which will print the message to standard output, so we can view it in the SOA server's log file. Alternatively, you can view the instance in the Enterprise Manager and verify the message.
    The following steps are all executed in JDeveloper. Create the project in the same JDeveloper application used for the previous examples, or create a new one.

    Create a SOA Project

    Create a new project and choose SOA Tier > SOA Project as its type. Name it JmsAdapterReadSchema. When prompted for the composite type, choose Empty Composite.

    Create a JMS Adapter Partner Link

    In the composite editor, drag a JMS adapter over from the Component Palette to the left-hand swim lane, under Exposed Services. This will start the JMS Adapter Configuration Wizard. Use the following entries:

    - Service Name: JmsAdapterRead
    - Oracle Enterprise Messaging Service (OEMS): Oracle WebLogic JMS
    - AppServer Connection: Use an application server connection pointing to the WebLogic server on which the JMS queue and connection factory mentioned under Prerequisites above are located.
    - Adapter Interface > Interface: Define from operation and schema (specified later)
    - Operation Type: Consume Message
    - Operation Name: Consume_message

    Consume Operation Parameters

    - Destination Name: Press the Browse button, select Destination Type: Queues, then press Search. Wait for the list to populate, then select the entry for TestJMSQueue, which is the queue created in a previous example.
    - JNDI Name: The JNDI name to use for the JMS connection. As in the previous example, this is probably the most common source of error. This is the JNDI name of the JMS adapter's connection pool created in the WebLogic Server, which points to the connection factory. JDeveloper does not verify the value entered here; if you enter a wrong value, the JMS adapter won't find the queue and you will get an error message at runtime, which is very difficult to trace. In our example, this is the value eis/wls/TestQueue. (See the earlier step on how to create a JMS Adapter Connection Pool in WebLogic Server for details.)
    - Messages/Message SchemaURL: We will use the XSD file created during the previous example, in the JmsAdapterWriteSchema project, to define the format for the incoming message payload and, at the same time, demonstrate how to import an existing XSD file into a JDeveloper project. Press the magnifying glass icon to search for schema files. In the Type Chooser, press the Import Schema File button. Select the magnifying glass next to URL to search for schema files. Navigate to the location of the JmsAdapterWriteSchema project > xsd and select the stringPayload.xsd file. Check the "Copy to Project" checkbox, press OK and confirm the following Localize Files popup. Now that the XSD file has been copied to the local project, it can be selected from the project's schema files. Expand Project Schema Files > stringPayload.xsd and select exampleElement: string.

    Press Next and Finish, which will complete the JMS Adapter configuration. Save the project.

    Create a BPEL Component

    Drag a BPEL Process from the Component Palette (Service Components) to the Components section of the composite designer. Name it JmsAdapterReadSchema, select Template: Define Service Later and press OK.

    Wire the JMS Adapter to the BPEL Component

    Now wire the JMS adapter to the BPEL process by dragging the arrow from the adapter to the BPEL process. A Transaction Properties popup will be displayed. Set the delivery mode to async.persist. This completes the steps at the composite level.
    3. Complete the BPEL Process Design

    Invoke the BPEL Flow via the JMS Adapter

    Open the BPEL component by double-clicking it in the design view of the composite.xml, or open it from the project navigator by selecting the JmsAdapterReadSchema.bpel file. This will display the BPEL process in the design view. You should see the JmsAdapterRead partner link in the left-hand swim lane. Drag a Receive activity onto the BPEL flow diagram, then drag a wire (left-hand yellow arrow) from it to the JMS adapter. This will open the Receive activity editor. Auto-generate the variable by pressing the green "+" button and check the "Create Instance" checkbox. This will result in a BPEL instance being created when a new JMS message is received.

    At this point it would actually be OK to compile and deploy the composite, and it would pick up any messages from the JMS queue. In fact, you can do that to test it, if you like. But it is very rudimentary and would not be doing anything useful with the message. Also, you could only verify the actual message payload by looking at the instance's flow in the Enterprise Manager. There are various other possibilities; we could pass the message to another web service, write it to a file using a file adapter or to a database via a database adapter, etc. But these would all introduce unnecessary complications to our sample. So, to keep it simple, we will add a small piece of Java code to the BPEL process which will write the payload to standard output. This will be written to the server's log file, which will be easy to monitor.

    Add a Java Embedding Activity

    First get the full name of the process's input variable, as this will be needed for the Java code. Go to the Structure pane and expand Variables > Process > Variables. Then expand the input variable, for example "Receive1_Consume_Message_InputVariable > body > ns2:exampleElement", and note the variable's name and path, if they are different from this one. Drag a Java Embedding activity from the Component Palette (Oracle Extensions) to the BPEL flow, after the Receive activity, then open it to edit. Delete the example code and replace it with the following, replacing the variable parts with those in your sample, if necessary:

        System.out.println("JmsAdapterReadSchema process picked up a message");
        oracle.xml.parser.v2.XMLElement inputPayload =
            (oracle.xml.parser.v2.XMLElement)getVariableData(
                                "Receive1_Consume_Message_InputVariable",
                                "body",
                                "/ns2:exampleElement");
        String inputString = inputPayload.getFirstChild().getNodeValue();
        System.out.println("Input String is " + inputPayload.getFirstChild().getNodeValue());

    Tip: if you are not sure of the exact syntax of the input variable, create an Assign activity in the BPEL process and copy the variable to another, temporary one. Then check the syntax created by the BPEL designer.

    This completes the BPEL process design in JDeveloper. Save, compile and deploy the process to the SOA server.

    4. Test the Composite

    Shut Down the JmsAdapterReadSchema Composite

    After deploying the JmsAdapterReadSchema composite to the SOA server, it is automatically activated. If there are already any messages in the queue, the adapter will begin polling them. To ease the testing process, we will deactivate the process first. Log in to the Enterprise Manager (Fusion Middleware Control) and navigate to SOA > soa-infra (soa_server1) > default (or wherever you deployed your composite to) and click on JmsAdapterReadSchema [1.0].
    Press the Shut Down button to disable the composite and confirm the following popup.

    Monitor Messages in the JMS Queue

    In a separate browser window, log in to the WebLogic Server Console and navigate to Services > Messaging > JMS Modules > TestJMSModule > TestJMSQueue > Monitoring. This is the location of the JMS queue we created in an earlier sample (see the prerequisites section of this sample). Check whether there are any messages already in the queue. If so, you can dequeue them using the QueueReceive Java program created in an earlier sample. This will ensure that the queue is empty and doesn't contain any messages in the wrong format, which would cause the JmsAdapterReadSchema to fail.

    Send a Test Message

    In the Enterprise Manager, navigate to the JmsAdapterWriteSchema created earlier, press Test and send a test message, for example "Message from JmsAdapterWriteSchema". Confirm that the message was written correctly to the queue by verifying it via the queue monitor in the WLS Console.

    Monitor the SOA Server's Output

    A program deployed on the SOA server will write its standard output to the terminal window in which the server was started, unless this has been redirected somewhere else, for example to a file. If it has not been redirected, go to the terminal session in which the server was started; otherwise, open and monitor the file to which it was redirected.

    Re-Enable the JmsAdapterReadSchema Composite

    In the Enterprise Manager, navigate to the JmsAdapterReadSchema composite again and press Start Up to re-enable it. This should cause the JMS adapter to dequeue the test message, and the following output should be written to the server's standard output:

        JmsAdapterReadSchema process picked up a message.
        Input String is Message from JmsAdapterWriteSchema

    Note that you can also monitor the payload received by the process by navigating to the JmsAdapterReadSchema's Instances tab in the Enterprise Manager. Then select the latest instance and view the flow of the BPEL component. The Receive activity will contain and display the dequeued message too.

    5. Troubleshooting

    This sample demonstrates how to dequeue an XML JMS message using a BPEL process and no additional functionality. For example, it doesn't contain any error handling. Therefore, any errors in the payload will result in exceptions being written to the log file or standard output. If you get any errors related to the payload, such as

        Message handle error ... ORABPEL-09500 ... XPath expression failed to execute.
        An error occurs while processing the XPath expression; the expression is /ns2:exampleElement. ... etc.

    check that the variable used in the Java embedding part of the process was entered correctly. Possibly follow the tip mentioned in the previous section. If this doesn't help, you can delete the Java embedding part and simply verify the message via the flow diagram in the Enterprise Manager, or use a different method, such as writing it to a file via a file adapter.

    This concludes this example. In the next post, we will begin with an AQ JMS example, which uses JMS to write to an Advanced Queue stored in the database.

    Best regards
    John-Brown Evans
    Oracle Technology Proactive Support Delivery

    Read the article

  • When designing a job queue, what should determine the scope of a job?

    - by Stuart Pegg
    We've got a job queue system that'll cheerfully process any kind of job given to it. We intend to use it to process jobs that each contain 2 tasks:

    Job (pass information from one server to another)
    - Fetch task (get the data, slowly)
    - Send task (send the data, comparatively quickly)

    The difficulty we're having is that we don't know whether to break the tasks into separate jobs, or process the job in one go. Are there any best practices or useful references on this subject? Is there some obvious benefit to a method that we're missing? So far we can see these benefits for each method:

    Split
    - Job lease length reflects job length: rather than the total of the two
    - Finer granularity on recovery: if we lose outgoing connectivity we can tell them all to retry
    - The starting state of the second task is saved to job history: helps with debugging (although similar logging could be added in the single-task method)

    Single
    - Single job to be scheduled: less processing overhead
    - Data not stale on recovery: if the outgoing downtime is quite long, the pending Send jobs could be outdated

    Read the article

  • Web Site Performance and Assembly Versioning – Part 3 Versioning Combined Files Using Mercurial

    - by capgpilk
    - Minification and Concatenation of JavaScript and CSS Files
    - Versioning Combined Files Using Subversion
    - Versioning Combined Files Using Mercurial – this post

    I have worked on a project recently where there was a need to version the system (library dll, CSS and JavaScript files) by date and Mercurial revision number. This was in the format:

        0.12.524.407
        {major}.{year}.{month}{date}.{mercurial revision}

    Each time there is an internal build using the CI server, it labels the files using this format. When it came time to do a major release, it became v1.{year}.{month}{date}.{mercurial revision}, with each public release having a major version increment. Also as a requirement, each assembly had to have a new GUID on each build. So, as in previous posts, we need to edit the csproj file and add a couple of default targets.

        <?xml version="1.0" encoding="utf-8"?>
        <Project ToolsVersion="4.0" DefaultTargets="Hg-Revision;AssemblyInfo;Build"
                 xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <PropertyGroup>

    Right below the closing tag of the entire project we add our two targets; the first is to get the Mercurial revision number. We first need to import the tasks for MSBuild, which can be downloaded from http://msbuildhg.codeplex.com/

        <Import Project="..\Tools\MSBuild.Mercurial\MSBuild.Mercurial.Tasks" />

        <Target Name="Hg-Revision">
          <HgVersion LocalPath="$(MSBuildProjectDirectory)" Timeout="5000"
                     LibraryLocation="C:\TortoiseHg\">
            <Output TaskParameter="Revision" PropertyName="Revision" />
          </HgVersion>
          <Message Text="Last revision from HG: $(Revision)" />
        </Target>

    with the main Mercurial files being located at C:\TortoiseHg. To get a valid GUID we need to escape from the csproj markup and call some C# code, which we put in a property group for later reference.

        <PropertyGroup>
          <GuidGenFunction>
            <![CDATA[
              public static string ScriptMain() {
                return System.Guid.NewGuid().ToString().ToUpper();
              }
            ]]>
          </GuidGenFunction>
        </PropertyGroup>

    Now we add in our target for generating the GUID.
        <Target Name="AssemblyInfo">
          <Script Language="C#" Code="$(GuidGenFunction)">
            <Output TaskParameter="ReturnValue" PropertyName="NewGuid" />
          </Script>
          <Time Format="yy">
            <Output TaskParameter="FormattedTime" PropertyName="year" />
          </Time>
          <Time Format="Mdd">
            <Output TaskParameter="FormattedTime" PropertyName="daymonth" />
          </Time>
          <AssemblyInfo CodeLanguage="CS" OutputFile="Properties\AssemblyInfo.cs"
                        AssemblyTitle="name" AssemblyDescription="description"
                        AssemblyCompany="none" AssemblyProduct="product"
                        AssemblyCopyright="Copyright ©"
                        ComVisible="false" CLSCompliant="true" Guid="$(NewGuid)"
                        AssemblyVersion="$(Major).$(year).$(daymonth).$(Revision)"
                        AssemblyFileVersion="$(Major).$(year).$(daymonth).$(Revision)" />
        </Target>

    So this will give us an AssemblyInfo.cs file like this just prior to calling the Build task:

        using System;
        using System.Reflection;
        using System.Runtime.CompilerServices;
        using System.Runtime.InteropServices;

        [assembly: AssemblyTitle("name")]
        [assembly: AssemblyDescription("description")]
        [assembly: AssemblyCompany("none")]
        [assembly: AssemblyProduct("product")]
        [assembly: AssemblyCopyright("Copyright ©")]
        [assembly: ComVisible(false)]
        [assembly: CLSCompliant(true)]
        [assembly: Guid("9C2C130E-40EF-4A20-B7AC-A23BA4B5F2B7")]
        [assembly: AssemblyVersion("0.12.524.407")]
        [assembly: AssemblyFileVersion("0.12.524.407")]

    This gives us the correct version for the assembly. It can be referenced within your project, whether web or Windows based, like this:

        public static string AppVersion()
        {
            return Assembly.GetExecutingAssembly().GetName().Version.ToString();
        }

    As mentioned in previous posts in this series, you can label CSS and JavaScript files using this version number and the GetAssemblyIdentity task from the main MSBuild task library built into the .NET Framework.

        <GetAssemblyIdentity AssemblyFiles="bin\TheAssemblyFile.dll">
          <Output TaskParameter="Assemblies" ItemName="MyAssemblyIdentities" />
        </GetAssemblyIdentity>

    Then use this to write out the files:

        <WriteLinestoFile
          File="Client\site-style-%(MyAssemblyIdentities.Version).combined.min.css"
          Lines="@(CSSLinesSite)" Overwrite="true" />

    Read the article

  • Task queue java

    - by user268515
    Hi, I'm new to task queue concepts. When I referred to the guide, I got stuck on this line:

        queue.add(DatastoreServiceFactory.getDatastoreService().getCurrentTransaction(),
                  TaskOptions().url("/path/to/my/worker"));

    What is the TaskOptions() method? Is it a default method or a method created manually, and what will TaskOptions() return? I created a method called TaskOption(); when I try to return a string value from it, I get the error "The method url(String) is undefined for the type String". Also, in the url, should I specify a servlet or something else? My doubt may be stupid, but please clarify it. Thank you, sharun.

    Read the article

  • What is TombstonedTaskError from App Engine's Task Queue?

    - by dbr
    What does the TombstonedTaskError mean? It is being raised while trying to add a task to the queue from a cron job:

        Traceback (most recent call last):
          File "/base/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 501, in __call__
            handler.get(*groups)
          File "/base/data/home/apps/.../tasks.py", line 132, in get
            ).add(queue_name = 'userfeedcheck')
          File "/base/python_lib/versions/1/google/appengine/api/labs/taskqueue/taskqueue.py", line 495, in add
            return Queue(queue_name).add(self)
          File "/base/python_lib/versions/1/google/appengine/api/labs/taskqueue/taskqueue.py", line 563, in add
            self.__TranslateError(e)
          File "/base/python_lib/versions/1/google/appengine/api/labs/taskqueue/taskqueue.py", line 619, in __TranslateError
            raise TombstonedTaskError(error.error_detail)
        TombstonedTaskError

    Searching the documentation, it only has the following to say:

        exception TombstonedTaskError(InvalidTaskError)
            Task has been tombstoned.

    ...which isn't particularly helpful. I couldn't find anything useful in the App Engine code either.

    Read the article

  • Best way to use SFTP folder as concurrent work queue

    - by Gabe Moothart
    I am writing a C# Windows service which will be polling an SFTP folder for new files (one file = one job) and processing them. Multiple instances of the service may be running at the same time, so it is important that they do not step on each other. I realize that an SFTP folder does not make an ideal queue, but that's what I have to work with. What do I need to do to either use this SFTP folder as a concurrent message queue, or safely represent it in a way that can be used concurrently?
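
    One common way to keep multiple pollers from stepping on each other is to "claim" a file by renaming it before processing, since a rename either succeeds for exactly one worker or fails. The sketch below only illustrates that idea against a minimal, hypothetical ISftpClient abstraction (the real calls depend on whichever SFTP library is used, e.g. SSH.NET); the folder name and claim suffix are also made up for the example.

        using System;
        using System.Collections.Generic;

        // Hypothetical abstraction over whatever SFTP library is actually used.
        public interface ISftpClient
        {
            IEnumerable<string> ListFiles(string directory); // returns file names
            void Rename(string oldPath, string newPath);     // assumed atomic on the server
            void Delete(string path);
            byte[] Download(string path);
        }

        public class SftpQueuePoller
        {
            private readonly ISftpClient client;
            private readonly string incomingDir = "/jobs/incoming"; // placeholder
            private readonly string claimSuffix = "." + Environment.MachineName + ".claimed";

            public SftpQueuePoller(ISftpClient client) { this.client = client; }

            public void PollOnce()
            {
                foreach (var name in client.ListFiles(incomingDir))
                {
                    if (name.EndsWith(".claimed")) continue; // already taken by some worker

                    var original = incomingDir + "/" + name;
                    var claimed = original + claimSuffix;
                    try
                    {
                        // Whichever instance renames first wins the job;
                        // the others get an error and simply move on.
                        client.Rename(original, claimed);
                    }
                    catch (Exception)
                    {
                        continue; // someone else claimed it
                    }

                    var data = client.Download(claimed);
                    Process(data);          // the existing job-processing code
                    client.Delete(claimed); // or move it to a "done" folder
                }
            }

            private void Process(byte[] fileContents) { /* ... */ }
        }

    Whether the rename is truly atomic depends on the SFTP server in use, so that assumption is worth verifying before relying on it.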

    Read the article

  • create a queue of process in classic asp

    - by austin powers
    Hi, here is the problem: there is a classic ASP app which calls lame.exe to encode MP3s many times per day, and there is no control over the way lame.exe is called by several users; in other words, there is no queue for that purpose. So here is what I am thinking about:

        // the code below is all pseudo-code
        // process_flag, mp3 and processId all reside in a database

        function addQ(string mp3)
            add a record to the database and set process_flag to undone
            then goto checkQ
        end function

        function checkQ()
            if there is a process in the queue list
                sort it by processID asc
                for each processID
                    processQ(processID)
                end for
        end function

        function ProcessQ(int processID)
            run lame.exe with the help of wscript.exe
            after doing the job set the process_flag to done
        end function

    So I just want to know: is there any better solution, or any other approaches out there? Regards.

    Read the article

  • Unique task queue task names only for active duration

    - by antony.trupe
    I want to guarantee that a task is not in a task queue more than once, so I generate a unique name based on its payload. But that task name is reserved for up to 7 days, which is not what I want; I only want it reserved for the duration the task is queued, and it could be immediately re-queued. Once a task with name N is written, any subsequent attempts to insert a task named N will fail; eventually (at least seven days after the task successfully executes), the task will be deleted and the name N can be reused. Is there a way to check whether the named task is already in the queue and add it only if it's not? Or a totally different approach?

    Read the article

  • message queue : selection and sizing

    - by user238591
    Hi, my requirements are:

    - 20 messages/s, each 1 - 1.5 MB
    - high availability (2 to 4 servers minimum)
    - low latency (high daily volume - keeping everything in RAM preferred)
    - a persistent poisoned-message queue
    - only a few clients (about 16), locally
    - 12-16 GB of RAM available per server (broker)

    Which JMS message queue / messaging product would you recommend, and on what configuration (CPU/RAM)? Can I propose optional NAS persistence (in case of final delivery failure)? Thanks

    Read the article

  • MD3200i Slow Performance and Queue Depth

    - by Caleb_S
    Read performance on our SAN is slow under certain workloads. When we compare this to some local storage, we find the local storage performing 2x as fast. The SAN performs well with a high queue depth and poorly with a low queue depth; the local storage, however, performs well with a low queue depth. I'd like to know why this occurs and what the specific limiting factor is in this situation.

    MD3200i iSCSI SAN ($15,000)
    - 6 x 600GB 15k SAS RAID5
    - 6 x 2TB 7.2k NLS RAID5

    XCOPY /j benchmark (slow):
    - 15k array - 71 MB/s (queue depth 1)
    - 7.2k array - 71 MB/s (queue depth 1)

    Robocopy /MT:32 benchmark (fast):
    - 15k array - 171 MB/s (queue depth ~12)
    - 7.2k array - 128 MB/s (queue depth ~12)

    Read performance on a local controller is fast under the workload the SAN is slow at.

    HighPoint 2230 RAID Controller ($600)
    - 4 x 1TB 7.2k SATA RAID5

    XCOPY /j benchmark:
    - 7.2k array - 145 MB/s (queue depth 1) (appears to max out the SATA bus)

    Read the article

  • MSMQ - Message Queue Abstraction and Pattern

    - by Maxim Gershkovich
    Hi all, let me define the problem first and why a message queue has been chosen. I have a data layer that will be transactional and EXTREMELY insert-heavy, and rather than attempt to deal with these issues when they occur, I am hoping to design my application from the ground up with this in mind. I have decided to tackle the problem by using Microsoft Message Queuing (MSMQ) and performing inserts asynchronously as time permits. However, I quickly ran into a problem: certain inserts that I perform may need to be recalled (i.e. retrieved) immediately (imagine this is for a POS system, and what happens if you need to recall the last transaction - one that still hasn't been inserted). The way I decided to tackle this is by abstracting the MessageQueue and combining it with my data access layer, thereby creating the illusion of a single set of data being returned to the user of the data layer (I have considered the other issues that occur in such a scenario - essentially dirty reads and such - and have concluded that for my purposes I can control them). However, this is where things get a little nasty... I've worked out how to get the messages back (a trivial enough problem), but where I am stuck is: how do I create a generic (or at least somewhat generic) way of querying my message queue, one where I can minimize the duplication between the SQL queries and the MessageQueue queries? I have considered using LINQ (but have a very limited understanding of the technology) and have also attempted an implementation with predicates, which so far is pretty smelly. Are there any patterns for such a problem that I can utilize? Am I going about this the wrong way? Does anyone have ideas of their own about how I can tackle this problem? Does anyone even understand what I am talking about? :-) Any and all input would be highly appreciated and seriously considered. Thanks again.
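
    One hedged sketch of the "single predicate, two sources" idea: if the messages still sitting in MSMQ deserialize back into the same entity type the database stores, both sources can be filtered with one LINQ-to-Objects predicate, so the query logic is written only once. The SaleRecord type, the queue path and the LoadCommittedSales stub below are invented for illustration; MessageQueue, XmlMessageFormatter and GetAllMessages are the standard System.Messaging API.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Messaging;

        // Placeholder entity type for illustration.
        public class SaleRecord
        {
            public int Id { get; set; }
            public DateTime Timestamp { get; set; }
            public decimal Amount { get; set; }
        }

        public class SaleRepository
        {
            private const string QueuePath = @".\private$\pendingSales"; // placeholder

            // One predicate is applied to both the committed rows and the
            // not-yet-inserted messages, so the query logic exists in one place.
            public IEnumerable<SaleRecord> Query(Func<SaleRecord, bool> predicate)
            {
                return LoadCommittedSales()        // rows already in SQL Server
                    .Concat(LoadPendingSales())    // records still sitting in MSMQ
                    .Where(predicate);
            }

            private IEnumerable<SaleRecord> LoadPendingSales()
            {
                using (var queue = new MessageQueue(QueuePath))
                {
                    var formatter = new XmlMessageFormatter(new[] { typeof(SaleRecord) });
                    foreach (Message message in queue.GetAllMessages())
                    {
                        message.Formatter = formatter;
                        yield return (SaleRecord)message.Body;
                    }
                }
            }

            private IEnumerable<SaleRecord> LoadCommittedSales()
            {
                // Placeholder for the existing data-access code (e.g. a SELECT mapped to SaleRecord).
                return Enumerable.Empty<SaleRecord>();
            }
        }

    A call such as repository.Query(s => s.Timestamp >= DateTime.Today) then returns today's transactions whether or not they have reached the database yet, subject to the dirty-read caveats already noted in the question.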

    Read the article

  • Utilizing a Queue

    - by Nathan
    I'm trying to store records of transactions, both all together and by category, for the last 1, 7, 30 or 360 days. I've tried a couple of things, but they've failed brutally. I had an idea of using a queue with 360 values, one for each day, but I don't know enough about queues to figure out how that would work. Input will be an instance of this class:

        class Transaction
        {
            public string TotalEarned { get; set; }
            public string TotalHST { get; set; }
            public string TotalCost { get; set; }
            public string Category { get; set; }
        }

    New transactions can occur at any time during the day, and there could be as many as 15 transactions in a day. My program is using a plain text file as external storage, but how I load it depends on how I decide to store this data. What would be the best way to do this?
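
    For illustration, a simpler alternative to a 360-slot queue is to record a date on each transaction and filter with LINQ. The Date property and the switch to decimal totals below are additions made up for this sketch, not part of the original class.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Transaction
        {
            public DateTime Date { get; set; }       // added so records can be filtered by age
            public decimal TotalEarned { get; set; } // decimal instead of string so sums are possible
            public decimal TotalHST { get; set; }
            public decimal TotalCost { get; set; }
            public string Category { get; set; }
        }

        static class TransactionReports
        {
            // All transactions in the last 'days' days, newest first.
            public static IEnumerable<Transaction> LastDays(IEnumerable<Transaction> all, int days)
            {
                var cutoff = DateTime.Today.AddDays(-days);
                return all.Where(t => t.Date >= cutoff).OrderByDescending(t => t.Date);
            }

            // Earnings per category over the same window.
            public static Dictionary<string, decimal> EarnedByCategory(IEnumerable<Transaction> all, int days)
            {
                return LastDays(all, days)
                    .GroupBy(t => t.Category)
                    .ToDictionary(g => g.Key, g => g.Sum(t => t.TotalEarned));
            }
        }

    With at most ~15 transactions a day, a full year is only a few thousand records, so loading the text file into a List<Transaction> and filtering on demand is cheap; no fixed-size queue is needed.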

    Read the article

  • Multi-user job/task tracking/queue software

    - by Bmsgaffer86
    Background: I test and repair electronic products with a team. There are many 'jobs' going through my lab at any point in time. It is getting difficult to track what's coming in and going out because I don't do every test or repair myself.

    Target: A user can enter a job when they drop it off in my lab, and it will appear on the master list or queue. It needs to have priorities and due dates that can be adjusted by users. Ideally this would be web-based and open source, but I am flexible.

    Dream: A large monitor displaying a list of jobs in the master queue with details. This is very optional though, and would be the best-case scenario.

    I have done MANY hours of Googling and I am not sure I have been using the right terminology, but I have not found anything that is simple enough to stand alone yet complex enough to be multi-user. I am mildly proficient in VB, and have the drive to piece together anything I have to. I am open to ANY help or suggestions.

    Read the article

  • Making mercurial subrepositories behave like subversion externals

    - by Emily Dickinson
    Hi guys, the FAQ and hginit.com have been really useful in helping me make the transition from svn to hg. However, when it comes to using Hg's subrepository feature in the manner of Subversion's externals, I've tried everything and cannot replicate the nice behaviour of svn externals. Here's the simplest example of what I want to do:

    Init a "lib" repository
    This repository is never to be used standalone; it's always included by main repositories as a subrepository.

    Init one or more including repositories
    To keep the example simple, I'll "init" a repository called "main".

    Have "main" include "lib" as a subrepository
    Importantly - AND HERE'S WHAT I CAN'T GET TO WORK: when I modify a file inside of "main/lib" and push the modification, that change should get pushed to the "lib" repository, NOT to a copy inside of "main".

    Command lines speak louder than words. I've tried so many variations on this theme, but here's the gist. If someone can reply in command lines, I'll be forever grateful!

    1. Init the "lib" repository:

        $ cd /home/moi/hgrepos    ## Where I'm storing my hg repositories, on my main server
        $ hg init lib
        $ echo "foo" > lib/lib.txt
        $ hg add lib
        $ hg ci -A -m "Init lib" lib

    2. Init the "main" repository, and include "lib" as a subrepository:

        $ cd /home/moi/hgrepos
        $ hg init main
        $ echo "foo" > main/main.txt
        $ hg add main
        $ cd main
        $ hg clone ../lib lib
        $ echo "lib=lib" > .hgsub
        $ hg ci -A -m "Init main" .

    This all works fine, but when I make a clone of the "main" repository, make local modifications to files in "main/lib", and push them, the changes get pushed to "main/lib", NOT to "lib". In command-line-ese, this is the problem:

        $ cd /home/moi/hg-test
        $ hg clone ssh://[email protected]/hgrepos/lib lib
        $ hg clone ssh://[email protected]/hgrepos/main main
        $ cd main
        $ echo foo >> lib/lib.txt
        $ hg st
        M lib.txt
        $ hg com -m "Modified lib.txt, from inside the main repos" lib.txt
        $ hg push
        pushing to ssh://[email protected]/hgrepos/main/lib

    That last line of output from hg shows the problem. It shows that I've made a modification to a COPY of a file in lib, NOT to a file in the lib repository. If this were working as I'd like it to, the push would go to hgrepos/lib, NOT to hgrepos/main/lib. I.e., I would see:

        $ hg push
        pushing to ssh://[email protected]/hgrepos/lib

    If you can answer this in terms of command lines rather than English, I will be eternally grateful! Thank you in advance! Emily in Portland

    Read the article

  • Mercurial branching a branch doesn't display right in hg serve or hg view

    - by Mystic
    I've been doing some development on a branch and realized that before it could be complete, something else needed to be done first. I decided that I would branch my current branch, do the requisite changes in that branch, then merge them back together and then merge my working branch into default. Basically I expected this:

        | | + requisite work branch commit.
        | |/
        | + working branch commit
        |/
        + Default branch commit

    and in the end what I expect to do is this:

        + Merge into default
        |\
        | + Merge requisite work into working branch
        | |\
        | | + requisite work branch commit.
        | |/
        | + working branch commit
        |/
        + Default branch commit

    What I'm getting in both hg view and hg serve is this:

        | + requisite work branch commit.
        | |
        | + working branch commit
        |/
        + Default branch commit

    However, when I look at the commit log, "requisite work branch commit" is marked as being part of a different branch. Am I doing something wrong? Is this a bug in hg view and hg serve? Has anyone experienced this before?

    Read the article

  • Mercurial hg clone error - "abort: error: Name or service not known"

    - by Bojan Milankovic
    I have installed the latest hg package available for Fedora Linux. However, hg clone reports an error:

        hg clone http://localmachine001:8000/ repository
        abort: error: Name or service not known

    localmachine001 is a computer within the local network. I can ping it from my Linux box without any problems, and I can also use the same HTTP address to browse the existing code. However, hg clone does not work. If I execute the same command from my Macintosh machine, I can clone the repository without trouble. Some Internet resources recommend editing the .hgrc file and adding a proxy to it:

        [http_proxy]
        host=proxy:8080

    I have tried that without any success. Also, I assume that a proxy is not needed in this case, since the hg server machine is on my local network. Can anyone recommend what I should do, or how I could track down the problem? Thank you in advance.

    Read the article
