Search Results

Search found 29159 results on 1167 pages for 'xml configuration'.


  • How should I define pom.xml in each module so that the web module can communicate with the other two EJB modules?

    - by Kayser
    Maven, maven, maven. It is nice for a small application; now I want to build an EAR project: two EJB modules, a web module, and an EAR module to build the EAR file. The web module depends on the two EJB modules. How should I define the pom.xml in each module so that the web module can communicate with the two EJB modules inside the EAR, and so that the EAR module builds the right EAR file? What I have done so far:

    Module 1 -- Basic module. All other modules depend on this module. Basic functionality like login etc.

        <packaging>ejb</packaging>

    Module 2 -- Data module. All entities are here. Type EJB.

        <dependency>
          <groupId>com.myCompnay</groupId>
          <artifactId>Modul_Basic</artifactId>
          <version>0.0.1-SNAPSHOT</version>
          <type>ejb</type>
        </dependency>

    Module 3 -- Business module. Business facades are here. Type EJB.

        <dependency>
          <groupId>com.myCompnay</groupId>
          <artifactId>Modul_Basic</artifactId>
          <version>0.0.1-SNAPSHOT</version>
          <type>ejb</type>
        </dependency>

    Web module -- Type is WAR.

        <dependency>
          <groupId>com.myCompnay</groupId>
          <artifactId>Modul_Basic</artifactId>
          <version>0.0.1-SNAPSHOT</version>
          <type>ejb</type>
        </dependency>

    EAR module -- In this project I try to build the EAR.

        <packaging>ear</packaging>
        <dependencies>
          <dependency>
            <groupId>com.myCompnay</groupId>
            <artifactId>Modul_Basic</artifactId>
            <version>0.0.1-SNAPSHOT</version>
            <type>ejb</type>
          </dependency>
          <dependency>
            <groupId>com.myCompnay</groupId>
            <artifactId>Modul_Business</artifactId>
            <version>0.0.1-SNAPSHOT</version>
            <type>ejb</type>
          </dependency>
          <dependency>
            <groupId>com.myCompnay</groupId>
            <artifactId>Modul_WEB</artifactId>
            <version>0.0.1-SNAPSHOT</version>
            <type>war</type>
          </dependency>
        </dependencies>
        <build>
          <plugins>
            <plugin>
              <groupId>org.apache.maven.plugins</groupId>
              <artifactId>maven-ear-plugin</artifactId>
            </plugin>
          </plugins>
        </build>
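    Not from the question, but typically the missing piece here: the EAR module's maven-ear-plugin configuration declares each artifact as a module. A sketch reusing the artifact ids above; the context root and lib directory are assumptions:

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-ear-plugin</artifactId>
          <configuration>
            <!-- shared jars (e.g. the basic module's client classes) go here -->
            <defaultLibBundleDir>lib</defaultLibBundleDir>
            <modules>
              <ejbModule>
                <groupId>com.myCompnay</groupId>
                <artifactId>Modul_Basic</artifactId>
              </ejbModule>
              <ejbModule>
                <groupId>com.myCompnay</groupId>
                <artifactId>Modul_Business</artifactId>
              </ejbModule>
              <webModule>
                <groupId>com.myCompnay</groupId>
                <artifactId>Modul_WEB</artifactId>
                <contextRoot>/myapp</contextRoot> <!-- hypothetical -->
              </webModule>
            </modules>
          </configuration>
        </plugin>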

  • WCF: WSDL-first approach: Problems with generating fault types

    - by Juri
    Hi, I'm currently in the process of creating a WCF web service which should be compatible with WS-I Basic Profile 1.1. I'm using a WSDL-first approach (actually for the first time): first defining the XSD for the complex types, then the WSDL, and then using svcutil.exe to generate the corresponding server- and client-side interfaces/proxies. So far everything works fine. Then I decided to add a fault to my WSDL. Regenerating with svcutil succeeded, but then I noticed that my generated fault doesn't have the properties I defined in my XSD file (which is imported by my WSDL).

    Fault XSD definition:

        <?xml version="1.0" encoding="UTF-8"?>
        <schema xmlns="http://www.w3.org/2001/XMLSchema"
                targetNamespace="http://product.mycompany.com/groupsfault_v1.xsd"
                xmlns:tns="http://product.mycompany.com/groupsfault_v1.xsd">
          <complexType name="groupsFault">
            <sequence>
              <element name="code" type="int"/>
              <element name="message" type="string"/>
            </sequence>
          </complexType>
        </schema>

    Generated .NET fault object:

        [System.Diagnostics.DebuggerStepThroughAttribute()]
        [System.CodeDom.Compiler.GeneratedCodeAttribute("System.Runtime.Serialization", "3.0.0.0")]
        [System.Xml.Serialization.XmlSchemaProviderAttribute("ExportSchema")]
        [System.Xml.Serialization.XmlRootAttribute(IsNullable=false)]
        public partial class groupFault : object, System.Xml.Serialization.IXmlSerializable
        {
            private System.Xml.XmlNode[] nodesField;

            private static System.Xml.XmlQualifiedName typeName =
                new System.Xml.XmlQualifiedName("groupFault", "http://sicp.services.siag.it/groups_v1.wsdl");

            public System.Xml.XmlNode[] Nodes
            {
                get { return this.nodesField; }
                set { this.nodesField = value; }
            }

            public void ReadXml(System.Xml.XmlReader reader)
            {
                this.nodesField = System.Runtime.Serialization.XmlSerializableServices.ReadNodes(reader);
            }

            public void WriteXml(System.Xml.XmlWriter writer)
            {
                System.Runtime.Serialization.XmlSerializableServices.WriteNodes(writer, this.Nodes);
            }

            public System.Xml.Schema.XmlSchema GetSchema()
            {
                return null;
            }

            public static System.Xml.XmlQualifiedName ExportSchema(System.Xml.Schema.XmlSchemaSet schemas)
            {
                System.Runtime.Serialization.XmlSerializableServices.AddDefaultSchema(schemas, typeName);
                return typeName;
            }
        }

    Is this OK? I'd expect an object with properties for "code" and "message" such that you can throw it using something like:

        throw new FaultException<groupFault>(new groupFault { code=100, message="error" });

    (sorry for the lower-case type definitions, but this is generated code from the WSDL). Why doesn't svcutil.exe generate those properties? Some sources on the web suggested adding the /useSerializerForFaults option. I tried it; it doesn't work, giving me an exception that the fault type is missing on the wsdl:portType declaration. Validation with several other tools succeeded, however. Any help is VERY appreciated :) thx
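    A hedged aside, not from the original post: the XSD above declares only a complexType, and svcutil generally needs the fault bound to a schema element and declared on the wsdl:operation before it will generate a typed fault class. A sketch of what that declaration usually looks like in WSDL 1.1; all names here are hypothetical:

        <!-- XSD: an element wrapping the fault type -->
        <element name="groupsFault" type="tns:groupsFault"/>

        <!-- WSDL: a message for the fault, plus the declaration on the operation -->
        <wsdl:message name="groupsFaultMessage">
          <wsdl:part name="detail" element="flt:groupsFault"/>
        </wsdl:message>
        <wsdl:portType name="GroupsPortType">
          <wsdl:operation name="getGroups">
            <wsdl:input message="tns:getGroupsRequest"/>
            <wsdl:output message="tns:getGroupsResponse"/>
            <wsdl:fault name="groupsFault" message="tns:groupsFaultMessage"/>
          </wsdl:operation>
        </wsdl:portType>

    The binding section would then need a matching <soap:fault> entry for the same fault name.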

  • How can I override the inbound message validation with CXF?

    - by user1648330
    When I input non-numeric content into an Integer-typed field, I get the fault message 'Not a number: 0.012A'. How can I hook in prior to the unmarshalling/schema validation and output custom error messages? I use CXF 2.6.1 and have schema validation enabled in cxf-spring.xml with:

        <entry key="schema-validation-enabled" value="true"></entry>

    The stack trace:

        java.lang.RuntimeException: Not a number: 0.012A
            at com.jiemai.jmservice.handlers.ValidationEventHandler.handleEvent(ValidationEventHandler.java:19)
            at org.apache.cxf.jaxb.io.DataReaderImpl$WSUIDValidationHandler.handleEvent(DataReaderImpl.java:78)
            at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext.handleEvent(UnmarshallingContext.java:655)
            at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext.handleError(UnmarshallingContext.java:691)
            at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext.handleError(UnmarshallingContext.java:687)
            at com.sun.xml.bind.v2.runtime.unmarshaller.Loader.handleParseConversionException(Loader.java:271)
            at com.sun.xml.bind.v2.runtime.unmarshaller.LeafPropertyLoader.text(LeafPropertyLoader.java:69)
            at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallingContext.text(UnmarshallingContext.java:514)
            at com.sun.xml.bind.v2.runtime.unmarshaller.InterningXmlVisitor.text(InterningXmlVisitor.java:93)
            at com.sun.xml.bind.v2.runtime.unmarshaller.StAXStreamConnector.processText(StAXStreamConnector.java:338)
            at com.sun.xml.bind.v2.runtime.unmarshaller.StAXStreamConnector.handleEndElement(StAXStreamConnector.java:216)
            at com.sun.xml.bind.v2.runtime.unmarshaller.StAXStreamConnector.bridge(StAXStreamConnector.java:185)
            at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal0(UnmarshallerImpl.java:370)
            at com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal(UnmarshallerImpl.java:349)
            at org.apache.cxf.jaxb.JAXBEncoderDecoder.doUnmarshal(JAXBEncoderDecoder.java:784)
            at org.apache.cxf.jaxb.JAXBEncoderDecoder.access$100(JAXBEncoderDecoder.java:97)
            at org.apache.cxf.jaxb.JAXBEncoderDecoder$1.run(JAXBEncoderDecoder.java:812)
            at java.security.AccessController.doPrivileged(Native Method)
            at org.apache.cxf.jaxb.JAXBEncoderDecoder.unmarshall(JAXBEncoderDecoder.java:810)
            at org.apache.cxf.jaxb.JAXBEncoderDecoder.unmarshall(JAXBEncoderDecoder.java:644)
            at org.apache.cxf.jaxb.io.DataReaderImpl.read(DataReaderImpl.java:157)
            at org.apache.cxf.interceptor.DocLiteralInInterceptor.handleMessage(DocLiteralInInterceptor.java:108)
            at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:262)
            at org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:122)
            at org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:211)
            at org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:213)
            at org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:193)
            at org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:129)
            at org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:187)
            at org.apache.cxf.transport.servlet.AbstractHTTPServlet.doPost(AbstractHTTPServlet.java:110)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:710)
            at org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:166)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:263)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844)
            at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:584)
            at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
            at java.lang.Thread.run(Unknown Source)
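    The first frame of the trace suggests a custom JAXB ValidationEventHandler is already registered. For reference, a minimal sketch of one that rewords the raw 'Not a number' text before it escapes (class name and wording are hypothetical, not the asker's actual code):

        import javax.xml.bind.ValidationEvent;
        import javax.xml.bind.ValidationEventHandler;

        public class CustomValidationHandler implements ValidationEventHandler {
            @Override
            public boolean handleEvent(ValidationEvent event) {
                String location = (event.getLocator() != null)
                        ? " (line " + event.getLocator().getLineNumber() + ")"
                        : "";
                // Wrap the low-level JAXB message in user-facing text.
                System.err.println("Rejected input" + location + ": " + event.getMessage());
                // false = abort unmarshalling and raise a fault; true = keep going.
                return false;
            }
        }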

  • How to maximise the largest contiguous block of memory in the Large Object Heap

    - by Unsliced
    The situation is that I am making a WCF call to a remote server which returns an XML document as a string. Most of the time this return value is a few K, sometimes a few dozen K, very occasionally a few hundred K, but very rarely it could be several megabytes (the first problem is that there is no way for me to know). It's these rare occasions that are causing grief. I get a stack trace that starts:

        System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
           at System.Xml.BufferBuilder.AddBuffer()
           at System.Xml.BufferBuilder.AppendHelper(Char* pSource, Int32 count)
           at System.Xml.BufferBuilder.Append(Char[] value, Int32 start, Int32 count)
           at System.Xml.XmlTextReaderImpl.ParseText()
           at System.Xml.XmlTextReaderImpl.ParseElementContent()
           at System.Xml.XmlTextReaderImpl.Read()
           at System.Xml.XmlTextReader.Read()
           at System.Xml.XmlReader.ReadElementString()
           at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationReaderMDRQuery.Read2_getMarketDataResponse()
           at Microsoft.Xml.Serialization.GeneratedAssembly.ArrayOfObjectSerializer2.Deserialize(XmlSerializationReader reader)
           at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle, XmlDeserializationEvents events)
           at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle)
           at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall)
           at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)

    I've read around and it is because the Large Object Heap is just getting too fragmented, so even preceding the call with a quick check to StringBuilder.EnsureCapacity just causes the OutOfMemoryException to be thrown earlier (and because I'm guessing at what's needed, it might not actually need that much, so my check is causing more problems than it is solving). Some opinions are that there's not much I can do about it. Some of the questions I've asked myself:

    - Use less memory -- have you checked for leaks? Yes. The memory usage goes up and down, but there's no fundamental growth that guarantees this to happen. Some of the times it fails, it had succeeded at that stage previously.
    - Transfer smaller amounts? Not an option; this is a third-party web service over which I have no control (or at least it would take a long time to resolve, and in the meantime I still have a problem).
    - Can you do something to the LOH to make it less likely to fail? ... now this is the most fruitful course. It's a 32-bit process (it has to be, for various political, technical and boring reasons) but there's normally hundreds of meg free (multiples of the largest amount for which we've seen failures).
    - Can we monitor the LOH? Using perfmon I can track the size of the heaps, but I don't think there's a way to monitor the largest available contiguous block of memory.

    Question is: any advice or suggestions for things to try?
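    One mitigation worth mentioning (my suggestion, not from the post): .NET's MemoryFailPoint gate throws InsufficientMemoryException up front if the CLR does not expect to be able to satisfy an allocation of the given size, which at least turns a crash mid-deserialization into a graceful failure. A sketch; the proxy call and the 64 MB figure are assumptions:

        using System;
        using System.Runtime;

        static class MemoryGuard
        {
            // Run an expensive call only if 'sizeMb' megabytes look available.
            public static T Run<T>(Func<T> call, int sizeMb) where T : class
            {
                try
                {
                    using (new MemoryFailPoint(sizeMb))
                    {
                        return call();
                    }
                }
                catch (InsufficientMemoryException)
                {
                    // Degrade gracefully instead of dying inside the serializer.
                    return null;
                }
            }
        }

        // usage (proxy and request are hypothetical):
        // var response = MemoryGuard.Run(() => proxy.getMarketData(request), 64);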

  • Strange problems with PHP SOAP (private variable does not persist + variables passed from the client do not work)

    - by Tamas Gal
    I have some very strange problems in a PHP SOAP implementation. I have a private variable in the server class which contains the DB name for further reference. The private variable's name is "fromdb". I have a public function on the SOAP server where I can set this variable: $client->setFromdb(). When I call it from my client it works perfectly and the fromdb private variable is set. But on a second SOAP client call this private variable loses its value... Here is my SOAP server setup:

        ini_set('soap.wsdl_cache_enabled', 0);
        ini_set('session.auto_start', 0);
        ini_set('always_populate_raw_post_data', 1);
        global $config_dir;
        session_start();
        /*if(!$HTTP_RAW_POST_DATA){
            $HTTP_RAW_POST_DATA = file_get_contents('php://input');
        }*/
        $server = new SoapServer("{$config_dir['template']}import.wsdl");
        $server->setClass('import');
        $server->setPersistence(SOAP_PERSISTENCE_SESSION);
        $server->handle();

    The other problem is that I passed this to the server (the XML fragments in the string literals were stripped from the original post):

        $client = new SoapClient('http://import.ingatlan.net/wsdl', array('trace' => 1));
        $xml='';
        $xml.='';
        $xml.='';
        $xml.='';
        $xml.='Valaki';
        $xml.='';
        $xml.='';
        $xml.='';
        $xml.='';
        $tarray = array("type" => 1, "xml" => $xml);
        try {
            $s = $client->sendXml( $tarray );
            print "$s";
        } catch( SOAPFault $exception){
            print "--- SOAP exception :{$exception}---";
            print "LAST REQUEST :";
            var_dump($client->__getLastRequest());
            print "---";
            print "LAST RESPONSE :".$client->__getLastResponse();
        }

    So I passed an array of information to the server. Then I got this exception:

        LAST REQUEST :
        <?xml version="1.0" encoding="UTF-8"?>
        <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"><SOAP-ENV:Body><type>Array</type><xml/></SOAP-ENV:Body></SOAP-ENV:Envelope>

    Can you see the word Array between the type tags? It seems the client only passed a reference or something like this. So I totally missed :(
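    A hedged reading of the request dump, not from the post itself: in WSDL mode PHP's SoapClient maps call arguments to message parts positionally, so the single associative array got string-cast into the first (type) part, hence the literal "Array". A sketch, assuming the WSDL declares sendXml with two parts, type and xml:

        // pass the parts as separate arguments...
        $s = $client->sendXml(1, $xml);

        // ...or the equivalent explicit form:
        $s = $client->__soapCall('sendXml', array(1, $xml));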

  • If I package a web application with the Maven goal war:exploded, why won't pom.xml and pom.properties be included?

    - by Bernhard V
    Hi, I'm pretty new to Maven and I've noticed an interesting thing in the Maven WAR Plugin. When I package my Java web application with war:war, a zipped war is created. This war contains also the files pom.xml and pom.properties in the META-INF directory. But if I package my application with war:exploded and create an exploded war directory, those two files won't be included. Now I'm curious, why the pom.xml and the pom.properties aren't packaged into the exploded war. Besides those two files the contents of the exploded and the zipped war are equal. Is there a reason why the plugin omits pom.xml and pom.properties from the exploded war?
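    A related, hedged aside (my addition, not from the question): the maven-archiver component that writes those two files into META-INF/maven can be switched off even for the zipped WAR, which hints that they are produced by the archiving step that war:exploded skips. A sketch:

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-war-plugin</artifactId>
          <configuration>
            <archive>
              <!-- controls whether pom.xml/pom.properties land in META-INF/maven -->
              <addMavenDescriptor>false</addMavenDescriptor>
            </archive>
          </configuration>
        </plugin>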

  • Using the normalize-space XPath function from an SQL XML query?

    - by Ross Watson
    Hi, is it possible to run an SQL query with an XPath "where" clause, and to trim trailing spaces before the comparison? I have an SQL XML column, in which I have XML nodes with attributes that contain trailing spaces. I would like to find a given record which has a specified attribute value -- without the trailing spaces. When I try, I get:

        There is no function '{http://www.w3.org/2004/07/xpath-functions}:normalize-space()'

    I have tried the following (query 1 works, query 2 doesn't). This is on SQL 2005.

        declare @test table (data xml)
        insert into @test values ('<thing xmlns="http://my.org.uk/Things" x="hello " />')

        -- query 1
        ;with xmlnamespaces ('http://my.org.uk/Things' as ns0)
        select * from @test
        where data.exist('ns0:thing[@x="hello "]') != 0

        -- query 2
        ;with xmlnamespaces ('http://my.org.uk/Things' as ns0)
        select * from @test
        where data.exist('ns0:thing[normalize-space(@x)="hello "]') != 0

    Thanks for any help, Ross
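    A workaround sketch (mine, not from the post): extract the attribute with the value() method and trim it in T-SQL instead of inside the XPath, sidestepping the unsupported function; the varchar length is an assumption:

        ;with xmlnamespaces ('http://my.org.uk/Things' as ns0)
        select *
        from @test
        -- [1] makes the XPath a singleton, as value() requires
        where rtrim(data.value('(/ns0:thing/@x)[1]', 'varchar(100)')) = 'hello'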

  • How to create a TableLayout from XML programmatically?

    - by user164589
    Hi guys, I am trying to add a TableLayout defined in an XML resource to a LinearLayout programmatically. The number of TableLayouts added is dynamic: it would be between 1 and 10. I think it is better to build it from the resource XML, because then the design doesn't break. How do I do it? I don't actually know how to create a new instance from the XML. Please help, guys. Thanks in advance.
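    A sketch of the usual pattern (the resource id, count, and container names are hypothetical): inflate the XML definition once per table and attach each instance to the LinearLayout:

        import android.view.LayoutInflater;
        import android.widget.LinearLayout;
        import android.widget.TableLayout;

        // somewhere with access to a Context and the target LinearLayout:
        LayoutInflater inflater = LayoutInflater.from(context);
        for (int i = 0; i < tableCount; i++) {           // 1..10, decided at runtime
            TableLayout table = (TableLayout) inflater.inflate(
                    R.layout.my_table,   // hypothetical layout resource
                    container,           // the LinearLayout, used for layout params
                    false);              // inflate without attaching yet
            container.addView(table);
        }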

  • How to verify an XML digital signature in Cocoa?

    - by Geoff Smith
    I have a C# application that uses XML digital signatures to sign license files. I've used the standard Microsoft approach described here. I'm porting the application to the Mac and need to verify the signature. My general question is: how best to do this? This is what I've done:

    1. Used MacPorts to install Aleksey's xmlsec1 library.
    2. Used the Chilkat library to convert my XML public key to a PEM file:

        Chilkat.PublicKey pubKey = new Chilkat.PublicKey();
        pubKey.LoadXml(publicKeyXml);
        pubKey.SaveOpenSslPemFile("publicKey.pem");

    3. Compiled and ran Aleksey's sample program (see http://www.aleksey.com/xmlsec/api/xmlsec-verify-with-key.html) to verify an XML dsig.

    Result: my license files fail to validate. The call to xmlSecDSigCtxVerify fails with status=unknown. Now for my specific question: what can I do next? Geoff
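    A hedged first step before debugging the C code: verify the file with the xmlsec1 command-line tool to separate key problems from API-usage problems (file names here are assumptions):

        xmlsec1 --verify --pubkey-pem publicKey.pem license.xml

    If the command line verifies but the program does not, the problem is in how the key or context is set up in code rather than in the key conversion.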

  • How do I generate a coverage XML report for a single package?

    - by Wraith
    I'm using nose and coverage to generate coverage reports. I only have one package right now, ae, so I specify to cover only that:

        nosetests -w tests/unit --with-xunit --with-coverage --cover-package=ae

    And here are the results, which look good:

        Name       Stmts   Exec  Cover   Missing
        ----------------------------------------------
        ae             1      1   100%
        ae.util      253    224    88%   39, 63-65, 284, 287, 362, 406
        ----------------------------------------------
        TOTAL        263    234    88%
        ----------------------------------------------------------------------
        Ran 68 tests in 5.292s

    However, when I run coverage xml, coverage pulls in more packages than necessary, including the Python email and logging packages, which have nothing to do with my code. If I run coverage xml ae, I get this error:

        No source for code: '/home/wraith/dev/projects/trimurti/src/ae': [Errno 21] Is a directory: '/home/wraith/dev/projects/trimurti/src/ae'

    Is there a way to generate the XML for just the ae package?
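    A hedged suggestion (whether these flags are accepted depends on the coverage.py version in use): the xml command takes path-pattern filters rather than a positional package argument:

        # include only the ae package in the XML report
        coverage xml --include='ae/*'

        # or, approaching it from the other side, drop foreign paths
        coverage xml --omit='/usr/*'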

  • How to pass an XML file to lxml to parse?

    - by BeeBand
    I'm trying to parse an XML file using lxml. xml.etree allowed me to simply pass the file name as a parameter to the parse function, so I attempted to do the same with lxml. My code:

        from lxml import etree
        from lxml import objectify

        file = "C:\Projects\python\cb.xml"
        tree = etree.parse(file)

    But I get the error:

        Traceback (most recent call last):
          File "cb.py", line 5, in <module>
            tree = etree.parse(file)
          File "lxml.etree.pyx", line 2698, in lxml.etree.parse (src/lxml/lxml.etree.c:49590)
          File "parser.pxi", line 1491, in lxml.etree._parseDocument (src/lxml/lxml.etree.c:71205)
          File "parser.pxi", line 1520, in lxml.etree._parseDocumentFromURL (src/lxml/lxml.etree.c:71488)
          File "parser.pxi", line 1420, in lxml.etree._parseDocFromFile (src/lxml/lxml.etree.c:70583)
          File "parser.pxi", line 975, in lxml.etree._BaseParser._parseDocFromFile (src/lxml/lxml.etree.c:67736)
          File "parser.pxi", line 539, in lxml.etree._ParserContext._handleParseResultDoc (src/lxml/lxml.etree.c:63820)
          File "parser.pxi", line 625, in lxml.etree._handleParseResult (src/lxml/lxml.etree.c:64741)
          File "parser.pxi", line 565, in lxml.etree._raiseParseError (src/lxml/lxml.etree.c:64084)
        lxml.etree.XMLSyntaxError: AttValue: " or ' expected, line 2, column 26

    What am I doing wrong?
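    Two observations, offered as a hedged aside: the traceback shows lxml did open the file (the failure happens inside _parseDocFromFile), and "AttValue: \" or ' expected" means an attribute value at line 2, column 26 of cb.xml is missing its quotes -- the XML itself is malformed. Separately, a raw string avoids backslash-escape surprises in Windows paths:

        from lxml import etree

        # raw string: backslashes in the path are taken literally
        # (an aside; the XMLSyntaxError above comes from the file's content)
        tree = etree.parse(r"C:\Projects\python\cb.xml")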

  • Is it possible to mix orm.xml definitions with annotations when using JPA/Hibernate?

    - by hatchetman82
    I'm working on an application which supports several DB vendors, with the table definitions being different for each DB type. The trouble is that the column definitions are not what Hibernate expects, and so my entities contain a lot of @Column(..columnDefinition="..."...) annotations. To further complicate the issue, there's no way to specify a columnDefinition per DB. So I tried moving just the columnDefinition bits to an orm.xml file, and I have a Maven profile that bundles the correct file. JBoss 4.0.5/Hibernate 3.2.0GA fails to validate, and it seems it completely disregards the annotations once given an XML file. Is there a way to make Hibernate "merge" the data from the XML with the annotations?
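    For reference, a sketch of how a per-database orm.xml override is supposed to look in JPA (entity and field names are hypothetical): with metadata-complete left false, annotations still apply and the XML only overrides what it names. Whether Hibernate 3.2.0GA actually honors this merge is exactly what's in question here:

        <entity-mappings xmlns="http://java.sun.com/xml/ns/persistence/orm"
                         version="1.0">
          <entity class="com.example.MyEntity" metadata-complete="false">
            <attributes>
              <basic name="payload">
                <!-- only the vendor-specific DDL fragment lives in XML -->
                <column name="PAYLOAD" column-definition="CLOB"/>
              </basic>
            </attributes>
          </entity>
        </entity-mappings>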

  • How to find values between XML tags in Python?

    - by Harshit Sharma
    I am using the Google site to retrieve weather information, and I want to find values between XML tags. The following code gives me the weather condition for a city, but I am unable to obtain other parameters such as temperature. Also, if possible, please explain the working of the split function applied in the code:

        import urllib

        def getWeather(city):
            # create google weather api url
            url = "http://www.google.com/ig/api?weather=" + urllib.quote(city)
            try:
                # open google weather api url
                f = urllib.urlopen(url)
            except:
                # if there was an error opening the url, return
                return "Error opening url"
            # read contents to a string
            s = f.read()
            # extract weather condition data from xml string
            weather = s.split("<current_conditions><condition data=\"")[-1].split("\"")[0]
            # if there was an error getting the condition, the city is invalid
            if weather == "<?xml version=":
                return "Invalid city"
            # return the weather condition
            return weather

        def main():
            while True:
                city = raw_input("Give me a city: ")
                weather = getWeather(city)
                print(weather)

        if __name__ == "__main__":
            main()

    Thank You
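    To unpack the split chain, and (as an assumption about the feed's layout) how the same trick could pull the temperature from a <temp_f data="..."/> tag:

        s = '<current_conditions><condition data="Cloudy"/><temp_f data="61"/>'

        # split(marker)[-1] keeps everything AFTER the marker string;
        # the second split('"')[0] keeps everything BEFORE the next quote.
        condition = s.split('<condition data="')[-1].split('"')[0]   # 'Cloudy'

        # same pattern, different marker (tag name assumed from the feed):
        temp_f = s.split('<temp_f data="')[-1].split('"')[0]         # '61'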

  • Can we create a class from an XML file?

    - by panzerschreck
    Hello, is it possible to create a class dynamically by reading an XML file (in Java, preferably)? If yes, please provide pointers on how to do it. In the process of development, we have come up with a class that has 5 attributes, and all of these attributes correspond to an entry in the XML file. Now, if the user adds or modifies an XML entry, the object corresponding to it must change automatically. One approach would be to generate the source code before compile time. Is there any other way? Is there any common pattern to model such changes in the system? Thanks,
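    One codegen-free alternative (my suggestion, not from the post): if the attributes only need to be read and refreshed at runtime, java.util.Properties can load them straight from an XML file on every change, at the cost of using the Properties XML format rather than a custom schema. A sketch; the file and key names are hypothetical:

        import java.io.FileInputStream;
        import java.io.IOException;
        import java.util.Properties;

        Properties config = new Properties();
        try (FileInputStream in = new FileInputStream("config.xml")) {
            // expects the java.util.Properties XML DTD, not arbitrary XML
            config.loadFromXML(in);
        } catch (IOException e) {
            throw new RuntimeException("cannot reload configuration", e);
        }
        String attribute1 = config.getProperty("attribute1", "default");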

  • How to feed an XML database with tags obtained through HTML forms?

    - by blaise1
    Hello! Not a programmer, I am beginning with XML, HTML forms, and XSLT on the Mac. I plan to use a form to post short texts in an XHTML page and invite end users to add annotations to the said text. The users would select a specific part of the posted text, and each annotation would stand for one specific chain of characters. My goal is to consolidate the tags obtained from various users' annotations into one XML "knowledge base" containing the original text with all the revision indicators. Then I plan to use XSLT sheets to produce various reports based on the tags obtained. My two questions are: 1. Am I dreaming? Is it really possible to do that with XML, XForms and XSLT, without using Java, PHP, Ajax or other seasoned programmers' tools? 2. What should be my focus for further explorations aiming in that direction? Which schemas, events, sequences should I study? Thank you in advance. Please excuse my English. Blaise

  • Missing archetype.xml in new archetype created from project.

    - by sdoca
    Hi, I am using Maven 2.2.1 and am new to Maven. I am trying to create an archetype based on an existing project, using this blog post as a guide: http://blog.inflinx.com/category/m2eclipse/ Step 7 says, "The next step is to verify that the generated archetype.xml (located under src/main/resources/META-INF/maven) has all the files listed and they are located in the right directories." My generated archetype does not have an archetype.xml. It does have an archetype-metadata.xml file, though. There aren't any errors in the output from calling archetype:create-from-project, just a couple of warnings about using platform encoding (Cp1252). Any thoughts on why this file wasn't generated? Thanks in advance!

  • SQL Server 2008 vs 2005 UDF XML performance problem

    - by user344495
    OK, we have a simple UDF that takes an XML integer list and returns a table:

        CREATE FUNCTION [dbo].[udfParseXmlListOfInt]
        (
            @ItemListXml XML (dbo.xsdListOfInteger)
        )
        RETURNS TABLE
        AS
        RETURN
        (
            --- parses the XML and returns it as an int table ---
            SELECT ListItems.ID.value('.','INT') AS KeyValue
            FROM @ItemListXml.nodes('//list/item') AS ListItems(ID)
        )

    In a stored procedure we create a temp table using this UDF:

        INSERT INTO @JobTable (JobNumber, JobSchedID, JobBatID, StoreID, CustID,
                               CustDivID, BatchStartDate, BatchEndDate, UnavailableFrom)
        SELECT JOB.JobNumber, JOB.JobSchedID, ISNULL(JOB.JobBatID,0), STO.StoreID,
               STO.CustID, ISNULL(STO.CustDivID,0), AVL.StartDate, AVL.EndDate,
               ISNULL(AVL.StartDate, DATEADD(day, -8, GETDATE()))
        FROM dbo.udfParseXmlListOfInt(@JobNumberList) TMP
        INNER JOIN dbo.JobSchedule JOB ON (JOB.JobNumber = TMP.KeyValue)
        INNER JOIN dbo.Store STO ON (STO.StoreID = JOB.StoreID)
        INNER JOIN dbo.JobSchedEvent EVT ON (EVT.JobSchedID = JOB.JobSchedID AND EVT.IsPrimary = 1)
        LEFT OUTER JOIN dbo.Availability AVL ON (AVL.AvailTypID = 5 AND AVL.RowID = JOB.JobBatID)
        ORDER BY JOB.JobSchedID;

    For a simple list of 10 JobNumbers this returns in less than 1 second on SQL 2005; on 2008, run against the exact same data, it returns in 7 minutes. This is on a much faster machine with more memory. Any ideas?
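    One thing worth trying (a guess grounded in how the '//' axis works, not a confirmed fix for the 2008 regression): if the items always sit directly under the root, an absolute path spares the optimizer the full descendant scan that '//list/item' requests:

        SELECT ListItems.ID.value('.', 'INT') AS KeyValue
        FROM @ItemListXml.nodes('/list/item') AS ListItems(ID)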

  • Firefox: how to apply XSLT from a plugin to the opened XML document? (how to replace the document with another)

    - by aloispaulin
    Hi! I've developed a plug-in for Firefox that can read and manipulate the content of the currently opened document. I would like to apply an XSLT to the document in case it is XML. I have no problem reading the XML document and applying an XSLT to it in memory; however, I have no idea how to replace the existing document with the newly created one. Thus, I see two possible scenarios: (a) I replace the document with the result of the transformation, or (b) I apply the XSLT directly to the XML. All my attempts to realize either solution have failed... Hoping the community can provide help! Many tnx in advance! Alois
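    A sketch of scenario (a) using Firefox's built-in XSLTProcessor (variable names are assumptions: xslDoc is the stylesheet the extension loaded, doc is the displayed XML document). Whether chrome code is permitted to replace the content document this way would still need verifying:

        // apply the stylesheet in memory
        var processor = new XSLTProcessor();
        processor.importStylesheet(xslDoc);
        var fragment = processor.transformToFragment(doc, doc);

        // swap the document's root element for the transformed result
        doc.replaceChild(fragment.firstChild, doc.documentElement);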

  • Is opening and closing of the factory controlled by web.xml?

    - by akshay
    This post is related to the post "InvalidStateException while trying to enter data into DB". Do I need to put some entries in web.xml? Does web.xml control the opening and closing of the factory? I saw the following entries in the web.xml of another, similar project:

        <resource-ref>
          <res-ref-name>jms/XYConnectionFactory</res-ref-name>
          <res-type>javax.jms.ConnectionFactory</res-type>
          <res-auth>Container</res-auth>
          <res-sharing-scope>Unshareable</res-sharing-scope>
        </resource-ref>
        <resource-env-ref>
          <resource-env-ref-name>rep/xyAppConfig</resource-env-ref-name>
          <resource-env-ref-type>java.util.Map</resource-env-ref-type>
        </resource-env-ref>

    What do these entries do?

  • Is an embedded resource a good approach for a read-only XML database?

    - by Nasser Hajloo
    I have an open source application (here). This application takes a character or a sentence and gives some Unicode information about it. I use the Unicode Character Database provided by Unicode.org, which is an XML document (130 MB). At first I embedded this XML in my DLL, but I don't know whether that is a good approach, because the DLL size grows just because of this XML document. I could use it like any other resource, but then the user can see it. What should I do? What is the best pattern for this, and why? TIA

  • Can Eclipse not hard-code ECLIPSE_HOME when exporting build.xml?

    - by stevex
    I have an Eclipse project that I'm attempting to set up to build both with Eclipse and externally with Ant. It seems like a good way to do this is to have Eclipse generate a build.xml file that I can then use with Ant. I'd like to set it up so the build.xml can be regenerated from Eclipse whenever the need arises, which means no hand-editing the build.xml file. But Eclipse writes one entry in there that has a hard-coded path to a directory on my computer, which makes it unsuitable for checking in to a source repository. Specifically, it's this entry that's the trouble:

        <property name="ECLIPSE_HOME" value="D:/Eclipse/Eclipse Galileo (3.5) SR1"/>

    Is there some way to have Eclipse not output this line, or to make it a relative reference or something that makes sense to check in?
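    If a little post-processing or a wrapper were acceptable (my suggestion, not from the question), Ant's first-definition-wins property rule offers a workaround: a value supplied earlier -- from the command line or from a local, uncommitted properties file -- silently beats the hard-coded one. A sketch; the file name is hypothetical:

        <!-- loaded first, so values here win over the generated ECLIPSE_HOME below -->
        <property file="build.local.properties"/>
        <property name="ECLIPSE_HOME" value="D:/Eclipse/Eclipse Galileo (3.5) SR1"/>

    Or, without any file at all: ant -DECLIPSE_HOME=/path/to/eclipse, since command-line definitions take precedence over everything in the script.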

  • Workaround for datadude deployment bug - NullReferenceException

    - by jamiet
    I have come across a bug in Visual Studio 2010 Database Projects (aka datadude aka DPro aka Visual Studio Database Development Tools aka Visual Studio Team Edition for Database Professionals aka Juneau aka SQL Server Data Tools) that other people may encounter, so for the purposes of googling I'm writing this blog post about it.

    Through my own googling I discovered that a Connect bug had already been raised about it (VS2010 Database project deploy - "SqlDeployTask" task failed unexpectedly, NullReferenceException), and coincidentally enough it was raised by my former colleague Tom Hunter (whom I have mentioned here before as the superhuman Tom Hunter), although it has not (at this time) received a reply from Microsoft. Tom provided a repro, namely that this syntactically valid function definition:

        CREATE FUNCTION [dbo].[Function1]()
        RETURNS TABLE
        AS
        RETURN (
           WITH cte AS (
           SELECT 1 AS [c1]
           FROM [$(Database3)].[dbo].[Table1]
           )
           SELECT 1 AS [c1]
           FROM cte
        )

    would produce this nasty, unhelpful error upon deployment:

        C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\TeamData\Microsoft.Data.Schema.TSqlTasks.targets(120,5): Error MSB4018: The "SqlDeployTask" task failed unexpectedly.
        System.NullReferenceException: Object reference not set to an instance of an object.
           at Microsoft.Data.Schema.Sql.SchemaModel.SqlModelComparerBase.VariableSubstitution(SqlScriptProperty propertyValue, IDictionary`2 variables, Boolean& isChanged)
           at Microsoft.Data.Schema.Sql.SchemaModel.SqlModelComparerBase.ArePropertiesEqual(IModelElement source, IModelElement target, ModelPropertyClass propertyClass, ModelComparerConfiguration configuration)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareProperties(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonChangeDefinition changes)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithoutCompareName(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithSameType(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean ignoreComparingName, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, Boolean compareFromRootElement, ModelComparisonChangeDefinition& changes)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareChildren(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareParentElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes, Boolean isComposing)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithoutCompareName(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithSameType(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean ignoreComparingName, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, Boolean compareFromRootElement, ModelComparisonChangeDefinition& changes)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareChildren(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareParentElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes, Boolean isComposing)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithoutCompareName(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithSameType(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean ignoreComparingName, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, Boolean compareFromRootElement, ModelComparisonChangeDefinition& changes)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareAllElementsForOneType(ModelElementClass type, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean compareOrphanedElements)
           at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareStore(ModelStore source, ModelStore target, ModelComparerConfiguration configuration)
           at Microsoft.Data.Schema.Build.SchemaDeployment.CompareModels()
           at Microsoft.Data.Schema.Build.SchemaDeployment.PrepareBuildPlan()
           at Microsoft.Data.Schema.Build.SchemaDeployment.Execute(Boolean executeDeployment)
           at Microsoft.Data.Schema.Build.SchemaDeployment.Execute()
           at Microsoft.Data.Schema.Tasks.DBDeployTask.Execute()
           at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute()
           at Microsoft.Build.BackEnd.TaskBuilder.ExecuteInstantiatedTask(ITaskExecutionHost taskExecutionHost, TaskLoggingContext taskLoggingContext, TaskHost taskHost, ItemBucket bucket, TaskExecutionMode howToExecuteTask, Boolean& taskResult)
        Done executing task "SqlDeployTask" -- FAILED.
        Done building target "DspDeploy" in project "Lloyds.UKTax.DB.UKtax.dbproj" -- FAILED.
        Done executing task "CallTarget" -- FAILED.
        Done building target "DBDeploy" in project

    It turns out there is a certain set of circumstances that need to be met for this error to occur:

    - The object being deployed is an inline function (the bug may also exist for multistatement and scalar functions - I haven't tested that)
    - That object includes SQLCMD variable references
    - The object has already been deployed successfully

    Just to reiterate that last bullet point, the error does not occur when you deploy the function for the first time, only on the subsequent deployment.

    Luckily I have a direct line into a guy on the development team, so I fired off an email on Friday evening and today (Monday) I received a reply telling me that there is a simple fix: one simply has to remove the parentheses that wrap the SQL statement. So, in the case of Tom's repro, the function definition simply has to be changed to:

        CREATE FUNCTION [dbo].[Function1]()
        RETURNS TABLE
        AS
        RETURN --(
           WITH cte AS (
           SELECT 1 AS [c1]
           FROM [$(Database3)].[dbo].[Table1]
           )
           SELECT 1 AS [c1]
           FROM cte
        --)

    I have commented out the offending parentheses rather than removing them just to emphasize the point. Thereafter the function will deploy fine.

    I tested this out on my own project this morning and can confirm that this fix does indeed work.

    I have been told that the bug CAN be reproduced in the Release Candidate (RC) 0 build of SQL Server Data Tools in SQL Server 2010, so am hoping that a fix makes it in for the Release-To-Manufacturing (RTM) build.

    Hope this helps
    @jamiet

  • In Which We Demystify A Few Docupresentment Settings And Learn the Ethos of the Author

    - by Andy Little
    It's no secret that Docupresentment (part of the Oracle Documaker suite) is a powerful tool for integrating on-demand and interactive applications for publishing with the Oracle Documaker framework. It's also no secret there are many details with respect to the configuration of Docupresentment that can elude even the most erudite of techies. To be sure, Docupresentment will work for you right out of the box, and in most cases will suit your needs without toying with a configuration file. But, where's the adventure in that?

    With this inaugural post to That's The Way, I'm going to introduce myself, and what my aim is with this blog. If you didn't figure it out already by checking out my profile, my name is Andy and I've been with Oracle (nee Skywire Software nee Docucorp nee Formmaker) since the formative years of 1998. Strangely, it doesn't seem that long ago, but it's certainly a lifetime in the age of technology. I recall running a BBS from my parents' basement on a 1200 baud modem, and the trepidation and sweaty-palmed excitement of upgrading to the power and speed of 2400 baud! Fine, I'll admit that perhaps I'm inflating the experience a bit, but I was a kid! This is the stuff of War Games and King's Quest I and the demise of the TI-99/4A. Exciting times.

    So fast-forward a bit and I'm 12 years into a career in the world of document automation and publishing, working for the best (IMHO) software company on the planet. With That's The Way I hope to shed a little light on, and peek under the covers of, some of the more interesting aspects of implementations involving the tech space within the Oracle Insurance Global Business Unit (IGBU), which includes Oracle Documaker, Rating & Underwriting, and Policy Administration, to name a few. I may delve off course a bit, and you'll likely get a dose of humor (at least in my mind), but I hope you'll glean at least a tidbit of usefulness with each post. Feel free to comment, as I'm a fairly conversant guy and happy to talk -- it's stopping the talking that's the hard part...

    So, back to our regularly-scheduled post, already in progress. By this time you've visited Oracle's E-Delivery site and acquired your properly-licensed version of Oracle Documaker. Wait -- you didn't find it? Understandable -- navigating the voluminous download library within Oracle can be a daunting task. It's pretty simple once you've done it a few times. Log in to the e-delivery site, and accept the license terms and restrictions. Then you'll be able to select the Oracle Insurance Applications product pack and your appropriate platform. Click Go and you'll see a list of applicable products; click on Oracle Documaker Media Pack (as I went to press with this article the version is 11.4). Finally, click the Download button next to Docupresentment (again, the version at press time is 2.2 p5).

    This should give you a ZIP file that contains the installation packages for the Docupresentment Server and Client, cryptically named IDSServer22P05W32.exe and IDSClient22P05W32.exe. At this time, I'd like to take a little detour and explain that the world of Oracle, like most technical companies, is rife with acronyms. One of the reasons Skywire Software was appealing to Oracle was our use of many acronyms, including the occasional use of multiple acronyms with the same meaning. I apologize in advance and will try to point these out along the way.
    Here's your first sticky note to go along with that: IDS = Internet Document Server = Docupresentment.

    Once you've completed the installation, you'll have a shiny new Docupresentment server and client, and if you installed to the default location it will be living in c:\docserv. Unix users, I'm one of you! You'll find it by default in ~/docupresentment/docserv.

    Forging onward, the meat of this post is learning about some special configuration options. By now you've read the documentation included with the download (specifically ids_book.pdf), which goes into some detail on the rubric of the configuration file; in fact there's even a handy utility that provides an interface to the configuration file (see Running IDSConfig in the documentation). But who wants to deal with a configuration utility when we have the tools and technology to edit the file <gasp> by hand!

    I shall now proceed with the standard Information Technology Under the Hood Disclaimer: please remember to back up any files before you make changes. I am not responsible for any havoc you may wreak!

    Go to your installation directory and locate your docserv.xml file. Open it in your favorite XML editor. I happen to be fond of Notepad++ with the XML Tools plugin. Almost immediately you will behold the splendor of the configuration file. Just take a moment and let that sink in. OK -- moving on.

    If you reviewed the documentation, you know that inside the root <configuration> node there are multiple <section> nodes, each containing a specific group of settings. Let's take a look at <section name="DocumentServer">. There are a few entries I'd like to discuss.

    First, <entry name="StartCommand">. This should be pretty self-explanatory; it's the name of the executable that's run when you fire up Docupresentment. Immediately following that is <entry name="StartArguments">, and as you might imagine these are the arguments passed to the executable. A few things to point out:

    - The -Dids.configuration=docserv.xml parameter specifies the name of your configuration file.
    - The -Dlogging.configuration=logconf.xml parameter specifies the name of your logging configuration file (this uses log4j, so bone up on that before you delve here).
    - The -Djava.endorsed.dirs=lib/endorsed parameter specifies the path where 3rd-party Java libraries can be located for use with Docupresentment. More on that in another post.

    The <entry name="Instances"> setting allows you to specify the number of instances of Docupresentment that will be started. By default this is two, and generally two instances per CPU is adequate; however, you will always need to perform load testing to determine the sweet spot based on your hardware and types of transactions. You may have many, many more instances than 2.

    Time for a sidebar on instances: an instance is nothing more than a separate process of Docupresentment. The Docupresentment service that you fire up with docserver.bat or docserver.sh actually starts a watchdog process, which is then responsible for starting up the actual Docupresentment processes. Each of these acts independently from the others, so if one crashes, it does not affect any others. In the case of a crashed process, the watchdog will start up another instance so the number of configured instances is always running. Bottom line: instance = Docupresentment process.

    And now, finally, to the settings which gave me pause on a not-too-long-ago implementation!
    Docupresentment includes a feature that watches configuration files (such as docserv.xml and logconf.xml) and will automatically restart its instances to load the changes. You can configure the time that Docupresentment waits between checks of these files using the setting <entry name="FileWatchTimeMillis">. By default the number is 12000 ms, or 12 seconds. You can save yourself a few CPU cycles by extending this time, or disable the check altogether by setting the value to 0. This may or may not be appropriate for your environment; if you have 100% uptime requirements then you probably don't want to bring down an entire set of processes just to accept a new configuration value, so it's best to leave this somewhere between 12 seconds and a few minutes. Another point to keep in mind: if you are using Documaker real-time processing under Docupresentment, the Master Resource Library (MRL) files and INI options are cached, and if you need to effect a change, you'll have to "restart" Docupresentment. Touching the docserv.xml file is an easy way to do this (other methods include using the RSS request, but that's another post).

    The next item up: <entry name="FilePurgeTimeSeconds">. You may already know that the Docupresentment system can generate many temporary files based on certain request types that are processed through the system. What you may not know is how those files are cleaned up. There are many rules in Docupresentment that cause the creation of temporary files. When these files are created, Docupresentment writes an entry into a properties file called the file cache. This file contains the name, creation date, and expiration time of each temporary file created by each instance of Docupresentment. Periodically, Docupresentment will check the file cache to determine if there are files that are past the expiration time, not unlike that block of cheese festering away in the back of my refrigerator. However, unlike my 'fridge-cleaning tendencies, Docupresentment is quick to remove files that are past their expiration time. You, my friend, have the power to control how often Docupresentment inspects the file cache. Simply set the value for <entry name="FilePurgeTimeSeconds"> to the number of seconds appropriate for your requirements and you're set. Note that file purging happens on a separate thread from normal request processing, so this shouldn't interfere with response times unless the CPU happens to be really taxed at the point of cache processing.

    Finally, after all of this, we get to the final setting I'm going to address in this post: <entry name="FilePurgeList">. The default is "filecache.properties". This establishes the root name for the Docupresentment file cache that I mentioned previously. Docupresentment creates a separate cache file for each instance based on this setting. If you have two instances, you'll see two files created: filecache.properties.1 and filecache.properties.2. Feel free to open these up and check them out.

    I hope you've enjoyed this first foray into the configuration file of Docupresentment. If you did enjoy it, feel free to drop a comment; I welcome feedback. If you have ideas for other posts you'd like to see, please do let me know. You can reach me at [email protected]. 'Til next time! ###
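    Pulling the settings discussed above into one place, a sketch of the relevant docserv.xml section -- the StartCommand value and the purge interval are placeholders, not documented defaults:

        <section name="DocumentServer">
          <!-- name of the executable to run (value elided here) -->
          <entry name="StartCommand">...</entry>
          <entry name="StartArguments">-Dids.configuration=docserv.xml -Dlogging.configuration=logconf.xml -Djava.endorsed.dirs=lib/endorsed</entry>
          <!-- two instances per CPU is the suggested starting point -->
          <entry name="Instances">2</entry>
          <!-- 12000 ms default; 0 disables the config-file watch -->
          <entry name="FileWatchTimeMillis">12000</entry>
          <!-- how often expired temp files are purged (placeholder value) -->
          <entry name="FilePurgeTimeSeconds">60</entry>
          <!-- root name of the per-instance file cache -->
          <entry name="FilePurgeList">filecache.properties</entry>
        </section>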

  • Solution to Jira web service getWorklogs method error: Object of type System.Xml.XmlNode[] cannot be stored in an array of this type

    - by DigiMortal
    When using Jira web service methods that operate on work logs, you may get the following error when running your .NET application: "Object of type System.Xml.XmlNode[] cannot be stored in an array of this type." In this posting I will show you a solution to this problem.

    I don't want to go too deep into the details of this problem. I think it's enough for this posting to mention that it is related to one small conflict between .NET web service support and Axis. Of course, the Jira team is trying to solve it, but until this problem is solved you can use the solution provided here.

    There is a good solution to this problem given by Jira forum user Kostadin. You can find it in the Jira forum thread "RemoteWorkLog serialization from Soap Service in C#". The solution is simple: you have to use a SOAP extension class to replace the new class names with the old ones that .NET found in the WSDL. Here is the code by Kostadin.

        public class JiraSoapExtensions : SoapExtension
        {
            private Stream _streamIn;
            private Stream _streamOut;

            public override void ProcessMessage(SoapMessage message)
            {
                string messageAsString;
                StreamReader reader;
                StreamWriter writer;

                switch (message.Stage)
                {
                    case SoapMessageStage.BeforeSerialize:
                        break;
                    case SoapMessageStage.AfterDeserialize:
                        break;
                    case SoapMessageStage.BeforeDeserialize:
                        reader = new StreamReader(_streamOut);
                        writer = new StreamWriter(_streamIn);
                        messageAsString = reader.ReadToEnd();
                        switch (message.MethodInfo.Name)
                        {
                            case "getWorklogs":
                            case "addWorklogWithNewRemainingEstimate":
                            case "addWorklogAndAutoAdjustRemainingEstimate":
                            case "addWorklogAndRetainRemainingEstimate":
                                messageAsString = messageAsString
                                    .Replace("RemoteWorklogImpl", "RemoteWorklog")
                                    .Replace("service", "beans");
                                break;
                        }
                        writer.Write(messageAsString);
                        writer.Flush();
                        _streamIn.Position = 0;
                        break;
                    case SoapMessageStage.AfterSerialize:
                        _streamIn.Position = 0;
                        reader = new StreamReader(_streamIn);
                        writer = new StreamWriter(_streamOut);
                        messageAsString = reader.ReadToEnd();
                        writer.Write(messageAsString);
                        writer.Flush();
                        break;
                }
            }

            public override Stream ChainStream(Stream stream)
            {
                _streamOut = stream;
                _streamIn = new MemoryStream();
                return _streamIn;
            }

            public override object GetInitializer(Type type)
            {
                return GetType();
            }

            public override object GetInitializer(LogicalMethodInfo info,
                SoapExtensionAttribute attribute)
            {
                return null;
            }

            public override void Initialize(object initializer)
            {
            }
        }

    To get this extension to work with the Jira web service, you have to add the following block to your application configuration file (under the system.web section):

        <webServices>
          <soapExtensionTypes>
            <add type="JiraStudioExperiments.JiraSoapExtensions,JiraStudioExperiments"
                 priority="1"/>
          </soapExtensionTypes>
        </webServices>

    The weird thing is that after successfully using this extension and then disabling it, everything still works.

  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction

    More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site.

    Hive Actions: Prepping for Pig

    In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie.

    I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code.

        CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column 1, column2)
        PARTITIONED BY (yr string)
        STORED AS ...
        LOCATION '/user/oracle/weather/historic';

    As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of their long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access.

        ALTER TABLE historic_weather ADD IF NOT EXISTS
        PARTITION (yr='2010')
        LOCATION '/user/oracle/weather/historic/yr=2011';

        INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history'
        SELECT w.stn, w.wban, w.weather_year, w.weather_month,
               w.weather_day, w.temp, w.dewp, w.weather
        FROM (
          FROM historic_weather
          SELECT TRANSFORM(...)
          USING '/path/to/hive/filters/ncdc_parser.py'
          AS stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather
        ) w;

    Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called weather_train.hql.

    Starting Our Workflow

    Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph. Coordinator jobs can take all the same actions as workflow jobs, but they can be automatically started either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling (see the sketch at the end of this post). The bare minimum for workflow XML defines a name, a starting point, and an end point:

        <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1">
          <start to="ParseNCDCData"/>
          <end name="end"/>
        </workflow-app>

    To this we need to add an action, and within that we'll specify the Hive parameters. Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure.

        <action name="ParseNCDCData">
          <hive xmlns="uri:oozie:hive-action:0.2">
            <job-tracker>localhost:8021</job-tracker>
            <name-node>localhost:8020</name-node>
            <configuration>
              <property>
                <name>oozie.hive.defaults</name>
                <value>/user/oracle/weather_ooze/hive-default.xml</value>
              </property>
            </configuration>
            <script>ncdc_parse.hql</script>
          </hive>
          <ok to="WeatherMan"/>
          <error to="end"/>
        </action>

    There are a couple of things to note here:

    - I have to give the FQDN (or IP) and port of my JobTracker and NameNode.
    - I have to include a hive-default.xml file.
    - I have to include a script file.
    - The hive-default.xml and script file must be stored in HDFS.

    That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-defaults.xml files on different clusters (e.g. MySQL or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need into it as you add actions to workflow.xml. At this point, our local directory should contain:

    - workflow.xml
    - hive-defaults.xml (make sure this file contains your metastore connection data)
    - ncdc_parse.hql

    Adding Pig to the Ooze

    Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows:

        <action name="WeatherMan">
          <pig>
            <job-tracker>localhost:8021</job-tracker>
            <name-node>localhost:8020</name-node>
            <script>weather_train.pig</script>
          </pig>
          <ok to="end"/>
          <error to="end"/>
        </action>

    Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My Pig script registers the Weka jar and a chunk of Jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the Pig script, because Pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka jar goes.

    While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the Pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding jars to the distributed cache from Oozie's Pig Cookbook.

    Making the Workflow Work

    We've got a workflow defined and have collected all the components we'll need to run. But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows:

        nameNode=hdfs://localhost:8020
        jobTracker=localhost:8021
        queueName=default
        weatherRoot=weather_ooze
        mapreduce.jobtracker.kerberos.principal=foo
        dfs.namenode.kerberos.principal=foo
        oozie.libpath=${nameNode}/user/oozie/share/lib
        oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot}
        outputDir=weather-ooze

    While some of the pieces of the properties file are familiar (e.g., the JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath piece is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed, and write the output to the output directory.

    We're finally ready to submit our job! After all that work we only need to do a few more things:

    1. Validate our workflow.xml.
    2. Copy our working directory to HDFS.
    3. Submit our job to the Oozie server.
    4. Run our workflow.

    Let's do them in order. First, validate the workflow:

        oozie validate workflow.xml

    Next, copy the working directory up to HDFS:

        hadoop fs -put working_dir /user/oracle/working_dir

    Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument:

        oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit

    We've submitted the job, but we don't see any activity on the JobTracker? All I got was this funny bit of output:

        14-20120525161321-oozie-oracle

    This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job:

        oozie -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle

    Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run". This will prep and run the workflow immediately.

    Takeaway

    So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.
