Search Results

Search found 22267 results on 891 pages for 'org mode'.


  • Error: Can't find common super class of ...

    - by PatlaDJ
    I am trying to process a Windows desktop application (Java 6 SE, using the SWT library provided by Eclipse) with ProGuard, and I get the following critical error:

        Unexpected error while performing partial evaluation:
        Class = [org/eclipse/swt/widgets/DateTime]
        Method = [<init>(Lorg/eclipse/swt/widgets/Composite;I)V]
        Exception = [java.lang.IllegalArgumentException] (Can't find common super class of [java/lang/StringBuffer] and [org/eclipse/swt/internal/win32/TCHAR])
        Error: Can't find common super class of [java/lang/StringBuffer] and [org/eclipse/swt/internal/win32/TCHAR]

    When I searched for the error, it turned up in only two places on the entire web, which greatly astonished me. I am a newbie with ProGuard and Java code optimization tools in general. Any thoughts or suggestions on how to fix this would be appreciated. Thanks in advance.

    Read the article

  • How to load font from InputStream in SWT?

    - by parxier
    I need to load a font (.otf or .ttf) file from a Java resource or InputStream in SWT. org.eclipse.swt.graphics.Device.loadFont(String path) lets me load a font from a file path (and it works), but there is no corresponding method to load one from any other source. I was thinking of using java.awt.Font.createFont(int fontFormat, InputStream fontStream) and then building org.eclipse.swt.graphics.FontData and org.eclipse.swt.graphics.Font objects out of the AWT java.awt.Font object. Since I haven't tried that option yet (I don't even know whether it works that way), I was wondering if there are any other options available.
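
    One possible workaround, sketched below under the assumption that writing to a temporary file is acceptable: copy the InputStream to a temp file and pass its path to Device.loadFont(String), since that is the only loading method SWT exposes. The class and method names here are made up for illustration:

        import java.io.File;
        import java.io.FileOutputStream;
        import java.io.InputStream;
        import java.io.OutputStream;

        import org.eclipse.swt.graphics.Device;

        public final class SwtFontLoader {
            // Spills the stream into a temporary file and asks SWT to load the font from it.
            public static boolean loadFont(Device device, InputStream fontStream) throws Exception {
                File tmp = File.createTempFile("embedded-font", ".ttf");
                tmp.deleteOnExit();
                OutputStream out = new FileOutputStream(tmp);
                try {
                    byte[] buffer = new byte[8192];
                    int read;
                    while ((read = fontStream.read(buffer)) != -1) {
                        out.write(buffer, 0, read);
                    }
                } finally {
                    out.close();
                }
                // Device.loadFont(String) returns true if the font was loaded successfully.
                return device.loadFont(tmp.getAbsolutePath());
            }
        }

    Writing a temp file is not elegant, but it avoids pulling AWT into an SWT application just for the font conversion.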

    Read the article

  • UIView layout and orientation changes

    - by Raja Marimuthu
    Hi, I am supporting all orientations in my iPad app. I am adjusting my views with autoresizingMask for orientation changes (main view and tab bar). But the subviews in the main view flow out of the main view in landscape mode, so I forced a setNeedsLayout on the main view, making the subviews fit inside the main view's boundary. The issue is that subviews added in the lower part of the main view do not respond to touches in landscape mode, but work fine in portrait mode. Any help?

    Read the article

  • How do I suppress eclipse warning: Referenced identifier 'VIEWNAME:secondaryid' in attribute 'id' ca

    - by Jeremy
    In eclipse 3.4, here is the section of my plugin.xml:

        <extension point="org.eclipse.ui.views">
            <view allowMultiple="true"
                  class="the.full.class.name"
                  icon="images/icon.gif"
                  id="VIEWNAME"
                  name="View Name">
            </view>
        </extension>
        <extension point="org.eclipse.ui.perspectiveExtensions">
            <perspectiveExtension targetID="com.p21csi.fps.sim.rcp.perspective.SimPerspective">
                <view closeable="false"
                      id="VIEWNAME:secondaryid"
                      minimized="false"
                      moveable="false"
                      ratio=".75"
                      relationship="left"
                      relative="org.eclipse.ui.editorss"
                      showTitle="true"
                      standalone="true"
                      visible="true">
                </view>
            </perspectiveExtension>
        </extension>

    The application works fine, I just can't get rid of that annoying warning!

    Read the article

  • Linq To Sql - DataContext.SubmitChanges() problem

    - by Ahmet Altun
    I have code like this, where DBContext is a DataContext instance:

        try
        {
            TBLORGANISM org = new TBLORGANISM();
            org.OrganismDesc = p.Subject;
            DBContext.TBLORGANISMs.InsertOnSubmit(org);
            DBContext.SubmitChanges();
        }
        catch (Exception) { }

    At this point I want the error to be IGNORED and the insert to be skipped, not retried. But when I try another insert, like

        TBLACTION act = new TBLACTION();
        act.ActionDesc = p.ActionName;
        DBContext.TBLACTIONs.InsertOnSubmit(act);
        DBContext.SubmitChanges();

    SubmitChanges first retries the previous attempt. How can I tell it to "skip errors, don't try again"?

    Read the article

  • Different log4j patterns on the same file depending on log level?

    - by gorca
    Hi, I'd like log events of level WARN or higher to show the class name; all others would not show it. This is both to simplify the log and to avoid the performance hit on lower-level events such as TRACE. Everything must go to the same log file. For example, right now my log file contains this:

        2010-04-06 18:50:16,416 [main] INFO org.nyjord.lib.gather.TempMachine - initialised successfuly.
        2010-04-06 18:50:16,416 [main] FATAL org.nyjord.lib.gather.TempMachine - not all paths could be located

    I would prefer this, in the same file:

        2010-04-06 18:50:16,416 [main] INFO - initialised successfuly.
        2010-04-06 18:50:16,416 [main] FATAL org.nyjord.lib.gather.TempMachine - not all paths could be located

    Help would be really welcome.
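
    One way to get level-dependent formatting into a single file (a sketch, assuming log4j 1.2; the class name and pattern strings are placeholders to adapt) is a custom Layout that delegates to two PatternLayouts and picks one based on the event's level:

        import org.apache.log4j.Layout;
        import org.apache.log4j.Level;
        import org.apache.log4j.PatternLayout;
        import org.apache.log4j.spi.LoggingEvent;

        public class LevelAwareLayout extends Layout {
            // Terse pattern for TRACE/DEBUG/INFO, full pattern (with logger name) for WARN and above.
            private final Layout terse = new PatternLayout("%d [%t] %-5p - %m%n");
            private final Layout full  = new PatternLayout("%d [%t] %-5p %c - %m%n");

            public String format(LoggingEvent event) {
                return event.getLevel().isGreaterOrEqual(Level.WARN)
                        ? full.format(event)
                        : terse.format(event);
            }

            public boolean ignoresThrowable() {
                return terse.ignoresThrowable();
            }

            public void activateOptions() {
                // nothing to configure in this sketch
            }
        }

    You would then set this class as the layout of your single FileAppender in log4j.properties or log4j.xml. Note that %c prints the logger name, which matches the class name only when loggers are created per class.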

    Read the article

  • How can I retain editing mode in the search results with UISearchDisplayController?

    - by wal
    I want to search a table in editing mode, but when I type a letter, tableView.editing changes to NO, and when I cancel the search, tableView.editing returns to YES. I'd like editing to stay YES in the search results. I put breakpoints in several methods: up to the filterContentForSearchText method the value is YES, but in cellForRowAtIndexPath it is NO. [Image: the table in editing mode just before typing a letter in the search bar] [Image: the search] [Image: the table after searching (editing mode again)]

    Read the article

  • ClassNotFoundException error in implementing Bayesian algorithm in Apache Mahout on Hadoop

    - by Shweta
    Hi, I have a problem executing the Bayesian algorithm in Mahout. I built it with Maven and the job file is in the target directory. When I run it from the terminal using Hadoop, I get a ClassNotFoundException. What should be done?

        $HADOOP_HOME/bin/hadoop jar mahout-core-0.3-SNAPSHOT.job org.apache.mahout.classifier.bayes.mapreduce.bayes.bayesdriver -i test -o output

        Exception in thread "main" java.lang.ClassNotFoundException: org.apache.mahout.classifier.bayes.mapreduce.bayes.bayesdriver
            at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
            at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
            at java.lang.Class.forName0(Native Method)
            at java.lang.Class.forName(Class.java:247)
            at org.apache.hadoop.util.RunJar.main(RunJar.java:149)

    Read the article

  • Size of static libraries generated by XCode

    - by shaft80
    I have a project tree in Xcode that looks like this: AppProject depends on ObjcWrapper, which in turn depends on PureCppLib. ObjcWrapper and PureCppLib are static library projects. Combined, all sources barely reach 15k lines of code, and, as expected, the size of the resulting binary is about 750 KB in release mode and slightly over 1 MB in debug mode. So far, so good. However, ObjcWrapper.a and PureCppLib.a are over 6 MB each in either mode. So the first question is why that is. But more importantly, how can I ensure that those static libs do not include parts or all of the source code? Thanks in advance!

    Read the article

  • Java: Interface for this code

    - by ibrahim
    Please i neeed help to make interface for this code: package com.ejada.alinma.edh.xsdtransform; import java.io.File; import java.io.FileReader; import java.io.FileWriter; import java.io.StringWriter; import java.text.SimpleDateFormat; import java.util.ArrayList; import java.util.Date; import java.util.HashMap; import java.util.Iterator; import java.util.Properties; import java.util.StringTokenizer; import javax.xml.parsers.DocumentBuilder; import javax.xml.parsers.DocumentBuilderFactory; import javax.xml.transform.Result; import javax.xml.transform.Source; import javax.xml.transform.Transformer; import javax.xml.transform.TransformerFactory; import javax.xml.transform.dom.DOMSource; import javax.xml.transform.stream.StreamResult; /*import org.apache.log4j.Logger;*/ import org.apache.log4j.PropertyConfigurator; import org.w3c.dom.Document; import org.w3c.dom.DocumentFragment; import org.w3c.dom.Element; import org.w3c.dom.Node; import org.w3c.dom.NodeList; import com.sun.org.apache.xml.internal.serialize.OutputFormat; import com.sun.org.apache.xml.internal.serialize.XMLSerializer; /** * An XSD Transformer that replaces the "name" attribute's value in T24 XSDs * with the "shortname" attribute's value * * @author ahusseiny * */ public class XSDTransformer { /** * constants representing the XSD tags and attributes' names used in the parse process */ public static final String TAG_SCHEMA = "xsd:schema"; public static final String TAG_TEXT = "#text"; public static final String TAG_COMPLEX_TYPE = "xsd:complexType"; public static final String TAG_SIMPLE_TYPE = "xsd:simpleType"; public static final String TAG_SEQUENCE = "xsd:sequence"; public static final String TAG_ATTRIBUTE = "xsd:attribute"; public static final String TAG_ELEMENT = "xsd:element"; public static final String TAG_ANNOTATION = "xsd:annotation"; public static final String TAG_APP_INFO = "xsd:appinfo"; public static final String TAG_HAS_PROPERTY = "xsd:hasProperty"; public static final String TAG_RESTRICTION = "xsd:restriction"; public static final String TAG_MAX_LENGTH = "xsd:maxLength"; public static final String ATTR_NAME = "name"; public static final String ATTR_VALUE = "value"; public static final String ATTR_TYPE = "type"; public static final String ATTR_MIXED = "mixed"; public static final String ATTR_USE = "use"; public static final String ATTR_REF = "ref"; public static final String ATTR_MAX_OCCURS = "maxOccurs"; /** * constants representing specific XSD attributes' values used in the parse process */ public static final String FIELD_TAG = "fieldtag"; public static final String FIELD_NUMBER = "fieldnumber"; public static final String FIELD_DATA_TYPE = "fielddatatype"; public static final String FIELD_FMT = "fieldfmt"; public static final String FIELD_LEN = "fieldlen"; public static final String FIELD_INPUT_LEN = "fieldinputlen"; public static final String FIELD_GROUP_NUMBER = "fieldgroupnumber"; public static final String FIELD_MV_GROUP_NUMBER = "fieldmvgroupnumber"; public static final String FIELD_SHORT_NAME = "fieldshortname"; public static final String FIELD_NAME = "fieldname"; public static final String FIELD_COLUMN_NAME = "fieldcolumnname"; public static final String FIELD_GROUP_NAME = "fieldgroupname"; public static final String FIELD_MV_GROUP_NAME = "fieldmvgroupname"; public static final String FIELD_JUSTIFICATION = "fieldjustification"; public static final String FIELD_TYPE = "fieldtype"; public static final String FIELD_SINGLE_OR_MULTI = "singleormulti"; public static final String DELIMITER_COLUMN_TYPE = "#"; 
public static final String COLUMN_FK_ROW = "FK_ROW"; public static final String COLUMN_XPK_ROW = "XPK_ROW"; public static final int SQL_VIEW_MULTI = 1; public static final int SQL_VIEW_SINGLE = 2; public static final String DATA_TYPE_XSD_NUMERIC = "numeric"; public static final String DATA_TYPE_XSD_DECIMAL = "decimal"; public static final String DATA_TYPE_XSD_STRING = "string"; public static final String DATA_TYPE_XSD_DATE = "date"; /** * application configuration properties */ public static final String PROP_LOG4J_CONFIG_FILE = "log4j_config"; public static final String PROP_MAIN_VIEW_NAME_SINGLE = "view_name_single"; public static final String PROP_MAIN_VIEW_NAME_MULTI = "view_name_multi"; public static final String PROP_MAIN_TABLE_NAME = "main_edh_table_name"; public static final String PROP_SUB_TABLE_PREFIX = "sub_table_prefix"; public static final String PROP_SOURCE_XSD_FULLNAME = "source_xsd_fullname"; public static final String PROP_RESULTS_PATH = "results_path"; public static final String PROP_NEW_XSD_FILENAME = "new_xsd_filename"; public static final String PROP_CSV_FILENAME = "csv_filename"; /** * static holders for application-level utilities */ private static Properties appProps; private static Logger appLogger; /** * */ private StringBuffer sqlViewColumnsSingle = null; private StringBuffer sqlViewSelectSingle = null; private StringBuffer columnsCSV = null; private ArrayList<String> singleValueTableColumns = null; private HashMap<String, String> multiValueTablesSQL = null; private HashMap<Object, HashMap<String, Object>> groupAttrs = null; public XSDTransformer(String appConfigPropsPath) { if (appProps == null) { appProps = new Properties(); } try { init(appConfigPropsPath); } catch (Exception e) { appLogger.error(e.getMessage()); } } /** * initialization */ private void init(String appConfigPropsPath) throws Exception { // init the properties object FileReader in = new FileReader(appConfigPropsPath); appProps.load(in); // init the logger if ((appProps.getProperty(XSDTransformer.PROP_LOG4J_CONFIG_FILE) != null) && (!appProps.getProperty(XSDTransformer.PROP_LOG4J_CONFIG_FILE).equals(""))) { PropertyConfigurator.configure(appProps.getProperty(XSDTransformer.PROP_LOG4J_CONFIG_FILE)); if (appLogger == null) { appLogger = Logger.getLogger(XSDTransformer.class.getName()); } appLogger.info("Application initialization successful."); } sqlViewColumnsSingle = new StringBuffer(); sqlViewSelectSingle = new StringBuffer(); columnsCSV = new StringBuffer(XSDTransformer.FIELD_TAG + "," + XSDTransformer.FIELD_NUMBER + "," + XSDTransformer.FIELD_DATA_TYPE + "," + XSDTransformer.FIELD_FMT + "," + XSDTransformer.FIELD_LEN + "," + XSDTransformer.FIELD_INPUT_LEN + "," + XSDTransformer.FIELD_GROUP_NUMBER + "," + XSDTransformer.FIELD_MV_GROUP_NUMBER + "," + XSDTransformer.FIELD_SHORT_NAME + "," + XSDTransformer.FIELD_NAME + "," + XSDTransformer.FIELD_COLUMN_NAME + "," + XSDTransformer.FIELD_GROUP_NAME + "," + XSDTransformer.FIELD_MV_GROUP_NAME + "," + XSDTransformer.FIELD_JUSTIFICATION + "," + XSDTransformer.FIELD_TYPE + "," + XSDTransformer.FIELD_SINGLE_OR_MULTI + System.getProperty("line.separator")); singleValueTableColumns = new ArrayList<String>(); singleValueTableColumns.add(XSDTransformer.COLUMN_XPK_ROW + XSDTransformer.DELIMITER_COLUMN_TYPE + XSDTransformer.DATA_TYPE_XSD_NUMERIC); multiValueTablesSQL = new HashMap<String, String>(); groupAttrs = new HashMap<Object, HashMap<String, Object>>(); } /** * initialize the <code>DocumentBuilder</code> and read the XSD file * * @param docPath * @return 
the <code>Document</code> object representing the read XSD file */ private Document retrieveDoc(String docPath) { Document xsdDoc = null; File file = new File(docPath); try { DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder(); xsdDoc = builder.parse(file); } catch (Exception e) { appLogger.error(e.getMessage()); } return xsdDoc; } /** * perform the iteration/modification on the document * iterate to the level which contains all the elements (Single-Value, and Groups) and start processing each * * @param xsdDoc * @return */ private Document transformDoc(Document xsdDoc) { ArrayList<Object> newElementsList = new ArrayList<Object>(); HashMap<String, Object> docAttrMap = new HashMap<String, Object>(); Element sequenceElement = null; Element schemaElement = null; // get document's root element NodeList nodes = xsdDoc.getChildNodes(); for (int i = 0; i < nodes.getLength(); i++) { if (XSDTransformer.TAG_SCHEMA.equals(nodes.item(i).getNodeName())) { schemaElement = (Element) nodes.item(i); break; } } // process the document (change single-value elements, collect list of new elements to be added) for (int i1 = 0; i1 < schemaElement.getChildNodes().getLength(); i1++) { Node childLevel1 = (Node) schemaElement.getChildNodes().item(i1); // <ComplexType> element if (childLevel1.getNodeName().equals(XSDTransformer.TAG_COMPLEX_TYPE)) { // first, get the main attributes and put it in the csv file for (int i6 = 0; i6 < childLevel1.getChildNodes().getLength(); i6++) { Node child6 = childLevel1.getChildNodes().item(i6); if (XSDTransformer.TAG_ATTRIBUTE.equals(child6.getNodeName())) { if (child6.getAttributes().getNamedItem(XSDTransformer.ATTR_NAME) != null) { String attrName = child6.getAttributes().getNamedItem(XSDTransformer.ATTR_NAME).getNodeValue(); if (((Element) child6).getElementsByTagName(XSDTransformer.TAG_SIMPLE_TYPE).getLength() != 0) { Node simpleTypeElement = ((Element) child6).getElementsByTagName(XSDTransformer.TAG_SIMPLE_TYPE) .item(0); if (((Element) simpleTypeElement).getElementsByTagName(XSDTransformer.TAG_RESTRICTION).getLength() != 0) { Node restrictionElement = ((Element) simpleTypeElement).getElementsByTagName( XSDTransformer.TAG_RESTRICTION).item(0); if (((Element) restrictionElement).getElementsByTagName(XSDTransformer.TAG_MAX_LENGTH).getLength() != 0) { Node maxLengthElement = ((Element) restrictionElement).getElementsByTagName( XSDTransformer.TAG_MAX_LENGTH).item(0); HashMap<String, String> elementProperties = new HashMap<String, String>(); elementProperties.put(XSDTransformer.FIELD_TAG, attrName); elementProperties.put(XSDTransformer.FIELD_NUMBER, "0"); elementProperties.put(XSDTransformer.FIELD_DATA_TYPE, XSDTransformer.DATA_TYPE_XSD_STRING); elementProperties.put(XSDTransformer.FIELD_FMT, ""); elementProperties.put(XSDTransformer.FIELD_NAME, attrName); elementProperties.put(XSDTransformer.FIELD_SHORT_NAME, attrName); elementProperties.put(XSDTransformer.FIELD_COLUMN_NAME, attrName); elementProperties.put(XSDTransformer.FIELD_SINGLE_OR_MULTI, "S"); elementProperties.put(XSDTransformer.FIELD_LEN, maxLengthElement.getAttributes().getNamedItem( XSDTransformer.ATTR_VALUE).getNodeValue()); elementProperties.put(XSDTransformer.FIELD_INPUT_LEN, maxLengthElement.getAttributes() .getNamedItem(XSDTransformer.ATTR_VALUE).getNodeValue()); constructElementRow(elementProperties); // add the attribute as a column in the single-value table singleValueTableColumns.add(attrName + XSDTransformer.DELIMITER_COLUMN_TYPE + XSDTransformer.DATA_TYPE_XSD_STRING + 
XSDTransformer.DELIMITER_COLUMN_TYPE + maxLengthElement.getAttributes().getNamedItem(XSDTransformer.ATTR_VALUE).getNodeValue()); // add the attribute as a column in the single-values view sqlViewColumnsSingle.append(System.getProperty("line.separator") + attrName + ", "); sqlViewSelectSingle.append(System.getProperty("line.separator") + attrName + ", "); appLogger.debug("added attribute: " + attrName); } } } } } } // now, loop on the elements and process them for (int i2 = 0; i2 < childLevel1.getChildNodes().getLength(); i2++) { Node childLevel2 = (Node) childLevel1.getChildNodes().item(i2); // <Sequence> element if (childLevel2.getNodeName().equals(XSDTransformer.TAG_SEQUENCE)) { sequenceElement = (Element) childLevel2; for (int i3 = 0; i3 < childLevel2.getChildNodes().getLength(); i3++) { Node childLevel3 = (Node) childLevel2.getChildNodes().item(i3); // <Element> element if (childLevel3.getNodeName().equals(XSDTransformer.TAG_ELEMENT)) { // check if single element or group if (isGroup(childLevel3)) { processGroup(childLevel3, true, null, docAttrMap, xsdDoc, newElementsList); // insert a new comment node with the contents of the group tag sequenceElement.insertBefore(xsdDoc.createComment(serialize(childLevel3)), childLevel3); // remove the group tag sequenceElement.removeChild(childLevel3); } else { processElement(childLevel3); } } } } } } } // add new elements // this step should be after finishing processing the whole document. when you add new elements to the document // while you are working on it, those new elements will be included in the processing. We don't need that! for (int i = 0; i < newElementsList.size(); i++) { sequenceElement.appendChild((Element) newElementsList.get(i)); } // write the new required attributes to the schema element Iterator<String> attrIter = docAttrMap.keySet().iterator(); while(attrIter.hasNext()) { Element attr = (Element) docAttrMap.get(attrIter.next()); Element newAttrElement = xsdDoc.createElement(XSDTransformer.TAG_ATTRIBUTE); appLogger.debug("appending attr. [" + attr.getAttribute(XSDTransformer.ATTR_NAME) + "]..."); newAttrElement.setAttribute(XSDTransformer.ATTR_NAME, attr.getAttribute(XSDTransformer.ATTR_NAME)); newAttrElement.setAttribute(XSDTransformer.ATTR_TYPE, attr.getAttribute(XSDTransformer.ATTR_TYPE)); schemaElement.appendChild(newAttrElement); } return xsdDoc; } /** * check if the <code>element</code> sent is single-value element or group * element. the comparison depends on the children of the element. 
if found one of type * <code>ComplexType</code> then it's a group element, and if of type * <code>SimpleType</code> then it's a single-value element * * @param element * @return <code>true</code> if the element is a group element, * <code>false</code> otherwise */ private boolean isGroup(Node element) { for (int i = 0; i < element.getChildNodes().getLength(); i++) { Node child = (Node) element.getChildNodes().item(i); if (child.getNodeName().equals(XSDTransformer.TAG_COMPLEX_TYPE)) { // found a ComplexType child (Group element) return true; } else if (child.getNodeName().equals(XSDTransformer.TAG_SIMPLE_TYPE)) { // found a SimpleType child (Single-Value element) return false; } } return false; /* String attrName = null; if (element.getAttributes() != null) { Node attribute = element.getAttributes().getNamedItem(XSDTransformer.ATTR_NAME); if (attribute != null) { attrName = attribute.getNodeValue(); } } if (attrName.startsWith("g")) { // group element return true; } else { // single element return false; } */ } /** * process a group element. recursively, process groups till no more group elements are found * * @param element * @param isFirstLevelGroup * @param attrMap * @param docAttrMap * @param xsdDoc * @param newElementsList */ private void processGroup(Node element, boolean isFirstLevelGroup, Node parentGroup, HashMap<String, Object> docAttrMap, Document xsdDoc, ArrayList<Object> newElementsList) { String elementName = null; HashMap<String, Object> groupAttrMap = new HashMap<String, Object>(); HashMap<String, Object> parentGroupAttrMap = new HashMap<String, Object>(); if (element.getAttributes().getNamedItem(XSDTransformer.ATTR_NAME) != null) { elementName = element.getAttributes().getNamedItem(XSDTransformer.ATTR_NAME).getNodeValue(); } appLogger.debug("processing group [" + elementName + "]..."); // get the attributes if a non-first-level-group // attributes are: groups's own attributes + parent group's attributes if (!isFirstLevelGroup) { // get the current element (group) attributes for (int i1 = 0; i1 < element.getChildNodes().getLength(); i1++) { if (XSDTransformer.TAG_COMPLEX_TYPE.equals(element.getChildNodes().item(i1).getNodeName())) { Node complexTypeNode = element.getChildNodes().item(i1); for (int i2 = 0; i2 < complexTypeNode.getChildNodes().getLength(); i2++) { if (XSDTransformer.TAG_ATTRIBUTE.equals(complexTypeNode.getChildNodes().item(i2).getNodeName())) { appLogger.debug("add group attr: " + ((Element) complexTypeNode.getChildNodes().item(i2)).getAttribute(XSDTransformer.ATTR_NAME)); groupAttrMap.put(((Element) complexTypeNode.getChildNodes().item(i2)).getAttribute(XSDTransformer.ATTR_NAME), complexTypeNode.getChildNodes().item(i2)); docAttrMap.put(((Element) complexTypeNode.getChildNodes().item(i2)).getAttribute(XSDTransformer.ATTR_NAME), complexTypeNode.getChildNodes().item(i2)); } } } } // now, get the parent's attributes parentGroupAttrMap = groupAttrs.get(parentGroup); if (parentGroupAttrMap != null) { Iterator<String> iter = parentGroupAttrMap.keySet().iterator(); while (iter.hasNext()) { String attrName = iter.next(); groupAttrMap.put(attrName, parentGroupAttrMap.get(attrName)); } } // put the attributes in the attributes map groupAttrs.put(element, groupAttrMap); } for (int i = 0; i < element.getChildNodes().getLength(); i++) { Node childLevel1 = (Node) element.getChildNodes().item(i); if (childLevel1.getNodeName().equals(XSDTransformer.TAG_COMPLEX_TYPE)) { for (int j = 0; j < childLevel1.getChildNodes().getLength(); j++) { Node childLevel2 = (Node) 
childLevel1.getChildNodes().item(j); if (childLevel2.getNodeName().equals(XSDTransformer.TAG_SEQUENCE)) { for (int k = 0; k < childLevel2.getChildNodes().getLength(); k++) { Node childLevel3 = (Node) childLevel2.getChildNodes().item(k); if (childLevel3.getNodeName().equals(XSDTransformer.TAG_ELEMENT)) { // check if single element or group if (isGroup(childLevel3)) { // another group element.. // unfortunately, a recursion is // needed here!!! :-( processGroup(childLevel3, false, element, docAttrMap, xsdDoc, newElementsList); } else { // reached a single-value element.. copy it under the // main sequence and apply the name-shorname // replacement processGroupElement(childLevel3, element, isFirstLevelGroup, xsdDoc, newElementsList); } } } } } } } appLogger.debug("finished processing group [" + elementName + "]."); } /** * process the sent <code>element</code> to extract/modify required * information: * 1. replace the <code>name</code> attribute with the <code>shortname</code>. * * @param element */ private void processElement(Node element) { String fieldShortName = null; String fieldColumnName = null; String fieldDataType = null; String fieldFormat = null; String fieldInputLength = null; String elementName = null; HashMap<String, String> elementProperties = new HashMap<String, String>(); if (element.getAttributes().getNamedItem(XSDTransformer.ATTR_NAME) != null) { elementName = element.getAttributes().getNamedItem(XSDTransformer.ATTR_NAME).getNodeValue(); } appLogger.debug("processing element [" + elementName + "]..."); for (int i = 0; i < element.getChildNodes().getLength(); i++) { Node childLevel1 = (Node) element.getChildNodes().item(i); if (childLevel1.getNodeName().equals(XSDTransformer.TAG_ANNOTATION)) { for (int j = 0; j < childLevel1.getChildNodes().getLength(); j++) { Node childLevel2 = (Node) childLevel1.getChildNodes().item(j); if (childLevel2.getNodeName().equals(XSDTransformer.TAG_APP_INFO)) { for (int k = 0; k < childLevel2.getChildNodes().getLength(); k++) { Node childLevel3 = (Node) childLevel2.getChildNodes().item(k); if (childLevel3.getNodeName().equals(XSDTransformer.TAG_HAS_PROPERTY)) { if (childLevel3.getAttributes() != null) { String attrName = null; Node attribute = childLevel3.getAttributes().getNamedItem(XSDTransformer.ATTR_NAME); if (attribute != null) { attrName = attribute.getNodeValue(); elementProperties.put(attrName, childLevel3.getAttributes().getNamedItem(XSDTransformer.ATTR_VALUE) .getNodeValue()); if (attrName.equals(XSDTransformer.FIELD_SHORT_NAME)) { fieldShortName = childLevel3.getAttributes().getNamedItem(XSDTransformer.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(XSDTransformer.FIELD_COLUMN_NAME)) { fieldColumnName = childLevel3.getAttributes().getNamedItem(XSDTransformer.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(XSDTransformer.FIELD_DATA_TYPE)) { fieldDataType = childLevel3.getAttributes().getNamedItem(XSDTransformer.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(XSDTransformer.FIELD_FMT)) { fieldFormat = childLevel3.getAttributes().getNamedItem(XSDTransformer.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(XSDTransformer.FIELD_INPUT_LEN)) { fieldInputLength = childLevel3.getAttributes().getNamedItem(XSDTransformer.ATTR_VALUE) .getNodeValue(); } } } } } } } } } if (element.getAttributes().getNamedItem(XSDTransformer.ATTR_NAME) != null) { element.getAttributes().getNamedItem(XSDTransformer.ATTR_NAME).setNodeValue(fieldShortName); } sqlViewColumnsSingle.append(System.getProperty("line.separator") + 
fieldColumnName + ", "); sqlViewSelectSingle.append(System.getProperty("line.separator") + fieldShortName + ", "); elementProperties.put(XSDTransformer.FIELD_SINGLE_OR_MULTI, "S"); constructElementRow(elementProperties); singleValueTableColumns.add(fieldShortName + XSDTransformer.DELIMITER_COLUMN_TYPE + fieldDataType + fieldFormat + XSDTransformer.DELIMITER_COLUMN_TYPE + fieldInputLength); appLogger.debug("finished processing element [" + elementName + "]."); } /** * process the sent <code>element</code> to extract/modify required * information: * 1. copy the element under the main sequence * 2. replace the <code>name</code> attribute with the <code>shortname</code>. * 3. add the attributes of the parent groups (if non-first-level-group) * * @param element */ private void processGroupElement(Node element, Node parentGroup, boolean isFirstLevelGroup, Document xsdDoc, ArrayList<Object> newElementsList) { String fieldShortName = null; String fieldColumnName = null; String fieldDataType = null; String fieldFormat = null; String fieldInputLength = null; String elementName = null; Element newElement = null; HashMap<String, String> elementProperties = new HashMap<String, String>(); ArrayList<String> tableColumns = new ArrayList<String>(); HashMap<String, Object> groupAttrMap = null; if (element.getAttributes().getNamedItem(XSDTransformer.ATTR_NAME) != null) { elementName = element.getAttributes().getNamedItem(XSDTransformer.ATTR_NAME).getNodeValue(); } appLogger.debug("processing element [" + elementName + "]..."); // 1. copy the element newElement = (Element) element.cloneNode(true); newElement.setAttribute(XSDTransformer.ATTR_MAX_OCCURS, "unbounded"); // 2. if non-first-level-group, replace the element's SimpleType tag with a ComplexType tag if (!isFirstLevelGroup) { if (((Element) newElement).getElementsByTagName(XSDTransformer.TAG_SIMPLE_TYPE).getLength() != 0) { // there should be only one tag of SimpleType Node simpleTypeNode = ((Element) newElement).getElementsByTagName(XSDTransformer.TAG_SIMPLE_TYPE).item(0); // create the new ComplexType element Element complexTypeNode = xsdDoc.createElement(XSDTransformer.TAG_COMPLEX_TYPE); complexTypeNode.setAttribute(XSDTransformer.ATTR_MIXED, "true"); // get the list of attributes for the parent group groupAttrMap = groupAttrs.get(parentGroup); Iterator<String> attrIter = groupAttrMap.keySet().iterator(); while(attrIter.hasNext()) { Element attr = (Element) groupAttrMap.get(attrIter.next()); Element newAttrElement = xsdDoc.createElement(XSDTransformer.TAG_ATTRIBUTE); appLogger.debug("adding attr. [" + attr.getAttribute(XSDTransformer.ATTR_NAME) + "]..."); newAttrElement.setAttribute(XSDTransformer.ATTR_REF, attr.getAttribute(XSDTransformer.ATTR_NAME)); newAttrElement.setAttribute(XSDTransformer.ATTR_USE, "optional"); complexTypeNode.appendChild(newAttrElement); } // replace the old SimpleType node with the new ComplexType node newElement.replaceChild(complexTypeNode, simpleTypeNode); } } // 3. 
replace the name with the shortname in the new element for (int i = 0; i < newElement.getChildNodes().getLength(); i++) { Node childLevel1 = (Node) newElement.getChildNodes().item(i); if (childLevel1.getNodeName().equals(XSDTransformer.TAG_ANNOTATION)) { for (int j = 0; j < childLevel1.getChildNodes().getLength(); j++) { Node childLevel2 = (Node) childLevel1.getChildNodes().item(j); if (childLevel2.getNodeName().equals(XSDTransformer.TAG_APP_INFO)) { for (int k = 0; k < childLevel2.getChildNodes().getLength(); k++) { Node childLevel3 = (Node) childLevel2.getChildNodes().item(k); if (childLevel3.getNodeName().equals(XSDTransformer.TAG_HAS_PROPERTY)) { if (childLevel3.getAttributes() != null) { String attrName = null; Node attribute = childLevel3.getAttributes().getNamedItem(XSDTransformer.ATTR_NAME); if (attribute != null) { attrName = attribute.getNodeValue(); elementProperties.put(attrName, childLevel3.getAttributes().getNamedItem(XSDTransformer.ATTR_VALUE) .getNodeValue()); if (attrName.equals(XSDTransformer.FIELD_SHORT_NAME)) { fieldShortName = childLevel3.getAttributes().getNamedItem(XSDTransformer.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(XSDTransformer.FIELD_COLUMN_NAME)) { fieldColumnName = childLevel3.getAttributes().getNamedItem(XSDTransformer.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(XSDTransformer.FIELD_DATA_TYPE)) { fieldDataType = childLevel3.getAttributes().getNamedItem(XSDTransformer.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(XSDTransformer.FIELD_FMT)) { fieldFormat = childLevel3.getAttributes().getNamedItem(XSDTransformer.ATTR_VALUE)
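
    Since the listing above is cut off before the class's public entry point appears, here is only a hedged sketch of what an extracted interface might look like; the method names and signatures are assumptions based on the private retrieveDoc/transformDoc methods visible in the listing, not the actual API:

        import org.w3c.dom.Document;

        // Hypothetical interface; XSDTransformer would declare "implements XsdTransformation"
        // and widen the matching methods from private to public.
        public interface XsdTransformation {
            // Parse the XSD at the given path into a DOM document (assumed entry point).
            Document retrieveDoc(String docPath);

            // Apply the name/shortname transformation and return the modified document.
            Document transformDoc(Document xsdDoc);
        }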

    Read the article

  • Why does this JSON fail only on the iPhone?

    - by 4thSpace
    I'm using the JSON framework from http://code.google.com/p/json-framework. The JSON below fails with this error:

        -JSONValue failed. Error trace is: (
            Error Domain=org.brautaset.JSON.ErrorDomain Code=5 UserInfo=0x124a20 "Unescaped control character '0xd'",
            Error Domain=org.brautaset.JSON.ErrorDomain Code=3 UserInfo=0x11bc20 "Object value expected for key: Phone",
            Error Domain=org.brautaset.JSON.ErrorDomain Code=3 UserInfo=0x1ac6e0 "Expected value while parsing array"
        )

    JSON being parsed:

        [{"id" :"2422","name" :"BusinessA","address" :"7100 U.S. 50","lat" :"38.342945","lng" :"-90.390701","CityId" :"11","StateId" :"38","CategoryId" :"1","Phone" :"(200) 200-2000","zip" :"00010"}]

    I think 0xd represents a carriage return. When I put the above JSON into TextWrangler, I don't see any carriage returns. I got the JSON by doing "po myjson" in the debugger. It passes this validator: http://json.parser.online.fr/. Can anyone see what the problem may be?

    Read the article

  • Exception while running Quartz Scheduler program

    - by Sunny Mate
    Hi, I am getting the following exception while running my Quartz scheduler program. Below is the exception trace:

        Mar 26, 2010 2:54:24 PM org.quartz.core.QuartzScheduler start
        INFO: Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED started.
        Exception in thread "main" java.lang.IllegalArgumentException: Job class must implement the Job interface.
            at org.quartz.JobDetail.setJobClass(JobDetail.java:291)
            at org.quartz.JobDetail.<init>(JobDetail.java:138)
            at com.Quarrtz.RanchSchedule.main(RanchSchedule.java:18)

    I have included Quartz-1.7.2.jar and Quartz-all-1.7.2.jar in my classpath, along with commons-logging 1.1.jar, and I am using JDK 6. This is an example I copied and pasted from JavaRanch (http://www.javaranch.com/journal/200711/combining_spring_and_quartz.html), the first example on that page. Any help please? Thanks in advance, Sunny Mate
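
    For reference, the exception means exactly what it says: the class handed to JobDetail must implement org.quartz.Job. A minimal sketch for Quartz 1.7 (the class name here is hypothetical):

        import org.quartz.Job;
        import org.quartz.JobExecutionContext;
        import org.quartz.JobExecutionException;

        public class RanchJob implements Job {
            // JobDetail.setJobClass() rejects any class that does not implement org.quartz.Job.
            public void execute(JobExecutionContext context) throws JobExecutionException {
                System.out.println("Job fired at " + context.getFireTime());
            }
        }

    With that in place, new JobDetail("ranchJob", Scheduler.DEFAULT_GROUP, RanchJob.class) should be accepted.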

    Read the article

  • Breakpoints are ignored when debugging in Visual Studio 2008 on 64-bit system

    - by Arnold Zokas
    I'm trying to debug an ASP.NET web application in this environment:

    - Windows Server Standard 2008 SP2 x64
    - Single-core CPU
    - 4GB RAM
    - Visual Studio 2008 with Remote Debugger SP1
    - .NET 3.5
    - Web application running in IIS in 64-bit mode

    The code I am trying to debug is a simple event handler with some basic sequential code. What I observe is that breakpoints get randomly ignored and VS often exits debug mode when I try to step through the code. The code is compiled in debug mode. I've tried all the usual steps: iisreset, reboot VM, rebuild solution, reinstall Visual Studio. Any ideas?

    Read the article

  • How to build gnu `libiconv` on & for windows?

    - by claws
    Hello, I want to build GNU libiconv as a static library (a *.LIB file) on Windows, to be used with other libraries in Visual C++. The other libraries I'm using are built with the "MultiThreaded DLL" (/MD) runtime option, so I need to build libiconv with the same option. The problem is that libiconv uses the GNU build system, and I want to compile with the /MD option. You can see the source structure of libiconv here: http://cvs.savannah.gnu.org/viewvc/libiconv/?root=libiconv. Mr. Zlatkovic maintains the Windows port of GNU libiconv for libxml2; you can see it here: ftp://xmlsoft.org/libxml2/win32/iconv-1.9.2.win32.zip. I cannot use his port, because I need to build from the latest version, libiconv-1.13. I wonder how he ported it. Can someone please tell me how to build a *.lib from this source and compile against it using MSVC?

    Read the article

  • Simple JSF application in MyEclipse using JBoss

    - by chetan
    I want to run a simple JSF application, but after configuring JBoss for my application I got the following error:

        14:58:38,328 ERROR [[/web3demo]] Exception sending context initialized event to listener instance of class org.jboss.web.jsf.integration.config.JBossJSFConfigureListener
        com.sun.faces.config.ConfigurationException: CONFIGURATION FAILED!
        Source Document: jndi:/localhost/web3demo/WEB-INF/faces-config.xml
        Cause: Class 'com.icesoft.faces.facelets.D2DFaceletViewHandler' is missing a runtime dependency: java.lang.NoClassDefFoundError: com/sun/facelets/impl/ResourceResolver

        DEPLOYMENTS IN ERROR:
        Deployment "vfsfile:/E:/ctn%20sodtware/jboss-5.0.1.GA/server/default/deploy/2832010.war/" is in error due to the following reason(s): org.xml.sax.SAXException: cvc-datatype-valid.1.2.1: '2832010' is not a valid value for 'NCName'. @ vfsfile:/E:/ctn%20sodtware/jboss-5.0.1.GA/server/default/deploy/2832010.war/WEB-INF/web.xml[5,16]
        Deployment "vfsfile:/E:/ctn%20sodtware/jboss-5.0.1.GA/server/default/deploy/web3demo.war/" is in error due to the following reason(s): org.jboss.deployers.spi.DeploymentException: URL file:/E:/ctn sodtware/jboss-5.0.1.GA/server/default/deploy/web3demo.war/ deployment failed

    This is a simple application where I simply run the default index file without creating any other JSF file.

    Read the article

  • UTF-8 formatting in SPARQL

    - by john
    How can I "say" to SPARQL that ?churchname is in UTF-8 formatting? Because the response looks like: Pražský hrad

        PREFIX lgv: <http://linkedgeodata.org/vocabulary#>
        PREFIX abc: <http://dbpedia.org/class/yago/>
        SELECT ?churchname
        WHERE {
          <http://dbpedia.org/resource/Prague> geo:geometry ?gm .
          ?church a lgv:castle .
          ?church geo:geometry ?churchgeo .
          ?church lgv:name ?churchname .
          FILTER ( bif:st_intersects (?churchgeo, ?gm, 10))
        }
        GROUP BY ?churchname
        ORDER BY ?churchname

    Read the article

  • Problem using Hibernate-Search

    - by KCore
    Hi, I am using Hibernate Search in my application. It was well configured and running perfectly until some time back, when it suddenly stopped working. The reason, as far as I can tell, is the number of my model (bean) classes: I have some 90 classes, which I add while building my Hibernate Configuration. When I disable Hibernate Search (remove the search annotations and use Configuration instead of AnnotationConfiguration) and start my application, it works fine. But when I enable search, the same app just hangs. I tried debugging and found the exact place where it hangs: after adding all the classes to my AnnotationConfiguration object, the call to cfg.buildSessionFactory() never returns (I have waited for hours!). Also, when I decrease the number of model classes (say to half, i.e. 50), it comes out of that statement and the application works fine. Can someone tell me why this is happening? My Hibernate versions are:

        hibernate-core-3.3.1.GA.jar
        hibernate-annotations-3.4.0.GA.jar
        hibernate-commons-annotations-3.1.0.GA.jar
        hibernate-search-3.1.0.GA.jar

    Also, if I need to avoid using AnnotationConfiguration, I read that I have to configure the search event listeners explicitly. Can anyone list all the necessary listeners and their respective classes? (I tried the standard ones given in Hibernate Search books, but they give me a ClassNotFoundException, and I have all the necessary libs on the classpath.) Here are the last few lines of the Hibernate trace I managed to pull:

        16:09:32,814 INFO AnnotationConfiguration:369 - Hibernate Validator not found: ignoring
        16:09:32,892 INFO ConnectionProviderFactory:95 - Initializing connection provider: org.hibernate.connection.C3P0ConnectionProvider
        16:09:32,895 INFO C3P0ConnectionProvider:103 - C3P0 using driver: com.mysql.jdbc.Driver at URL: jdbc:mysql://localhost:3306/autolinkcrmcom_data
        16:09:32,898 INFO C3P0ConnectionProvider:104 - Connection properties: {user=root, password=****}
        16:09:32,900 INFO C3P0ConnectionProvider:107 - autocommit mode: false
        16:09:33,694 INFO SettingsFactory:116 - RDBMS: MySQL, version: 5.1.37-1ubuntu5.1
        16:09:33,696 INFO SettingsFactory:117 - JDBC driver: MySQL-AB JDBC Driver, version: mysql-connector-java-3.1.10 ( $Date: 2005/05/19 15:52:23 $, $Revision: 1.1.2.2 $ )
        16:09:33,701 INFO Dialect:175 - Using dialect: org.hibernate.dialect.MySQLDialect
        16:09:33,707 INFO TransactionFactoryFactory:59 - Using default transaction strategy (direct JDBC transactions)
        16:09:33,709 INFO TransactionManagerLookupFactory:80 - No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended)
        16:09:33,711 INFO SettingsFactory:170 - Automatic flush during beforeCompletion(): disabled
        16:09:33,714 INFO SettingsFactory:174 - Automatic session close at end of transaction: disabled
        16:09:33,716 INFO SettingsFactory:181 - JDBC batch size: 15
        16:09:33,719 INFO SettingsFactory:184 - JDBC batch updates for versioned data: disabled
        16:09:33,721 INFO SettingsFactory:189 - Scrollable result sets: enabled
        16:09:33,723 DEBUG SettingsFactory:193 - Wrap result sets: disabled
        16:09:33,725 INFO SettingsFactory:197 - JDBC3 getGeneratedKeys(): enabled
        16:09:33,727 INFO SettingsFactory:205 - Connection release mode: auto
        16:09:33,730 INFO SettingsFactory:229 - Maximum outer join fetch depth: 2
        16:09:33,732 INFO SettingsFactory:232 - Default batch fetch size: 1000
        16:09:33,735 INFO SettingsFactory:236 - Generate SQL with comments: disabled
        16:09:33,737 INFO SettingsFactory:240 - Order SQL updates by primary key: disabled
        16:09:33,740 INFO SettingsFactory:244 - Order SQL inserts for batching: disabled
        16:09:33,742 INFO SettingsFactory:420 - Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory
        16:09:33,744 INFO ASTQueryTranslatorFactory:47 - Using ASTQueryTranslatorFactory
        16:09:33,747 INFO SettingsFactory:252 - Query language substitutions: {}
        16:09:33,750 INFO SettingsFactory:257 - JPA-QL strict compliance: disabled
        16:09:33,752 INFO SettingsFactory:262 - Second-level cache: enabled
        16:09:33,754 INFO SettingsFactory:266 - Query cache: disabled
        16:09:33,757 INFO SettingsFactory:405 - Cache region factory : org.hibernate.cache.impl.bridge.RegionFactoryCacheProviderBridge
        16:09:33,759 INFO RegionFactoryCacheProviderBridge:61 - Cache provider: net.sf.ehcache.hibernate.EhCacheProvider
        16:09:33,762 INFO SettingsFactory:276 - Optimize cache for minimal puts: disabled
        16:09:33,764 INFO SettingsFactory:285 - Structured second-level cache entries: disabled
        16:09:33,766 INFO SettingsFactory:314 - Statistics: disabled
        16:09:33,769 INFO SettingsFactory:318 - Deleted entity synthetic identifier rollback: disabled
        16:09:33,771 INFO SettingsFactory:333 - Default entity-mode: pojo
        16:09:33,774 INFO SettingsFactory:337 - Named query checking : enabled
        16:09:33,869 INFO Version:20 - Hibernate Search 3.1.0.GA
        16:09:35,134 DEBUG DocumentBuilderIndexedEntity:157 - Field selection in projections is set to false for entity com.xyz.abc.
        recognized hibernaterecognized hibernaterecognized hibernaterecognized hibernaterecognized hibernaterecognized hibernaterecognized hibernaterecognized hibernaterecognized hibernaterecognized hibernateDocumentBuilderIndexedEntity

    I don't know what the last line indicates (the repeated "recognized hibernate" text). After that line it does not do anything (no further trace either) and just hangs.
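
    In case it helps, here is a minimal sketch of the kind of explicit listener registration described for plain Configuration with Hibernate Core 3.3 and Hibernate Search 3.1; treat the exact set of listeners as an assumption to verify against the 3.1 reference guide (collection-event listeners may also be required):

        import org.hibernate.cfg.Configuration;
        import org.hibernate.event.PostDeleteEventListener;
        import org.hibernate.event.PostInsertEventListener;
        import org.hibernate.event.PostUpdateEventListener;
        import org.hibernate.search.event.FullTextIndexEventListener;

        public final class SearchListenerSetup {
            // Registers the Hibernate Search indexing listener on a plain Configuration,
            // which AnnotationConfiguration normally wires up automatically.
            public static Configuration registerSearchListeners(Configuration cfg) {
                FullTextIndexEventListener listener = new FullTextIndexEventListener();
                cfg.getEventListeners().setPostInsertEventListeners(new PostInsertEventListener[] { listener });
                cfg.getEventListeners().setPostUpdateEventListeners(new PostUpdateEventListener[] { listener });
                cfg.getEventListeners().setPostDeleteEventListeners(new PostDeleteEventListener[] { listener });
                return cfg;
            }
        }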

    Read the article

  • How to find out memory layout of your data structure implementation on Linux 64bit machine

    - by ajay
    In this article, http://cacm.acm.org/magazines/2010/7/95061-youre-doing-it-wrong/fulltext, the author talks about the memory layouts of two data structures, the binary heap and the B-heap, and compares how one has a better memory layout than the other:

        http://deliveryimages.acm.org/10.1145/1790000/1785434/figs/f5.jpg
        http://deliveryimages.acm.org/10.1145/1790000/1785434/figs/f6.jpg

    I want to get hands-on experience with this. I have an implementation of an N-ary tree and I want to find out the memory layout of my data structure. What is the best way to come up with a memory layout like the ones in the article? Secondly, I think it is easier to identify the memory layout if it is an array-based implementation. If the implementation of a tree uses pointers, then what tools do we have, or what kind of approach is required, to map its memory layout? Thanks!

    Read the article

  • XML sitemap 404 page and .htaccess issue (phpvibe)

    - by Lachit
    I set up an XML sitemap for my website. In .htaccess, if I comment out the line (RewriteRule ^(.*)/?$ index.php?rp=$1 [L]) then the XML sitemap works fine, but my website pages don't. Please check these links:

        www.streaminghub.org/test/sitemap.php -- working
        www.streaminghub.org/test/sitemap_1.xml -- working
        www.streaminghub.org/test/video/5439/samsung-nf110-english-hands-on-at-ifa-2010/ -- not working

    My .htaccess file:

        <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteBase /test/
        RewriteRule ^embed/([^/]*)/$ /embed.php?id=$1 [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        **RewriteRule ^(.*)/?$ index.php?rp=$1 [L]**
        **RewriteRule ^sitemaps.xml sitemap.php**
        **RewriteRule ^sitemap_(.*).xml?$ sitemap.php?page=$1 [L]**
        </IfModule>

    I think it's an .htaccess issue; I am using the phpVibe script. I don't have much knowledge of .htaccess, so kindly help me figure out this issue. Thank you.

    Read the article

  • XslCompiledTransform eats my DOCTYPE!

    - by pukipuki
    I made an XSLT template with an output instruction for the DOCTYPE:

        <xsl:output media-type="text/html" method="html" encoding="windows-1251" doctype-public="-//W3C//DTD XHTML 1.1//EN" doctype-system="http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd" omit-xml-declaration="yes"/>

    In XSL debugging I receive the correct declaration at the top of the HTML:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">

    But when I use XslCompiledTransform etc., the output starts from <html xmlns="http://www.w3.org/1999/xhtml">, so the DOCTYPE is missing. Why? The relevant properties of XslCompiledTransform are initialized just as I set them in the output instruction. How can I get the DOCTYPE from XslCompiledTransform?

    Read the article

  • Java type for date/time when using Oracle Date with Hibernate

    - by Marcus
    We have an Oracle DATE column. At first, in our Java/Hibernate class, we were using java.sql.Date. This worked, but it didn't seem to store any time information in the database when we saved, so I changed the Java data type to Timestamp. Now we get this error:

        org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor#0' defined in class path resource [margin-service-domain-config.xml]: Initialization of bean failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'sessionFactory' defined in class path resource [m-service-domain-config.xml]: Invocation of init method failed; nested exception is org.hibernate.HibernateException: Wrong column type: CREATE_TS, expected: timestamp

    Any ideas on how to map an Oracle DATE while retaining the time portion? Update: I can get it to work if I use the Oracle Timestamp data type, but I don't want that level of precision ideally. I just want the basic Oracle DATE.
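
    For what it's worth, here is a hedged, annotation-based sketch (assuming JPA annotations and a column named CREATE_TS; the entity name is made up) of one commonly suggested mapping: keep java.util.Date on the Java side, map it with TemporalType.TIMESTAMP so the time of day survives, and use columnDefinition to keep the generated DDL as Oracle DATE. Whether this also satisfies hbm2ddl validation depends on the dialect, so treat it as a starting point rather than a confirmed fix:

        import java.util.Date;

        import javax.persistence.Column;
        import javax.persistence.Entity;
        import javax.persistence.Id;
        import javax.persistence.Temporal;
        import javax.persistence.TemporalType;

        @Entity
        public class AuditedRecord {    // hypothetical entity for illustration
            @Id
            private Long id;

            // Oracle DATE stores date and time to the second; the TIMESTAMP temporal
            // mapping keeps the time portion, columnDefinition keeps the DDL as DATE.
            @Temporal(TemporalType.TIMESTAMP)
            @Column(name = "CREATE_TS", columnDefinition = "DATE")
            private Date createTs;

            public Date getCreateTs() { return createTs; }
            public void setCreateTs(Date createTs) { this.createTs = createTs; }
        }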

    Read the article

  • Copy task in Visual Studio 2008

    - by Maurizio Reginelli
    This is a question related to a previous question I posted (see here). I need to configure a .csproj file to copy some files from one directory to another (let me call them SOURCE and DESTINATION). I also need to change the SOURCE path depending on the configuration I'm using. For example, if I compile my project in Debug mode, the SOURCE path must contain a subfolder called Debug. I tried the solution proposed by Schmitt in the previous post, using $(ConfigurationName) to set the dynamic directory in the SOURCE path. When I opened the solution containing that project, a list of links to the source files appeared in the main tree of the project, and they were correctly related to Debug mode. But when I changed to Release mode, I saw that the paths of the linked source files were still set to the Debug version. Is there a way to specify a parameter in the Include attribute of the SourceFiles element? Thank you.

    Read the article

  • Grails multiple databases

    - by srinath
    Hi, how can we write queries across 2 databases? I installed the datasources plugin, and the domain classes are:

        class Organization {
            long id
            long company_id
            String name
            static mapping = {
                version false
                table 'organization_'
                id column : 'organizationId'
                company_id column : 'companyId'
                name column : 'name'
            }
        }

        class Assoc {
            Integer id
            Integer association_id
            Integer organization_id
            static mapping = {
                version false
                table 'assoc'
                id column : 'ASSOC_ID'
                association_id column : 'ASSOCIATION_ID'
                organization_id column : 'ORGANIZATION_ID'
            }
        }

    This works:

        def org = Organization.list()
        def assoc = Assoc.list()

    and this does not:

        def query = Organization.executeQuery("SELECT o.name as name, o.id FROM Organization o WHERE o.id IN (SELECT a.organization_id FROM Assoc a )")

    Error:

        org.hibernate.hql.ast.QuerySyntaxException: Assoc is not mapped [SELECT o.name as name, o.id FROM org.com.domain.Organization o WHERE o.id IN (SELECT a.organization_id FROM AssocOrg a )]

    How can we connect to 2 databases using a single query? Thanks in advance.

    Read the article

  • Eclipse + Git: Can't add files to my repository

    - by Stegeman
    I use Eclipse (Helios) with PDT and EGit. I have a project without versioning, so I created a git repository for it via Team -> Share Project. When I try to add the files of my project to the repository (Team -> Add), I get an exception:

        Failed to add resource to index
        Failed to add resource to index
        Exception caught during execution of add command

    When I add the files manually on the command line, everything works fine. Any ideas?

    EDIT: The error Eclipse gives is:

        Caused by: org.eclipse.jgit.errors.ObjectWritingException: Unable to create new object: Z:\eage_layout\.git\objects\60\f30dd232bd6ddaeb198fb11400c2613a072189
            at org.eclipse.jgit.storage.file.ObjectDirectoryInserter.insert(ObjectDirectoryInserter.java:100)
            at org.eclipse.jgit.api.AddCommand.call(AddCommand.java:177)

    The code I'm running is located on a virtual machine running CentOS. I'm working on a Windows machine and using a Samba share to access the code on the virtual machine. I've set the filesystem permissions on my .git directory to 777, but it still does not work.

    Read the article

  • MySQL "OR MATCH" hangs (long pause with no answer) on multiple tables

    - by Kerry
    After learning how to do MySQL full-text search, the recommended solution for multiple tables was OR MATCH followed by the other database call; you can see that in my query below. When I run it, it just gets stuck in a "busy" state, and I can't access the MySQL database.

        SELECT a.`product_id`, a.`name`, a.`slug`, a.`description`, b.`list_price`, b.`price`,
               c.`image`, c.`swatch`, e.`name` AS industry,
               MATCH( a.`name`, a.`sku`, a.`description` ) AGAINST ( '%s' IN BOOLEAN MODE ) AS relevance
        FROM `products` AS a
        LEFT JOIN `website_products` AS b ON (a.`product_id` = b.`product_id`)
        LEFT JOIN ( SELECT `product_id`, `image`, `swatch` FROM `product_images` WHERE `sequence` = 0) AS c
               ON (a.`product_id` = c.`product_id`)
        LEFT JOIN `brands` AS d ON (a.`brand_id` = d.`brand_id`)
        INNER JOIN `industries` AS e ON (a.`industry_id` = e.`industry_id`)
        WHERE b.`website_id` = %d
          AND b.`status` = %d
          AND b.`active` = %d
          AND MATCH( a.`name`, a.`sku`, a.`description` ) AGAINST ( '%s' IN BOOLEAN MODE )
          OR MATCH ( d.`name` ) AGAINST ( '%s' IN BOOLEAN MODE )
        GROUP BY a.`product_id`
        ORDER BY relevance DESC
        LIMIT 0, 9

    Any help would be greatly appreciated.

    Read the article
