Search Results



  • Branding Support for TopComponents

    - by Geertjan
    In yesterday's blog entry, you saw how a menu item can be created, in this case with the label "Brand", specifically for Java classes that extend TopComponent. And it's not about the name of the class: the "Brand" menu item is available not only for a class named "BlaTopComponent", but also for a class simply named "Bla". Both the files BlaTopComponent.java and Bla.java have the "Brand" menu item available, because both extend the "org.openide.windows.TopComponent" class, as shown yesterday.

    Now we continue by creating a new JPanel, with checkboxes for each part of a TopComponent that we consider to be brandable. In my case, at deployment, clicking the Brand menu item for the Bla class opens that panel in a dialog. When the user (who, in this case, is a developer) clicks OK, a constructor is created and the related client properties are added, depending on which of the checkboxes are selected:

    public Bla() {
        putClientProperty(TopComponent.PROP_SLIDING_DISABLED, false);
        putClientProperty(TopComponent.PROP_UNDOCKING_DISABLED, true);
        putClientProperty(TopComponent.PROP_MAXIMIZATION_DISABLED, false);
        putClientProperty(TopComponent.PROP_CLOSING_DISABLED, true);
        putClientProperty(TopComponent.PROP_DRAGGING_DISABLED, false);
    }

    At this point, no check is done to see whether a constructor already exists, nor whether the client properties are already available. That's for an upcoming blog entry! Right now, the constructor is always created, regardless of whether it already exists, and the client properties are always added. The key to all this is the 'actionPerformed' of the action on the TopComponent, which was left empty yesterday. We start by creating a JDialog from the JPanel and we retrieve the selected state of the checkboxes defined in the JPanel:

    @Override
    public void actionPerformed(ActionEvent ev) {
        String msg = dobj.getName() + " Branding";
        final BrandTopComponentPanel brandTopComponentPanel = new BrandTopComponentPanel();
        dd = new DialogDescriptor(brandTopComponentPanel, msg, true, new ActionListener() {
            @Override
            public void actionPerformed(ActionEvent e) {
                Object result = dd.getValue();
                if (DialogDescriptor.OK_OPTION == result) {
                    isClosing = brandTopComponentPanel.getClosingCheckBox().isSelected();
                    isDragging = brandTopComponentPanel.getDraggingCheckBox().isSelected();
                    isMaximization = brandTopComponentPanel.getMaximizationCheckBox().isSelected();
                    isSliding = brandTopComponentPanel.getSlidingCheckBox().isSelected();
                    isUndocking = brandTopComponentPanel.getUndockingCheckBox().isSelected();
                    JavaSource javaSource = JavaSource.forFileObject(dobj.getPrimaryFile());
                    try {
                        javaSource.runUserActionTask(new ScanTask(javaSource), true);
                    } catch (IOException ex) {
                        Exceptions.printStackTrace(ex);
                    }
                }
            }
        });
        DialogDisplayer.getDefault().createDialog(dd).setVisible(true);
    }

    Then we start a scan process, which introduces the branding. We're already doing a scan process for identifying whether a class is a TopComponent.
    So, let's combine those two scans, branching out based on which one we're doing:

    private class ScanTask implements Task<CompilationController> {

        private BrandTopComponentAction action = null;
        private JavaSource js = null;

        private ScanTask(JavaSource js) {
            this.js = js;
        }

        private ScanTask(BrandTopComponentAction action) {
            this.action = action;
        }

        @Override
        public void run(final CompilationController info) throws Exception {
            info.toPhase(Phase.ELEMENTS_RESOLVED);
            if (action != null) {
                new EnableIfTopComponentScanner(info, action).scan(
                        info.getCompilationUnit(), null);
            } else {
                introduceBranding();
            }
        }

        private void introduceBranding() throws IOException {
            CancellableTask task = new CancellableTask<WorkingCopy>() {
                @Override
                public void run(WorkingCopy workingCopy) throws IOException {
                    workingCopy.toPhase(Phase.RESOLVED);
                    CompilationUnitTree cut = workingCopy.getCompilationUnit();
                    TreeMaker treeMaker = workingCopy.getTreeMaker();
                    for (Tree typeDecl : cut.getTypeDecls()) {
                        if (Tree.Kind.CLASS == typeDecl.getKind()) {
                            ClassTree clazz = (ClassTree) typeDecl;
                            ModifiersTree methodModifiers = treeMaker.Modifiers(Collections.<Modifier>singleton(Modifier.PUBLIC));
                            MethodTree newMethod =
                                    treeMaker.Method(methodModifiers,
                                    "<init>",
                                    treeMaker.PrimitiveType(TypeKind.VOID),
                                    Collections.<TypeParameterTree>emptyList(),
                                    Collections.EMPTY_LIST,
                                    Collections.<ExpressionTree>emptyList(),
                                    "{ putClientProperty(TopComponent.PROP_SLIDING_DISABLED, " + isSliding + ");\n" +
                                    "  putClientProperty(TopComponent.PROP_UNDOCKING_DISABLED, " + isUndocking + ");\n" +
                                    "  putClientProperty(TopComponent.PROP_MAXIMIZATION_DISABLED, " + isMaximization + ");\n" +
                                    "  putClientProperty(TopComponent.PROP_CLOSING_DISABLED, " + isClosing + ");\n" +
                                    "  putClientProperty(TopComponent.PROP_DRAGGING_DISABLED, " + isDragging + "); }\n",
                                    null);
                            ClassTree modifiedClazz = treeMaker.addClassMember(clazz, newMethod);
                            workingCopy.rewrite(clazz, modifiedClazz);
                        }
                    }
                }

                @Override
                public void cancel() {
                }
            };
            ModificationResult result = js.runModificationTask(task);
            result.commit();
        }
    }

    private static class EnableIfTopComponentScanner extends TreePathScanner<Void, Void> {

        private CompilationInfo info;
        private final AbstractAction action;

        public EnableIfTopComponentScanner(CompilationInfo info, AbstractAction action) {
            this.info = info;
            this.action = action;
        }

        @Override
        public Void visitClass(ClassTree t, Void v) {
            Element el = info.getTrees().getElement(getCurrentPath());
            if (el != null) {
                TypeElement te = (TypeElement) el;
                if (te.getSuperclass().toString().equals("org.openide.windows.TopComponent")) {
                    action.setEnabled(true);
                } else {
                    action.setEnabled(false);
                }
            }
            return null;
        }
    }
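    One caveat worth noting about EnableIfTopComponentScanner: TypeElement.getSuperclass() returns only the direct superclass, so the menu item is enabled only for classes that extend org.openide.windows.TopComponent directly. If indirect subclasses should be brandable too, the check could walk the whole superclass chain. Here is a minimal sketch of that variation (my addition, not from the original entry), using the standard javax.lang.model.type API:

    @Override
    public Void visitClass(ClassTree t, Void v) {
        Element el = info.getTrees().getElement(getCurrentPath());
        if (el != null) {
            boolean isTopComponent = false;
            // Walk up the superclass chain instead of checking only the direct parent.
            // Requires javax.lang.model.type.TypeMirror and DeclaredType imports.
            TypeMirror superType = ((TypeElement) el).getSuperclass();
            while (superType.getKind() == TypeKind.DECLARED) {
                TypeElement superElement = (TypeElement) ((DeclaredType) superType).asElement();
                if ("org.openide.windows.TopComponent".equals(superElement.getQualifiedName().toString())) {
                    isTopComponent = true;
                    break;
                }
                superType = superElement.getSuperclass();
            }
            action.setEnabled(isTopComponent);
        }
        return null;
    }

    The loop terminates naturally because java.lang.Object's superclass has kind NONE rather than DECLARED.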


  • Added splash screen code to my package

    - by Youssef
    Please, I need support adding splash screen code to my package:

    /*
     * T24_Transformer_FormView.java
     */
    package t24_transformer_form;

    import org.jdesktop.application.Action;
    import org.jdesktop.application.ResourceMap;
    import org.jdesktop.application.SingleFrameApplication;
    import org.jdesktop.application.FrameView;
    import org.jdesktop.application.TaskMonitor;
    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import javax.swing.filechooser.FileNameExtensionFilter;
    import javax.swing.filechooser.FileFilter;
    // old T24 Transformer imports
    import java.io.File;
    import java.io.FileWriter;
    import java.io.StringWriter;
    import java.text.SimpleDateFormat;
    import java.util.ArrayList;
    import java.util.Date;
    import java.util.HashMap;
    import java.util.Iterator;
    //import java.util.Properties;
    import java.util.StringTokenizer;
    import javax.swing.*;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.Result;
    import javax.xml.transform.Source;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.apache.log4j.Logger;
    import org.apache.log4j.PropertyConfigurator;
    import org.w3c.dom.Document;
    import org.w3c.dom.DocumentFragment;
    import org.w3c.dom.Element;
    import org.w3c.dom.Node;
    import org.w3c.dom.NodeList;
    import com.ejada.alinma.edh.xsdtransform.util.ConfigKeys;
    import com.ejada.alinma.edh.xsdtransform.util.XSDElement;
    import com.sun.org.apache.xml.internal.serialize.OutputFormat;
    import com.sun.org.apache.xml.internal.serialize.XMLSerializer;

    /*
     * The application's main frame.
     */
    public class T24_Transformer_FormView extends FrameView {

        /**
         * static holders for application-level utilities
         */
        //private static Properties appProps;
        private static Logger appLogger;

        /**
         *
         */
        private StringBuffer columnsCSV = null;
        private ArrayList<String> singleValueTableColumns = null;
        private HashMap<String, String> multiValueTablesSQL = null;
        private HashMap<Object, HashMap<String, Object>> groupAttrs = null;
        private ArrayList<XSDElement> xsdElementsList = null;

        /**
         * initialization
         */
        private void init() /*throws Exception*/ {
            // init the properties object
            //FileReader in = new FileReader(appConfigPropsPath);
            //appProps.load(in);
            // log4j.properties constant
            String PROP_LOG4J_CONFIG_FILE = "log4j.properties";
            // init the logger
            if ((PROP_LOG4J_CONFIG_FILE != null) && (!PROP_LOG4J_CONFIG_FILE.equals(""))) {
                PropertyConfigurator.configure(PROP_LOG4J_CONFIG_FILE);
                if (appLogger == null) {
                    appLogger = Logger.getLogger(T24_Transformer_FormView.class.getName());
                }
                appLogger.info("Application initialization successful.");
            }
            columnsCSV = new StringBuffer(ConfigKeys.FIELD_TAG + "," + ConfigKeys.FIELD_NUMBER + ","
                + ConfigKeys.FIELD_DATA_TYPE + "," + ConfigKeys.FIELD_FMT + "," + ConfigKeys.FIELD_LEN + ","
                + ConfigKeys.FIELD_INPUT_LEN + "," + ConfigKeys.FIELD_GROUP_NUMBER + ","
                + ConfigKeys.FIELD_MV_GROUP_NUMBER + "," + ConfigKeys.FIELD_SHORT_NAME + ","
                + ConfigKeys.FIELD_NAME + "," + ConfigKeys.FIELD_COLUMN_NAME + "," + ConfigKeys.FIELD_GROUP_NAME + ","
                + ConfigKeys.FIELD_MV_GROUP_NAME + "," + ConfigKeys.FIELD_JUSTIFICATION + "," + ConfigKeys.FIELD_TYPE + ","
                + ConfigKeys.FIELD_SINGLE_OR_MULTI + System.getProperty("line.separator"));
            singleValueTableColumns = new ArrayList<String>();
            singleValueTableColumns.add(ConfigKeys.COLUMN_XPK_ROW + ConfigKeys.DELIMITER_COLUMN_TYPE +
ConfigKeys.DATA_TYPE_XSD_NUMERIC); multiValueTablesSQL = new HashMap<String, String>(); groupAttrs = new HashMap<Object, HashMap<String, Object>>(); xsdElementsList = new ArrayList<XSDElement>(); } /** * initialize the <code>DocumentBuilder</code> and read the XSD file * * @param docPath * @return the <code>Document</code> object representing the read XSD file */ private Document retrieveDoc(String docPath) { Document xsdDoc = null; File file = new File(docPath); try { DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder(); xsdDoc = builder.parse(file); } catch (Exception e) { appLogger.error(e.getMessage()); } return xsdDoc; } /** * perform the iteration/modification on the document * iterate to the level which contains all the elements (Single-Value, and Groups) and start processing each * * @param xsdDoc * @return */ private Document processDoc(Document xsdDoc) { ArrayList<Object> newElementsList = new ArrayList<Object>(); HashMap<String, Object> docAttrMap = new HashMap<String, Object>(); Element sequenceElement = null; Element schemaElement = null; // get document's root element NodeList nodes = xsdDoc.getChildNodes(); for (int i = 0; i < nodes.getLength(); i++) { if (ConfigKeys.TAG_SCHEMA.equals(nodes.item(i).getNodeName())) { schemaElement = (Element) nodes.item(i); break; } } // process the document (change single-value elements, collect list of new elements to be added) for (int i1 = 0; i1 < schemaElement.getChildNodes().getLength(); i1++) { Node childLevel1 = (Node) schemaElement.getChildNodes().item(i1); // <ComplexType> element if (childLevel1.getNodeName().equals(ConfigKeys.TAG_COMPLEX_TYPE)) { // first, get the main attributes and put it in the csv file for (int i6 = 0; i6 < childLevel1.getChildNodes().getLength(); i6++) { Node child6 = childLevel1.getChildNodes().item(i6); if (ConfigKeys.TAG_ATTRIBUTE.equals(child6.getNodeName())) { if (child6.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME) != null) { String attrName = child6.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME).getNodeValue(); if (((Element) child6).getElementsByTagName(ConfigKeys.TAG_SIMPLE_TYPE).getLength() != 0) { Node simpleTypeElement = ((Element) child6).getElementsByTagName(ConfigKeys.TAG_SIMPLE_TYPE) .item(0); if (((Element) simpleTypeElement).getElementsByTagName(ConfigKeys.TAG_RESTRICTION).getLength() != 0) { Node restrictionElement = ((Element) simpleTypeElement).getElementsByTagName( ConfigKeys.TAG_RESTRICTION).item(0); if (((Element) restrictionElement).getElementsByTagName(ConfigKeys.TAG_MAX_LENGTH).getLength() != 0) { Node maxLengthElement = ((Element) restrictionElement).getElementsByTagName( ConfigKeys.TAG_MAX_LENGTH).item(0); HashMap<String, String> elementProperties = new HashMap<String, String>(); elementProperties.put(ConfigKeys.FIELD_TAG, attrName); elementProperties.put(ConfigKeys.FIELD_NUMBER, "0"); elementProperties.put(ConfigKeys.FIELD_DATA_TYPE, ConfigKeys.DATA_TYPE_XSD_STRING); elementProperties.put(ConfigKeys.FIELD_FMT, ""); elementProperties.put(ConfigKeys.FIELD_NAME, attrName); elementProperties.put(ConfigKeys.FIELD_SHORT_NAME, attrName); elementProperties.put(ConfigKeys.FIELD_COLUMN_NAME, attrName); elementProperties.put(ConfigKeys.FIELD_SINGLE_OR_MULTI, "S"); elementProperties.put(ConfigKeys.FIELD_LEN, maxLengthElement.getAttributes().getNamedItem( ConfigKeys.ATTR_VALUE).getNodeValue()); elementProperties.put(ConfigKeys.FIELD_INPUT_LEN, maxLengthElement.getAttributes() .getNamedItem(ConfigKeys.ATTR_VALUE).getNodeValue()); 
constructElementRow(elementProperties); // add the attribute as a column in the single-value table singleValueTableColumns.add(attrName + ConfigKeys.DELIMITER_COLUMN_TYPE + ConfigKeys.DATA_TYPE_XSD_STRING + ConfigKeys.DELIMITER_COLUMN_TYPE + maxLengthElement.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE).getNodeValue()); // add the attribute as an element in the elements list addToElementsList(attrName, attrName); appLogger.debug("added attribute: " + attrName); } } } } } } // now, loop on the elements and process them for (int i2 = 0; i2 < childLevel1.getChildNodes().getLength(); i2++) { Node childLevel2 = (Node) childLevel1.getChildNodes().item(i2); // <Sequence> element if (childLevel2.getNodeName().equals(ConfigKeys.TAG_SEQUENCE)) { sequenceElement = (Element) childLevel2; for (int i3 = 0; i3 < childLevel2.getChildNodes().getLength(); i3++) { Node childLevel3 = (Node) childLevel2.getChildNodes().item(i3); // <Element> element if (childLevel3.getNodeName().equals(ConfigKeys.TAG_ELEMENT)) { // check if single element or group if (isGroup(childLevel3)) { processGroup(childLevel3, true, null, null, docAttrMap, xsdDoc, newElementsList); // insert a new comment node with the contents of the group tag sequenceElement.insertBefore(xsdDoc.createComment(serialize(childLevel3)), childLevel3); // remove the group tag sequenceElement.removeChild(childLevel3); } else { processElement(childLevel3); } } } } } } } // add new elements // this step should be after finishing processing the whole document. when you add new elements to the document // while you are working on it, those new elements will be included in the processing. We don't need that! for (int i = 0; i < newElementsList.size(); i++) { sequenceElement.appendChild((Element) newElementsList.get(i)); } // write the new required attributes to the schema element Iterator<String> attrIter = docAttrMap.keySet().iterator(); while(attrIter.hasNext()) { Element attr = (Element) docAttrMap.get(attrIter.next()); Element newAttrElement = xsdDoc.createElement(ConfigKeys.TAG_ATTRIBUTE); appLogger.debug("appending attr. [" + attr.getAttribute(ConfigKeys.ATTR_NAME) + "]..."); newAttrElement.setAttribute(ConfigKeys.ATTR_NAME, attr.getAttribute(ConfigKeys.ATTR_NAME)); newAttrElement.setAttribute(ConfigKeys.ATTR_TYPE, attr.getAttribute(ConfigKeys.ATTR_TYPE)); schemaElement.appendChild(newAttrElement); } return xsdDoc; } /** * add a new <code>XSDElement</code> with the given <code>name</code> and <code>businessName</code> to * the elements list * * @param name * @param businessName */ private void addToElementsList(String name, String businessName) { xsdElementsList.add(new XSDElement(name, businessName)); } /** * add the given <code>XSDElement</code> to the elements list * * @param element */ private void addToElementsList(XSDElement element) { xsdElementsList.add(element); } /** * check if the <code>element</code> sent is single-value element or group * element. the comparison depends on the children of the element. 
if found one of type * <code>ComplexType</code> then it's a group element, and if of type * <code>SimpleType</code> then it's a single-value element * * @param element * @return <code>true</code> if the element is a group element, * <code>false</code> otherwise */ private boolean isGroup(Node element) { for (int i = 0; i < element.getChildNodes().getLength(); i++) { Node child = (Node) element.getChildNodes().item(i); if (child.getNodeName().equals(ConfigKeys.TAG_COMPLEX_TYPE)) { // found a ComplexType child (Group element) return true; } else if (child.getNodeName().equals(ConfigKeys.TAG_SIMPLE_TYPE)) { // found a SimpleType child (Single-Value element) return false; } } return false; /* String attrName = null; if (element.getAttributes() != null) { Node attribute = element.getAttributes().getNamedItem(XSDTransformer.ATTR_NAME); if (attribute != null) { attrName = attribute.getNodeValue(); } } if (attrName.startsWith("g")) { // group element return true; } else { // single element return false; } */ } /** * process a group element. recursively, process groups till no more group elements are found * * @param element * @param isFirstLevelGroup * @param attrMap * @param docAttrMap * @param xsdDoc * @param newElementsList */ private void processGroup(Node element, boolean isFirstLevelGroup, Node parentGroup, XSDElement parentGroupElement, HashMap<String, Object> docAttrMap, Document xsdDoc, ArrayList<Object> newElementsList) { String elementName = null; HashMap<String, Object> groupAttrMap = new HashMap<String, Object>(); HashMap<String, Object> parentGroupAttrMap = new HashMap<String, Object>(); XSDElement groupElement = null; if (element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME) != null) { elementName = element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME).getNodeValue(); } appLogger.debug("processing group [" + elementName + "]..."); groupElement = new XSDElement(elementName, elementName); // get the attributes if a non-first-level-group // attributes are: groups's own attributes + parent group's attributes if (!isFirstLevelGroup) { // get the current element (group) attributes for (int i1 = 0; i1 < element.getChildNodes().getLength(); i1++) { if (ConfigKeys.TAG_COMPLEX_TYPE.equals(element.getChildNodes().item(i1).getNodeName())) { Node complexTypeNode = element.getChildNodes().item(i1); for (int i2 = 0; i2 < complexTypeNode.getChildNodes().getLength(); i2++) { if (ConfigKeys.TAG_ATTRIBUTE.equals(complexTypeNode.getChildNodes().item(i2).getNodeName())) { appLogger.debug("add group attr: " + ((Element) complexTypeNode.getChildNodes().item(i2)).getAttribute(ConfigKeys.ATTR_NAME)); groupAttrMap.put(((Element) complexTypeNode.getChildNodes().item(i2)).getAttribute(ConfigKeys.ATTR_NAME), complexTypeNode.getChildNodes().item(i2)); docAttrMap.put(((Element) complexTypeNode.getChildNodes().item(i2)).getAttribute(ConfigKeys.ATTR_NAME), complexTypeNode.getChildNodes().item(i2)); } } } } // now, get the parent's attributes parentGroupAttrMap = groupAttrs.get(parentGroup); if (parentGroupAttrMap != null) { Iterator<String> iter = parentGroupAttrMap.keySet().iterator(); while (iter.hasNext()) { String attrName = iter.next(); groupAttrMap.put(attrName, parentGroupAttrMap.get(attrName)); } } // add the attributes to the group element that will be added to the elements list Iterator<String> itr = groupAttrMap.keySet().iterator(); while(itr.hasNext()) { groupElement.addAttribute(itr.next()); } // put the attributes in the attributes map groupAttrs.put(element, groupAttrMap); } for (int i = 0; 
i < element.getChildNodes().getLength(); i++) { Node childLevel1 = (Node) element.getChildNodes().item(i); if (childLevel1.getNodeName().equals(ConfigKeys.TAG_COMPLEX_TYPE)) { for (int j = 0; j < childLevel1.getChildNodes().getLength(); j++) { Node childLevel2 = (Node) childLevel1.getChildNodes().item(j); if (childLevel2.getNodeName().equals(ConfigKeys.TAG_SEQUENCE)) { for (int k = 0; k < childLevel2.getChildNodes().getLength(); k++) { Node childLevel3 = (Node) childLevel2.getChildNodes().item(k); if (childLevel3.getNodeName().equals(ConfigKeys.TAG_ELEMENT)) { // check if single element or group if (isGroup(childLevel3)) { // another group element.. // unfortunately, a recursion is // needed here!!! :-( processGroup(childLevel3, false, element, groupElement, docAttrMap, xsdDoc, newElementsList); } else { // reached a single-value element.. copy it under the // main sequence and apply the name<>shorname replacement processGroupElement(childLevel3, element, groupElement, isFirstLevelGroup, xsdDoc, newElementsList); } } } } } } } if (isFirstLevelGroup) { addToElementsList(groupElement); } else { parentGroupElement.addChild(groupElement); } appLogger.debug("finished processing group [" + elementName + "]."); } /** * process the sent <code>element</code> to extract/modify required * information: * 1. replace the <code>name</code> attribute with the <code>shortname</code>. * * @param element */ private void processElement(Node element) { String fieldShortName = null; String fieldColumnName = null; String fieldDataType = null; String fieldFormat = null; String fieldInputLength = null; String elementName = null; HashMap<String, String> elementProperties = new HashMap<String, String>(); if (element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME) != null) { elementName = element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME).getNodeValue(); } appLogger.debug("processing element [" + elementName + "]..."); for (int i = 0; i < element.getChildNodes().getLength(); i++) { Node childLevel1 = (Node) element.getChildNodes().item(i); if (childLevel1.getNodeName().equals(ConfigKeys.TAG_ANNOTATION)) { for (int j = 0; j < childLevel1.getChildNodes().getLength(); j++) { Node childLevel2 = (Node) childLevel1.getChildNodes().item(j); if (childLevel2.getNodeName().equals(ConfigKeys.TAG_APP_INFO)) { for (int k = 0; k < childLevel2.getChildNodes().getLength(); k++) { Node childLevel3 = (Node) childLevel2.getChildNodes().item(k); if (childLevel3.getNodeName().equals(ConfigKeys.TAG_HAS_PROPERTY)) { if (childLevel3.getAttributes() != null) { String attrName = null; Node attribute = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME); if (attribute != null) { attrName = attribute.getNodeValue(); elementProperties.put(attrName, childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue()); if (attrName.equals(ConfigKeys.FIELD_SHORT_NAME)) { fieldShortName = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_COLUMN_NAME)) { fieldColumnName = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_DATA_TYPE)) { fieldDataType = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_FMT)) { fieldFormat = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_INPUT_LEN)) { fieldInputLength = 
childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } } } } } } } } } // replace the name attribute with the shortname if (element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME) != null) { element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME).setNodeValue(fieldShortName); } elementProperties.put(ConfigKeys.FIELD_SINGLE_OR_MULTI, "S"); constructElementRow(elementProperties); singleValueTableColumns.add(fieldShortName + ConfigKeys.DELIMITER_COLUMN_TYPE + fieldDataType + fieldFormat + ConfigKeys.DELIMITER_COLUMN_TYPE + fieldInputLength); // add the element to elements list addToElementsList(fieldShortName, fieldColumnName); appLogger.debug("finished processing element [" + elementName + "]."); } /** * process the sent <code>element</code> to extract/modify required * information: * 1. copy the element under the main sequence * 2. replace the <code>name</code> attribute with the <code>shortname</code>. * 3. add the attributes of the parent groups (if non-first-level-group) * * @param element */ private void processGroupElement(Node element, Node parentGroup, XSDElement parentGroupElement, boolean isFirstLevelGroup, Document xsdDoc, ArrayList<Object> newElementsList) { String fieldShortName = null; String fieldColumnName = null; String fieldDataType = null; String fieldFormat = null; String fieldInputLength = null; String elementName = null; Element newElement = null; HashMap<String, String> elementProperties = new HashMap<String, String>(); ArrayList<String> tableColumns = new ArrayList<String>(); HashMap<String, Object> groupAttrMap = null; if (element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME) != null) { elementName = element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME).getNodeValue(); } appLogger.debug("processing element [" + elementName + "]..."); // 1. copy the element newElement = (Element) element.cloneNode(true); newElement.setAttribute(ConfigKeys.ATTR_MAX_OCCURS, "unbounded"); // 2. if non-first-level-group, replace the element's SimpleType tag with a ComplexType tag if (!isFirstLevelGroup) { if (((Element) newElement).getElementsByTagName(ConfigKeys.TAG_SIMPLE_TYPE).getLength() != 0) { // there should be only one tag of SimpleType Node simpleTypeNode = ((Element) newElement).getElementsByTagName(ConfigKeys.TAG_SIMPLE_TYPE).item(0); // create the new ComplexType element Element complexTypeNode = xsdDoc.createElement(ConfigKeys.TAG_COMPLEX_TYPE); complexTypeNode.setAttribute(ConfigKeys.ATTR_MIXED, "true"); // get the list of attributes for the parent group groupAttrMap = groupAttrs.get(parentGroup); Iterator<String> attrIter = groupAttrMap.keySet().iterator(); while(attrIter.hasNext()) { Element attr = (Element) groupAttrMap.get(attrIter.next()); Element newAttrElement = xsdDoc.createElement(ConfigKeys.TAG_ATTRIBUTE); appLogger.debug("adding attr. [" + attr.getAttribute(ConfigKeys.ATTR_NAME) + "]..."); newAttrElement.setAttribute(ConfigKeys.ATTR_REF, attr.getAttribute(ConfigKeys.ATTR_NAME)); newAttrElement.setAttribute(ConfigKeys.ATTR_USE, "optional"); complexTypeNode.appendChild(newAttrElement); } // replace the old SimpleType node with the new ComplexType node newElement.replaceChild(complexTypeNode, simpleTypeNode); } } // 3. 
replace the name with the shortname in the new element for (int i = 0; i < newElement.getChildNodes().getLength(); i++) { Node childLevel1 = (Node) newElement.getChildNodes().item(i); if (childLevel1.getNodeName().equals(ConfigKeys.TAG_ANNOTATION)) { for (int j = 0; j < childLevel1.getChildNodes().getLength(); j++) { Node childLevel2 = (Node) childLevel1.getChildNodes().item(j); if (childLevel2.getNodeName().equals(ConfigKeys.TAG_APP_INFO)) { for (int k = 0; k < childLevel2.getChildNodes().getLength(); k++) { Node childLevel3 = (Node) childLevel2.getChildNodes().item(k); if (childLevel3.getNodeName().equals(ConfigKeys.TAG_HAS_PROPERTY)) { if (childLevel3.getAttributes() != null) { String attrName = null; Node attribute = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME); if (attribute != null) { attrName = attribute.getNodeValue(); elementProperties.put(attrName, childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue()); if (attrName.equals(ConfigKeys.FIELD_SHORT_NAME)) { fieldShortName = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_COLUMN_NAME)) { fieldColumnName = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_DATA_TYPE)) { fieldDataType = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_FMT)) { fieldFormat = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_INPUT_LEN)) { fieldInputLength = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } } } } } } } } } if (newElement.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME) != null) { newElement.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME).setNodeValue(fieldShortName); } // 4. save the new element to be added to the sequence list newElementsList.add(newElement); elementProperties.put(ConfigKeys.FIELD_SINGLE_OR_MULTI, "M"); constructElementRow(elementProperties); // create the MULTI-VALUE table // 0. Primary Key tableColumns.add(ConfigKeys.COLUMN_XPK_ROW + ConfigKeys.DELIMITER_COLUMN_TYPE + ConfigKeys.DATA_TYPE_XSD_STRING + ConfigKeys.DELIMITER_COLUMN_TYPE + ConfigKeys.COLUMN_XPK_ROW_LENGTH); // 1. foreign key tableColumns.add(ConfigKeys.COLUMN_FK_ROW + ConfigKeys.DELIMITER_COLUMN_TYPE + ConfigKeys.DATA_TYPE_XSD_NUMERIC); // 2. field value tableColumns.add(fieldShortName + ConfigKeys.DELIMITER_COLUMN_TYPE + fieldDataType + fieldFormat + ConfigKeys.DELIMITER_COLUMN_TYPE + fieldInputLength); // 3. 
attributes if (groupAttrMap != null) { Iterator<String> attrIter = groupAttrMap.keySet().iterator(); while (attrIter.hasNext()) { Element attr = (Element) groupAttrMap.get(attrIter.next()); tableColumns.add(attr.getAttribute(ConfigKeys.ATTR_NAME) + ConfigKeys.DELIMITER_COLUMN_TYPE + ConfigKeys.DATA_TYPE_XSD_NUMERIC); } } multiValueTablesSQL.put(sub_table_prefix.getText() + fieldShortName, constructMultiValueTableSQL( sub_table_prefix.getText() + fieldShortName, tableColumns)); // add the element to it's parent group children parentGroupElement.addChild(new XSDElement(fieldShortName, fieldColumnName)); appLogger.debug("finished processing element [" + elementName + "]."); } /** * write resulted files * * @param xsdDoc * @param docPath */ private void writeResults(Document xsdDoc, String resultsDir, String newXSDFileName, String csvFileName) { String rsDir = resultsDir + File.separator + new SimpleDateFormat("yyyyMMdd-HHmm").format(new Date()); try { File resultsDirFile = new File(rsDir); if (!resultsDirFile.exists()) { resultsDirFile.mkdirs(); } // write the XSD doc appLogger.info("writing the transformed XSD..."); Source source = new DOMSource(xsdDoc); Result result = new StreamResult(rsDir + File.separator + newXSDFileName); Transformer xformer = TransformerFactory.newInstance().newTransformer(); // xformer.setOutputProperty("indent", "yes"); xformer.transform(source, result); appLogger.info("finished writing the transformed XSD."); // write the CSV columns file appLogger.info("writing the CSV file..."); FileWriter csvWriter = new FileWriter(rsDir + File.separator + csvFileName); csvWriter.write(columnsCSV.toString()); csvWriter.close(); appLogger.info("finished writing the CSV file."); // write the master single-value table appLogger.info("writing the creation script for master table (single-values)..."); FileWriter masterTableWriter = new FileWriter(rsDir + File.separator + main_edh_table_name.getText() + ".sql"); masterTableWriter.write(constructSingleValueTableSQL(main_edh_table_name.getText(), singleValueTableColumns)); masterTableWriter.close(); appLogger.info("finished writing the creation script for master table (single-values)."); // write the multi-value tables sql appLogger.info("writing the creation script for slave tables (multi-values)..."); Iterator<String> iter = multiValueTablesSQL.keySet().iterator(); while (iter.hasNext()) { String tableName = iter.next(); String sql = multiValueTablesSQL.get(tableName); FileWriter tableSQLWriter = new FileWriter(rsDir + File.separator + tableName + ".sql"); tableSQLWriter.write(sql); tableSQLWriter.close(); } appLogger.info("finished writing the creation script for slave tables (multi-values)."); // write the single-value view appLogger.info("writing the creation script for single-value selection view..."); FileWriter singleValueViewWriter = new FileWriter(rsDir + File.separator + view_name_single.getText() + ".sql"); singleValueViewWriter.write(constructViewSQL(ConfigKeys.SQL_VIEW_SINGLE)); singleValueViewWriter.close(); appLogger.info("finished writing the creation script for single-value selection view."); // debug for (int i = 0; i < xsdElementsList.size(); i++) { getMultiView(xsdElementsList.get(i)); /*// if (xsdElementsList.get(i).getAllDescendants() != null) { // for (int j = 0; j < xsdElementsList.get(i).getAllDescendants().size(); j++) { // appLogger.debug(main_edh_table_name.getText() + "." + ConfigKeys.COLUMN_XPK_ROW // + "=" + xsdElementsList.get(i).getAllDescendants().get(j).getName() + "." 
+ ConfigKeys.COLUMN_FK_ROW); // } // } */ } } catch (Exception e) { appLogger.error(e.getMessage()); } } private String getMultiView(XSDElement element)
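    As for the actual question of adding a splash screen, two common approaches are the JDK's built-in splash support (add "SplashScreen-Image: images/splash.png" to the JAR manifest and the image is displayed before main() even runs), or a small JWindow shown while the application initializes. Below is a minimal, self-contained sketch of the JWindow approach; the image path, the class name, and the simulated start-up delay are placeholders, not part of the original code:

    import java.awt.BorderLayout;
    import javax.swing.ImageIcon;
    import javax.swing.JLabel;
    import javax.swing.JProgressBar;
    import javax.swing.JWindow;

    public class SplashWindowDemo {

        public static void main(String[] args) throws Exception {
            final JWindow splash = new JWindow();
            // Placeholder image path -- replace with a real resource from your package.
            splash.getContentPane().add(new JLabel(new ImageIcon("images/splash.png")), BorderLayout.CENTER);
            JProgressBar progress = new JProgressBar();
            progress.setIndeterminate(true);
            splash.getContentPane().add(progress, BorderLayout.SOUTH);
            splash.pack();
            splash.setLocationRelativeTo(null); // center on screen
            splash.setVisible(true);

            // Simulated start-up work; in the real application this would be the
            // initialization done before the main frame appears (e.g. init() above).
            Thread.sleep(3000);

            // Hide and release the splash window once the main frame is ready.
            splash.dispose();
        }
    }

    In a Swing Application Framework application like the one above, the same window could be created as the first statement of main() before Application.launch(...) is called, and disposed of at the end of the startup() override once the main view is showing.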


  • Why is iTunes starting and stopping play randomly, and how do I stop it?

    - by Chris R
    Since yesterday morning my copy of iTunes has been starting and stopping randomly. If iTunes is not running, then it opens and sometimes begins playing, other times sits idle. Eventually, after a random interval, it will begin playing a song, and then stop, and so on... Needless to say, it's driving me mad. (Mac OS X 10.6.3, on a new-ish (< 1 year old) 24" iMac.)

    I've made five changes to my system that may or may not be connected to this:

    1. My office phone was replaced with a Linksys IP Phone, which necessitated a change to my networking; where previously my Mac was connected directly to the office network port, now it is connected through the phone.
    2. My network connection now uses auto link detection in lieu of forcing 100Mbit.
    3. I unpaired my bluetooth headset.
    4. I removed the USB audio device associated with another headset.
    5. I upgraded to Safari 5. I don't use it as a primary browser, but it's often open to run web apps that I'm developing.

    All of these things happened in pretty close proximity to each other, so one or more of them may be the culprit. One other thing that may or may not be related: for some reason my built-in microphone is no longer picking up audio. It seems like this might be connected to the iTunes issue, because it happened around the same time.

    In terms of things that I've tried in order to solve this, I'm at a bit of a loss. I followed the instructions at http://developer.apple.com/mac/library/technotes/tn2004/tn2124.html#SECLAUNCHDLOGGING to enable detailed launchd logging to see if I could track down which process was asking iTunes to open (when it's not already open), but I wasn't able to make heads or tails of the output. I'm not even sure if I'm looking in the right place, to be honest; it actually acts like something is activating the application with AppleScript, but I have no processes running that are doing that, as far as I know.

    I'm running a few apps that have iTunes integration: Adium, iChat with Chax, Quicksilver. None of these have been changed lately, so I consider them low risks of causing this, but it's not impossible. Moreover, I'm not using any of those features intentionally.

    This is a snippet of launchd debug logging from around the time it just launched:

    10-06-09 9:14:29 AM com.apple.launchd[1] Dispatching kevent... 10-06-09 9:14:29 AM com.apple.launchd[1] KEVENT[0]: udata = 0x10002b230 data = 0x30 ident = 5 filter = EVFILT_READ flags = EV_ADD|EV_RECEIPT fflags = 0x0 10-06-09 9:14:29 AM com.apple.launchd[1] Dispatching kevent... 10-06-09 9:14:29 AM com.apple.launchd[1] KEVENT[0]: udata = 0x100802000 data = 0x0 ident = 26 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_FORK 10-06-09 9:14:29 AM com.apple.launchd[1] (com.apple.coreservicesd[26]) Dispatching kevent callback.
10-06-09 9:14:29 AM com.apple.launchd[1] (com.apple.coreservicesd[26]) EVFILT_PROC event for job: 10-06-09 9:14:29 AM com.apple.launchd[1] KEVENT[0]: udata = 0x1004076f0 data = 0x0 ident = 26 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_FORK 10-06-09 9:14:29 AM com.apple.launchd[1] (com.apple.coreservicesd[26]) fork()ed 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave) Conceived 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Created PID 22197 anonymously by PPID 26 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Looking up per user launchd for UID: 0 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Per user launchd job found for UID: 505 10-06-09 9:14:29 AM com.apple.launchd[1] System: Looking up service com.apple.system.notification_center 10-06-09 9:14:29 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.notification_center 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Looking up per user launchd for UID: 0 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Per user launchd job found for UID: 505 10-06-09 9:14:29 AM com.apple.launchd[1] System: Looking up service com.apple.system.DirectoryService.libinfo_v1 10-06-09 9:14:29 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.DirectoryService.libinfo_v1 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Looking up per user launchd for UID: 0 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Per user launchd job found for UID: 505 10-06-09 9:14:29 AM com.apple.launchd[1] System: Looking up service com.apple.system.DirectoryService.membership_v1 10-06-09 9:14:29 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.DirectoryService.membership_v1 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Looking up per user launchd for UID: 0 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Per user launchd job found for UID: 505 10-06-09 9:14:29 AM com.apple.launchd[1] System: Looking up service com.apple.CoreServices.coreservicesd 10-06-09 9:14:29 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.CoreServices.coreservicesd 10-06-09 9:14:29 AM com.apple.launchd[1] Dispatching kevent... 10-06-09 9:14:29 AM com.apple.launchd[1] KEVENT[0]: udata = 0x100802000 data = 0x0 ident = 22197 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_EXIT 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Dispatching kevent callback. 
10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) EVFILT_PROC event for job: 10-06-09 9:14:29 AM com.apple.launchd[1] KEVENT[0]: udata = 0x100401720 data = 0x0 ident = 22197 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_EXIT 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22197]) Reaping 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave) Total rusage: utime 0.000000 stime 0.000000 maxrss 0 ixrss 0 idrss 0 isrss 0 minflt 0 majflt 0 nswap 0 inblock 0 oublock 0 msgsnd 0 msgrcv 0 nsignals 0 nvcsw 0 nivcsw 0 10-06-09 9:14:29 AM com.apple.launchd[1] (0x100401720.anonymous.lssave) Removed 10-06-09 9:14:30 AM com.apple.launchd[1] Dispatching kevent... 10-06-09 9:14:30 AM com.apple.launchd[1] KEVENT[0]: udata = 0x100802000 data = 0x0 ident = 22197 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR|EV_EOF|EV_ONESHOT fflags = NOTE_REAP 10-06-09 9:14:32 AM com.apple.launchd[1] Dispatching kevent... 10-06-09 9:14:32 AM com.apple.launchd[1] KEVENT[0]: udata = 0x10002b230 data = 0x30 ident = 5 filter = EVFILT_READ flags = EV_ADD|EV_RECEIPT fflags = 0x0 10-06-09 9:14:33 AM com.apple.launchd[1] Dispatching kevent... 10-06-09 9:14:33 AM com.apple.launchd[1] KEVENT[0]: udata = 0x100802000 data = 0x0 ident = 143 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_FORK 10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Dispatching kevent callback. 10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) EVFILT_PROC event for job: 10-06-09 9:14:33 AM com.apple.launchd[1] KEVENT[0]: udata = 0x10041e9a0 data = 0x0 ident = 143 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_FORK 10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) fork()ed 10-06-09 9:14:33 AM com.apple.launchd[1] System: Looking up service com.apple.distributed_notifications.2 10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.distributed_notifications.2 10-06-09 9:14:33 AM com.apple.launchd[1] System: Looking up service com.apple.system.notification_center 10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.notification_center 10-06-09 9:14:33 AM com.apple.launchd[1] System: Looking up service com.apple.system.DirectoryService.libinfo_v1 10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.DirectoryService.libinfo_v1 10-06-09 9:14:33 AM com.apple.launchd[1] System: Looking up service com.apple.system.DirectoryService.membership_v1 10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.DirectoryService.membership_v1 10-06-09 9:14:33 AM com.apple.launchd[1] System: Looking up service com.apple.CoreServices.coreservicesd 10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.CoreServices.coreservicesd 10-06-09 9:14:33 AM com.apple.launchd[1] System: Looking up service com.apple.SystemConfiguration.configd 10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.SystemConfiguration.configd 10-06-09 9:14:33 AM com.apple.launchd[1] System: Looking up service com.apple.audio.coreaudiod 10-06-09 9:14:33 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: 
com.apple.audio.coreaudiod 10-06-09 9:14:34 AM com.apple.launchd[1] System: Looking up service com.apple.system.logger 10-06-09 9:14:34 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.logger 10-06-09 9:14:35 AM com.apple.launchd[1] Dispatching kevent... 10-06-09 9:14:35 AM com.apple.launchd[1] KEVENT[0]: udata = 0x10002b230 data = 0x30 ident = 5 filter = EVFILT_READ flags = EV_ADD|EV_RECEIPT fflags = 0x0 10-06-09 9:14:35 AM com.apple.launchd[1] System: Looking up service com.apple.DiskArbitration.diskarbitrationd 10-06-09 9:14:35 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.DiskArbitration.diskarbitrationd 10-06-09 9:14:35 AM com.apple.launchd[1] System: Looking up service com.apple.system.logger 10-06-09 9:14:35 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.logger 10-06-09 9:14:36 AM com.apple.launchd[1] System: Looking up service com.apple.FSEvents 10-06-09 9:14:36 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.FSEvents 10-06-09 9:14:36 AM com.apple.launchd[1] System: Looking up service com.apple.SystemConfiguration.configd 10-06-09 9:14:36 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.SystemConfiguration.configd 10-06-09 9:14:38 AM com.apple.launchd[1] Dispatching kevent... 10-06-09 9:14:38 AM com.apple.launchd[1] KEVENT[0]: udata = 0x10002b230 data = 0x30 ident = 5 filter = EVFILT_READ flags = EV_ADD|EV_RECEIPT fflags = 0x0 10-06-09 9:14:39 AM com.apple.launchd[1] Dispatching kevent... 10-06-09 9:14:39 AM com.apple.launchd[1] KEVENT[0]: udata = 0x100802000 data = 0x0 ident = 26 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_FORK 10-06-09 9:14:39 AM com.apple.launchd[1] (com.apple.coreservicesd[26]) Dispatching kevent callback. 10-06-09 9:14:39 AM com.apple.launchd[1] (com.apple.coreservicesd[26]) EVFILT_PROC event for job: 10-06-09 9:14:39 AM com.apple.launchd[1] KEVENT[0]: udata = 0x1004076f0 data = 0x0 ident = 26 filter = EVFILT_PROC flags = EV_ADD|EV_RECEIPT|EV_CLEAR fflags = NOTE_FORK 10-06-09 9:14:39 AM com.apple.launchd[1] (com.apple.coreservicesd[26]) fork()ed 10-06-09 9:14:39 AM com.apple.launchd[1] (0x100401720.anonymous.lssave) Conceived 10-06-09 9:14:39 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22211]) Created PID 22211 anonymously by PPID 26 10-06-09 9:14:39 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22211]) Looking up per user launchd for UID: 0 10-06-09 9:14:39 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22211]) Per user launchd job found for UID: 505 10-06-09 9:14:39 AM com.apple.launchd[1] System: Looking up service com.apple.system.notification_center 10-06-09 9:14:39 AM com.apple.launchd[1] (com.apple.launchd.peruser.505[143]) Mach service lookup: com.apple.system.notification_center 10-06-09 9:14:39 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22211]) Looking up per user launchd for UID: 0 10-06-09 9:14:39 AM com.apple.launchd[1] (0x100401720.anonymous.lssave[22211]) Per user launchd job found for UID: 505 10-06-09 9:14:39 AM com.apple.launchd[1] System: Looking up service com.apple.system.DirectoryService.libinfo_v1


  • How to tune down the Hyperic built-in postgresql database for a small setup

    - by Svish
    We are testing out Hyperic 4.5.1 in a quite small environment for now. Currently there are just 1-5 agents and there probably won't be any more than 10-15. When I run ps ax there are 20(!) postgres processes running. For a small setup like this, that can't be necessary, can it? I'm a software developer and don't have much experience with setting up servers and such though, so don't really know. Either way, what settings are appropriate for a small Hyperic setup like this? Current, default and untouched configuration file, hqdb/data/postgresql.conf: # ----------------------------- # PostgreSQL configuration file # ----------------------------- # # This file consists of lines of the form: # # name = value # # (The '=' is optional.) White space may be used. Comments are introduced # with '#' anywhere on a line. The complete list of option names and # allowed values can be found in the PostgreSQL documentation. The # commented-out settings shown in this file represent the default values. # # Please note that re-commenting a setting is NOT sufficient to revert it # to the default value, unless you restart the server. # # Any option can also be given as a command line switch to the server, # e.g., 'postgres -c log_connections=on'. Some options can be changed at # run-time with the 'SET' SQL command. # # This file is read on server startup and when the server receives a # SIGHUP. If you edit the file on a running system, you have to SIGHUP the # server for the changes to take effect, or use "pg_ctl reload". Some # settings, which are marked below, require a server shutdown and restart # to take effect. # # Memory units: kB = kilobytes MB = megabytes GB = gigabytes # Time units: ms = milliseconds s = seconds min = minutes h = hours d = days #--------------------------------------------------------------------------- # FILE LOCATIONS #--------------------------------------------------------------------------- # The default values of these variables are driven from the -D command line # switch or PGDATA environment variable, represented here as ConfigDir. #data_directory = 'ConfigDir' # use data in another directory # (change requires restart) #hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file # (change requires restart) #ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file # (change requires restart) # If external_pid_file is not explicitly set, no extra PID file is written. #external_pid_file = '(none)' # write an extra PID file # (change requires restart) #--------------------------------------------------------------------------- # CONNECTIONS AND AUTHENTICATION #--------------------------------------------------------------------------- # - Connection Settings - #listen_addresses = 'localhost' # what IP address(es) to listen on; # comma-separated list of addresses; # defaults to 'localhost', '*' = all # (change requires restart) port = 9432 # (change requires restart) max_connections = 100 # (change requires restart) # Note: increasing max_connections costs ~400 bytes of shared memory per # connection slot, plus lock space (see max_locks_per_transaction). You # might also need to raise shared_buffers to support more connections. 
#superuser_reserved_connections = 3 # (change requires restart) #unix_socket_directory = '' # (change requires restart) #unix_socket_group = '' # (change requires restart) #unix_socket_permissions = 0777 # octal # (change requires restart) #bonjour_name = '' # defaults to the computer name # (change requires restart) # - Security & Authentication - #authentication_timeout = 1min # 1s-600s #ssl = off # (change requires restart) #password_encryption = on #db_user_namespace = off # Kerberos #krb_server_keyfile = '' # (change requires restart) #krb_srvname = 'postgres' # (change requires restart) #krb_server_hostname = '' # empty string matches any keytab entry # (change requires restart) #krb_caseins_users = off # (change requires restart) # - TCP Keepalives - # see 'man 7 tcp' for details #tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds; # 0 selects the system default #tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds; # 0 selects the system default #tcp_keepalives_count = 0 # TCP_KEEPCNT; # 0 selects the system default #--------------------------------------------------------------------------- # RESOURCE USAGE (except WAL) #--------------------------------------------------------------------------- # - Memory - shared_buffers = 64MB # min 128kB or max_connections*16kB # (change requires restart) #temp_buffers = 8MB # min 800kB #max_prepared_transactions = 5 # can be 0 or more # (change requires restart) # Note: increasing max_prepared_transactions costs ~600 bytes of shared memory # per transaction slot, plus lock space (see max_locks_per_transaction). work_mem = 2MB # min 64kB maintenance_work_mem = 32MB # min 1MB #max_stack_depth = 2MB # min 100kB # - Free Space Map - max_fsm_pages = 204800 # min max_fsm_relations*16, 6 bytes each # (change requires restart) #max_fsm_relations = 1000 # min 100, ~70 bytes each # (change requires restart) # - Kernel Resource Usage - #max_files_per_process = 1000 # min 25 # (change requires restart) #shared_preload_libraries = '' # (change requires restart) # - Cost-Based Vacuum Delay - #vacuum_cost_delay = 0 # 0-1000 milliseconds #vacuum_cost_page_hit = 1 # 0-10000 credits #vacuum_cost_page_miss = 10 # 0-10000 credits #vacuum_cost_page_dirty = 20 # 0-10000 credits #vacuum_cost_limit = 200 # 0-10000 credits # - Background writer - #bgwriter_delay = 200ms # 10-10000ms between rounds #bgwriter_lru_percent = 1.0 # 0-100% of LRU buffers scanned/round #bgwriter_lru_maxpages = 5 # 0-1000 buffers max written/round #bgwriter_all_percent = 0.333 # 0-100% of all buffers scanned/round #bgwriter_all_maxpages = 5 # 0-1000 buffers max written/round #--------------------------------------------------------------------------- # WRITE AHEAD LOG #--------------------------------------------------------------------------- # - Settings - fsync = on # turns forced synchronization on or off #wal_sync_method = fsync # the default is the first option # supported by the operating system: # open_datasync # fdatasync # fsync # fsync_writethrough # open_sync #full_page_writes = on # recover from partial page writes #wal_buffers = 64kB # min 32kB # (change requires restart) commit_delay = 100000 # range 0-100000, in microseconds #commit_siblings = 5 # range 1-1000 # - Checkpoints - checkpoint_segments = 10 # in logfile segments, min 1, 16MB each #checkpoint_timeout = 5min # range 30s-1h #checkpoint_warning = 30s # 0 is off # - Archiving - #archive_command = '' # command to use to archive a logfile segment #archive_timeout = 0 # force a logfile segment switch after this # many 
seconds; 0 is off

    #---------------------------------------------------------------------------
    # QUERY TUNING
    #---------------------------------------------------------------------------

    # - Planner Method Configuration -
    #enable_bitmapscan = on
    #enable_hashagg = on
    #enable_hashjoin = on
    #enable_indexscan = on
    #enable_mergejoin = on
    #enable_nestloop = on
    #enable_seqscan = on
    #enable_sort = on
    #enable_tidscan = on

    # - Planner Cost Constants -
    #seq_page_cost = 1.0            # measured on an arbitrary scale
    #random_page_cost = 4.0         # same scale as above
    #cpu_tuple_cost = 0.01          # same scale as above
    #cpu_index_tuple_cost = 0.005   # same scale as above
    #cpu_operator_cost = 0.0025     # same scale as above
    #effective_cache_size = 128MB

    # - Genetic Query Optimizer -
    #geqo = on
    #geqo_threshold = 12
    #geqo_effort = 5                # range 1-10
    #geqo_pool_size = 0             # selects default based on effort
    #geqo_generations = 0           # selects default based on effort
    #geqo_selection_bias = 2.0      # range 1.5-2.0

    # - Other Planner Options -
    #default_statistics_target = 10 # range 1-1000
    #constraint_exclusion = off
    #from_collapse_limit = 8
    #join_collapse_limit = 8        # 1 disables collapsing of explicit JOINs

    #---------------------------------------------------------------------------
    # ERROR REPORTING AND LOGGING
    #---------------------------------------------------------------------------

    # - Where to Log -
    log_destination = 'stderr'     # Valid values are combinations of stderr,
                                   # syslog and eventlog, depending on platform.

    # This is used when logging to stderr:
    redirect_stderr = on           # Enable capturing of stderr into log files
                                   # (change requires restart)

    # These are only used if redirect_stderr is on:
    log_directory = '../../logs'   # Directory where log files are written.
                                   # Can be absolute or relative to PGDATA
    log_filename = 'hqdb-%Y-%m-%d.log' # Log file name pattern.
                                   # Can include strftime() escapes
    #log_truncate_on_rotation = off # If on, any existing log file of the same
                                   # name as the new log file will be truncated
                                   # rather than appended to. But such truncation
                                   # only occurs on time-driven rotation, not on
                                   # restarts or size-driven rotation. Default is
                                   # off, meaning append to existing files in
                                   # all cases.
    log_rotation_age = 1d          # Automatic rotation of logfiles will happen
                                   # after that time. 0 to disable.
    #log_rotation_size = 10MB      # Automatic rotation of logfiles will happen
                                   # after that much log output. 0 to disable.

    # These are relevant when logging to syslog:
    #syslog_facility = 'LOCAL0'
    #syslog_ident = 'postgres'

    # - When to Log -
    #client_min_messages = notice  # Values, in order of decreasing detail:
                                   # debug5, debug4, debug3, debug2, debug1,
                                   # log, notice, warning, error
    #log_min_messages = notice     # Values, in order of decreasing detail:
                                   # debug5, debug4, debug3, debug2, debug1,
                                   # info, notice, warning, error, log,
                                   # fatal, panic
    #log_error_verbosity = default # terse, default, or verbose messages
    #log_min_error_statement = error # Values in order of increasing severity:
                                   # debug5, debug4, debug3, debug2, debug1,
                                   # info, notice, warning, error, fatal,
                                   # panic (effectively off)
    log_min_duration_statement = 10000 # -1 is disabled, 0 logs all statements
                                   # and their durations.
    #silent_mode = off             # DO NOT USE without syslog or
                                   # redirect_stderr
                                   # (change requires restart)

    # - What to Log -
    #debug_print_parse = off
    #debug_print_rewritten = off
    #debug_print_plan = off
    #debug_pretty_print = off
    #log_connections = off
    #log_disconnections = off
    #log_duration = off
    #log_line_prefix = ''          # Special values:
                                   # %u = user name, %d = database name,
                                   # %r = remote host and port, %h = remote host,
                                   # %p = PID, %t = timestamp (no milliseconds),
                                   # %m = timestamp with milliseconds,
                                   # %i = command tag, %c = session id,
                                   # %l = session line number,
                                   # %s = session start timestamp,
                                   # %x = transaction id,
                                   # %q = stop here in non-session processes,
                                   # %% = '%'
                                   # e.g. '<%u%%%d> '
    #log_statement = 'none'        # none, ddl, mod, all
    #log_hostname = off

    #---------------------------------------------------------------------------
    # RUNTIME STATISTICS
    #---------------------------------------------------------------------------

    # - Query/Index Statistics Collector -
    #stats_command_string = on
    #update_process_title = on
    stats_start_collector = on     # needed for block or row stats
                                   # (change requires restart)
    stats_block_level = on
    stats_row_level = on
    stats_reset_on_server_start = off # (change requires restart)

    # - Statistics Monitoring -
    #log_parser_stats = off
    #log_planner_stats = off
    #log_executor_stats = off
    #log_statement_stats = off

    #---------------------------------------------------------------------------
    # AUTOVACUUM PARAMETERS
    #---------------------------------------------------------------------------

    #autovacuum = off              # enable autovacuum subprocess?
                                   # 'on' requires stats_start_collector
                                   # and stats_row_level to also be on
    #autovacuum_naptime = 1min     # time between autovacuum runs
    #autovacuum_vacuum_threshold = 500  # min # of tuple updates before vacuum
    #autovacuum_analyze_threshold = 250 # min # of tuple updates before analyze
    #autovacuum_vacuum_scale_factor = 0.2  # fraction of rel size before vacuum
    #autovacuum_analyze_scale_factor = 0.1 # fraction of rel size before analyze
    #autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum
                                   # (change requires restart)
    #autovacuum_vacuum_cost_delay = -1  # default vacuum cost delay for
                                   # autovacuum, -1 means use vacuum_cost_delay
    #autovacuum_vacuum_cost_limit = -1  # default vacuum cost limit for
                                   # autovacuum, -1 means use vacuum_cost_limit

    #---------------------------------------------------------------------------
    # CLIENT CONNECTION DEFAULTS
    #---------------------------------------------------------------------------

    # - Statement Behavior -
    #search_path = '"$user",public' # schema names
    #default_tablespace = ''       # a tablespace name, '' uses the default
    #check_function_bodies = on
    #default_transaction_isolation = 'read committed'
    #default_transaction_read_only = off
    #statement_timeout = 0         # 0 is disabled
    #vacuum_freeze_min_age = 100000000

    # - Locale and Formatting -
    datestyle = 'iso, mdy'
    #timezone = unknown            # actually, defaults to TZ environment setting
    #timezone_abbreviations = 'Default' # select the set of available timezone
                                   # abbreviations. Currently, there are
                                   # Default, Australia, India.
                                   # However you can also create your own
                                   # file in share/timezonesets/.
    #extra_float_digits = 0        # min -15, max 2
    #client_encoding = sql_ascii   # actually, defaults to database encoding

    # These settings are initialized by initdb -- they might be changed
    lc_messages = 'C'              # locale for system error message strings
    lc_monetary = 'C'              # locale for monetary formatting
    lc_numeric = 'C'               # locale for number formatting
    lc_time = 'C'                  # locale for time formatting

    # - Other Defaults -
    #explain_pretty_print = on
    #dynamic_library_path = '$libdir'
    #local_preload_libraries = ''

    #---------------------------------------------------------------------------
    # LOCK MANAGEMENT
    #---------------------------------------------------------------------------

    #deadlock_timeout = 1s
    #max_locks_per_transaction = 64 # min 10
                                   # (change requires restart)
    # Note: each lock table slot uses ~270 bytes of shared memory, and there are
    # max_locks_per_transaction * (max_connections + max_prepared_transactions)
    # lock table slots.

    #---------------------------------------------------------------------------
    # VERSION/PLATFORM COMPATIBILITY
    #---------------------------------------------------------------------------

    # - Previous Postgres Versions -
    #add_missing_from = off
    #array_nulls = on
    #backslash_quote = safe_encoding # on, off, or safe_encoding
    #default_with_oids = off
    #escape_string_warning = on
    #standard_conforming_strings = off
    #regex_flavor = advanced       # advanced, extended, or basic
    #sql_inheritance = on

    # - Other Platforms & Clients -
    #transform_null_equals = off

    #---------------------------------------------------------------------------
    # CUSTOMIZED OPTIONS
    #---------------------------------------------------------------------------

    #custom_variable_classes = ''  # list of custom variable class names

SELECT * FROM pg_stat_activity;

    datid | datname | procpid | usesysid | usename | current_query | waiting | query_start | backend_start | client_addr | client_port
    ------+---------+---------+----------+---------+---------------+---------+-------------+---------------+-------------+------------
    16384 | hqdb | 3267 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.036781+01 | 2011-02-08 15:51:20.02413+01 | 127.0.0.1 | 47892
    16384 | hqdb | 3268 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.050994+01 | 2011-02-08 15:51:20.047393+01 | 127.0.0.1 | 47893
    16384 | hqdb | 3269 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.056661+01 | 2011-02-08 15:51:20.053201+01 | 127.0.0.1 | 47894
    16384 | hqdb | 3271 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.062351+01 | 2011-02-08 15:51:20.058822+01 | 127.0.0.1 | 47895
    16384 | hqdb | 3272 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.068328+01 | 2011-02-08 15:51:20.064517+01 | 127.0.0.1 | 47896
    16384 | hqdb | 3273 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.07444+01 | 2011-02-08 15:51:20.070755+01 | 127.0.0.1 | 47897
    16384 | hqdb | 3274 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.080941+01 | 2011-02-08 15:51:20.076983+01 | 127.0.0.1 | 47898
    16384 | hqdb | 3275 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.08741+01 | 2011-02-08 15:51:20.083697+01 | 127.0.0.1 | 47899
    16384 | hqdb | 3276 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:20.093597+01 | 2011-02-08 15:51:20.089977+01 | 127.0.0.1 | 47900
    16384 | hqdb | 3277 | 10 | hqadmin | <IDLE> in transaction | f | 2011-02-08 15:51:20.133974+01 | 2011-02-08 15:51:20.096149+01 | 127.0.0.1 | 47901
    16384 | hqdb | 3308 | 10 | hqadmin | <IDLE> | f | 2011-02-09 10:49:27.402197+01 | 2011-02-08 15:51:29.826321+01 | 127.0.0.1 | 47902
    16384 | hqdb | 3309 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.572395+01 | 2011-02-08 15:51:29.865243+01 | 127.0.0.1 | 47903
    16384 | hqdb | 3310 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.586273+01 | 2011-02-08 15:51:29.874346+01 | 127.0.0.1 | 47904
    16384 | hqdb | 3311 | 10 | hqadmin | <IDLE> | f | 2011-02-09 10:10:03.024088+01 | 2011-02-08 15:51:29.883598+01 | 127.0.0.1 | 47905
    16384 | hqdb | 3312 | 10 | hqadmin | <IDLE> in transaction | f | 2011-02-08 15:51:35.804457+01 | 2011-02-08 15:51:29.892925+01 | 127.0.0.1 | 47906
    16384 | hqdb | 3418 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.580207+01 | 2011-02-08 15:51:55.56911+01 | 127.0.0.1 | 47910
    16384 | hqdb | 3419 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.59781+01 | 2011-02-08 15:51:55.588609+01 | 127.0.0.1 | 47911
    16384 | hqdb | 3422 | 10 | hqadmin | <IDLE> | f | 2011-02-09 10:10:02.668836+01 | 2011-02-08 15:51:55.603076+01 | 127.0.0.1 | 47914
    16384 | hqdb | 3421 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.770427+01 | 2011-02-08 15:51:55.603086+01 | 127.0.0.1 | 47913
    16384 | hqdb | 3420 | 10 | hqadmin | <IDLE> | f | 2011-02-08 15:51:55.680785+01 | 2011-02-08 15:51:55.637058+01 | 127.0.0.1 | 47912
    16384 | hqdb | 18233 | 10 | hqadmin | SELECT * FROM pg_stat_activity; | f | 2011-02-09 10:49:29.688949+01 | 2011-02-09 10:48:13.031475+01 | | -1
    (21 rows)
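Two of the backends above sit "<IDLE> in transaction", which is the usual suspect when locks pile up or vacuum cannot reclaim dead rows. A follow-up query along these lines can flag them (a sketch only; it assumes the pre-9.2 pg_stat_activity column names, procpid and current_query, shown in the output above):

    -- Sketch: list sessions that have been idle inside a transaction
    -- for more than five minutes (pre-9.2 column names, as above).
    SELECT procpid, usename, client_addr,
           now() - query_start AS idle_for
      FROM pg_stat_activity
     WHERE current_query = '<IDLE> in transaction'
       AND now() - query_start > interval '5 minutes'
     ORDER BY idle_for DESC;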

    Read the article

  • Error deploying Spring with JAX-WS application in JBoss 6 server

    - by arlahiru
    I'm getting this error when deploying a Spring + JAX-WS application on JBoss 6.1.0:

    09:14:38,175 ERROR [org.springframework.web.context.ContextLoader] Context initialization failed: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'com.sun.xml.ws.transport.http.servlet.SpringBinding#0' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: Cannot create inner bean '(inner bean)' of type [org.jvnet.jax_ws_commons.spring.SpringService] while setting bean property 'service'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name '(inner bean)': FactoryBean threw exception on object creation; nested exception is java.lang.LinkageError: loader constraint violation: when resolving field "DATETIME" the class loader (instance of org/jboss/classloader/spi/base/BaseClassLoader) of the referring class, javax/xml/datatype/DatatypeConstants, and the class loader (instance of <bootloader>) for the field's resolved type, loader constraint violation: when resolving field "DATETIME" the class loader
        at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:230) [:2.5.6]
        at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:122) [:2.5.6]
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1245) [:2.5.6]
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1010) [:2.5.6]
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:472) [:2.5.6]
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:409) [:2.5.6]
        at java.security.AccessController.doPrivileged(Native Method) [:1.7.0_05]
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:380) [:2.5.6]
        at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:264) [:2.5.6]
        at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) [:2.5.6]
        at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:261) [:2.5.6]
        at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:185) [:2.5.6]
        at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:164) [:2.5.6]
        at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:429) [:2.5.6]
        at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:728) [:2.5.6]
        at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:380) [:2.5.6]
        at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:255) [:2.5.6]
        at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:199) [:2.5.6]
        at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:45) [:2.5.6]
        at org.apache.catalina.core.StandardContext.contextListenerStart(StandardContext.java:3369) [:6.1.0.Final]
        at org.apache.catalina.core.StandardContext.start(StandardContext.java:3828) [:6.1.0.Final]
        at org.jboss.web.tomcat.service.deployers.TomcatDeployment.performDeployInternal(TomcatDeployment.java:294) [:6.1.0.Final]
        at org.jboss.web.tomcat.service.deployers.TomcatDeployment.performDeploy(TomcatDeployment.java:146) [:6.1.0.Final]
        at org.jboss.web.deployers.AbstractWarDeployment.start(AbstractWarDeployment.java:476) [:6.1.0.Final]
        at org.jboss.web.deployers.WebModule.startModule(WebModule.java:118) [:6.1.0.Final]
        at org.jboss.web.deployers.WebModule.start(WebModule.java:95) [:6.1.0.Final]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [:1.7.0_05]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) [:1.7.0_05]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [:1.7.0_05]
        at java.lang.reflect.Method.invoke(Method.java:601) [:1.7.0_05]
        at org.jboss.mx.interceptor.ReflectedDispatcher.invoke(ReflectedDispatcher.java:157) [:6.0.0.GA]
        at org.jboss.mx.server.Invocation.dispatch(Invocation.java:96) [:6.0.0.GA]
        at org.jboss.mx.server.Invocation.invoke(Invocation.java:88) [:6.0.0.GA]
        at org.jboss.mx.server.AbstractMBeanInvoker.invoke(AbstractMBeanInvoker.java:271) [:6.0.0.GA]
        at org.jboss.mx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:670) [:6.0.0.GA]
        at org.jboss.system.microcontainer.ServiceProxy.invoke(ServiceProxy.java:206) [:2.2.0.SP2]
        at $Proxy41.start(Unknown Source)
        at org.jboss.system.microcontainer.StartStopLifecycleAction.installAction(StartStopLifecycleAction.java:53) [:2.2.0.SP2]
        at org.jboss.system.microcontainer.StartStopLifecycleAction.installAction(StartStopLifecycleAction.java:41) [:2.2.0.SP2]
        at org.jboss.dependency.plugins.action.SimpleControllerContextAction.simpleInstallAction(SimpleControllerContextAction.java:62) [jboss-dependency.jar:2.2.0.SP2]
        at org.jboss.dependency.plugins.action.AccessControllerContextAction.install(AccessControllerContextAction.java:71) [jboss-dependency.jar:2.2.0.SP2]
        at org.jboss.dependency.plugins.AbstractControllerContextActions.install(AbstractControllerContextActions.java:51) [jboss-dependency.jar:2.2.0.SP2]
        at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:379) [jboss-dependency.jar:2.2.0.SP2]
        at org.jboss.system.microcontainer.ServiceControllerContext.install(ServiceControllerContext.java:301) [:2.2.0.SP2]
        at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:2044) [jboss-dependency.jar:2.2.0.SP2]
        at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:1083) [jboss-dependency.jar:2.2.0.SP2]
        at org.jboss.dependency.plugins.AbstractController.executeOrIncrementStateDirectly(AbstractController.java:1322) [jboss-dependency.jar:2.2.0.SP2]
        at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1246) [jboss-dependency.jar:2.2.0.SP2]
        at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1139) [jboss-dependency.jar:2.2.0.SP2]
        at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:939) [jboss-dependency.jar:2.2.0.SP2]
        at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:654) [jboss-dependency.jar:2.2.0.SP2]
        at org.jboss.system.ServiceController.doChange(ServiceController.java:671) [:6.1.0.Final (Build SVNTag:JBoss_6.1.0.Final date: 20110816)]
        at org.jboss.system.ServiceController.start(ServiceController.java:443) [:6.1.0.Final (Build SVNTag:JBoss_6.1.0.Final date: 20110816)]
        at org.jboss.system.deployers.ServiceDeployer.start(ServiceDeployer.java:189) [:6.1.0.Final]
        at org.jboss.system.deployers.ServiceDeployer.deploy(ServiceDeployer.java:102) [:6.1.0.Final]
        at org.jboss.system.deployers.ServiceDeployer.deploy(ServiceDeployer.java:49) [:6.1.0.Final]
        at org.jboss.deployers.spi.deployer.helpers.AbstractSimpleRealDeployer.internalDeploy(AbstractSimpleRealDeployer.java:63) [:2.2.2.GA]
        at org.jboss.deployers.spi.deployer.helpers.AbstractRealDeployer.deploy(AbstractRealDeployer.java:55) [:2.2.2.GA]
        at org.jboss.deployers.plugins.deployers.DeployerWrapper.deploy(DeployerWrapper.java:179) [:2.2.2.GA]
        at org.jboss.deployers.plugins.deployers.DeployersImpl.doDeploy(DeployersImpl.java:1832) [:2.2.2.GA]
        at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1550) [:2.2.2.GA]
        at org.jboss.deployers.plugins.deployers.DeployersImpl.doInstallParentFirst(DeployersImpl.java:1571) [:2.2.2.GA]
        at org.jboss.deployers.plugins.deployers.DeployersImpl.install(DeployersImpl.java:1491) [:2.2.2.GA]
        at org.jboss.dependency.plugins.AbstractControllerContext.install(AbstractControllerContext.java:379) [jboss-dependency.jar:2.2.0.SP2]
        at org.jboss.dependency.plugins.AbstractController.install(AbstractController.java:2044) [jboss-dependency.jar:2.2.0.SP2]
        at org.jboss.dependency.plugins.AbstractController.incrementState(AbstractController.java:1083) [jboss-dependency.jar:2.2.0.SP2]
        at org.jboss.dependency.plugins.AbstractController.executeOrIncrementStateDirectly(AbstractController.java:1322) [jboss-dependency.jar:2.2.0.SP2]
        at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1246) [jboss-dependency.jar:2.2.0.SP2]
        at org.jboss.dependency.plugins.AbstractController.resolveContexts(AbstractController.java:1139) [jboss-dependency.jar:2.2.0.SP2]
        at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:939) [jboss-dependency.jar:2.2.0.SP2]
        at org.jboss.dependency.plugins.AbstractController.change(AbstractController.java:654) [jboss-dependency.jar:2.2.0.SP2]
        at org.jboss.deployers.plugins.deployers.DeployersImpl.change(DeployersImpl.java:1983) [:2.2.2.GA]
        at org.jboss.deployers.plugins.deployers.DeployersImpl.process(DeployersImpl.java:1076) [:2.2.2.GA]
        at org.jboss.deployers.plugins.main.MainDeployerImpl.process(MainDeployerImpl.java:679) [:2.2.2.GA]
        at org.jboss.system.server.profileservice.deployers.MainDeployerPlugin.process(MainDeployerPlugin.java:106) [:6.1.0.Final]
        at org.jboss.profileservice.dependency.ProfileControllerContext$DelegateDeployer.process(ProfileControllerContext.java:143) [:0.2.2]
        at org.jboss.profileservice.plugins.deploy.actions.DeploymentStartAction.doPrepare(DeploymentStartAction.java:98) [:0.2.2]
        at org.jboss.profileservice.management.actions.AbstractTwoPhaseModificationAction.prepare(AbstractTwoPhaseModificationAction.java:101) [:0.2.2]
        at org.jboss.profileservice.management.ModificationSession.prepare(ModificationSession.java:87) [:0.2.2]
        at org.jboss.profileservice.management.AbstractActionController.internalPerfom(AbstractActionController.java:234) [:0.2.2]
        at org.jboss.profileservice.management.AbstractActionController.performWrite(AbstractActionController.java:213) [:0.2.2]
        at org.jboss.profileservice.management.AbstractActionController.perform(AbstractActionController.java:150) [:0.2.2]
        at org.jboss.profileservice.plugins.deploy.AbstractDeployHandler.startDeployments(AbstractDeployHandler.java:168) [:0.2.2]
        at org.jboss.profileservice.management.upload.remoting.DeployHandlerDelegate.startDeployments(DeployHandlerDelegate.java:74) [:6.1.0.Final]
        at org.jboss.profileservice.management.upload.remoting.DeployHandler.invoke(DeployHandler.java:156) [:6.1.0.Final]
        at org.jboss.remoting.ServerInvoker.invoke(ServerInvoker.java:967) [:6.1.0.Final]
        at org.jboss.remoting.transport.socket.ServerThread.completeInvocation(ServerThread.java:791) [:6.1.0.Final]
        at org.jboss.remoting.transport.socket.ServerThread.processInvocation(ServerThread.java:744) [:6.1.0.Final]
        at org.jboss.remoting.transport.socket.ServerThread.dorun(ServerThread.java:548) [:6.1.0.Final]
        at org.jboss.remoting.transport.socket.ServerThread.run(ServerThread.java:234) [:6.1.0.Final]
    Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name '(inner bean)': FactoryBean threw exception on object creation; nested exception is java.lang.LinkageError: loader constraint violation: when resolving field "DATETIME" the class loader (instance of org/jboss/classloader/spi/base/BaseClassLoader) of the referring class, javax/xml/datatype/DatatypeConstants, and the class loader (instance of <bootloader>) for the field's resolved type, loader constraint violation: when resolving field "DATETIME" the class loader
        at org.springframework.beans.factory.support.FactoryBeanRegistrySupport$1.run(FactoryBeanRegistrySupport.java:127) [:2.5.6]
        at java.security.AccessController.doPrivileged(Native Method) [:1.7.0_05]
        at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:116) [:2.5.6]
        at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.getObjectFromFactoryBean(FactoryBeanRegistrySupport.java:98) [:2.5.6]
        at org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveInnerBean(BeanDefinitionValueResolver.java:223) [:2.5.6]
        ... 89 more
    Caused by: java.lang.LinkageError: loader constraint violation: when resolving field "DATETIME" the class loader (instance of org/jboss/classloader/spi/base/BaseClassLoader) of the referring class, javax/xml/datatype/DatatypeConstants, and the class loader (instance of <bootloader>) for the field's resolved type, loader constraint violation: when resolving field "DATETIME" the class loader
        at com.sun.xml.bind.v2.model.impl.RuntimeBuiltinLeafInfoImpl.<clinit>(RuntimeBuiltinLeafInfoImpl.java:263) [:2.2]
        at com.sun.xml.bind.v2.model.impl.RuntimeTypeInfoSetImpl.<init>(RuntimeTypeInfoSetImpl.java:65) [:2.2]
        at com.sun.xml.bind.v2.model.impl.RuntimeModelBuilder.createTypeInfoSet(RuntimeModelBuilder.java:133) [:2.2]
        at com.sun.xml.bind.v2.model.impl.RuntimeModelBuilder.createTypeInfoSet(RuntimeModelBuilder.java:85) [:2.2]
        at com.sun.xml.bind.v2.model.impl.ModelBuilder.<init>(ModelBuilder.java:156) [:2.2]
        at com.sun.xml.bind.v2.model.impl.RuntimeModelBuilder.<init>(RuntimeModelBuilder.java:93) [:2.2]
        at com.sun.xml.bind.v2.runtime.JAXBContextImpl.getTypeInfoSet(JAXBContextImpl.java:473) [:2.2]
        at com.sun.xml.bind.v2.runtime.JAXBContextImpl.<init>(JAXBContextImpl.java:319) [:2.2]
        at com.sun.xml.bind.v2.runtime.JAXBContextImpl$JAXBContextBuilder.build(JAXBContextImpl.java:1170) [:2.2]
        at com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:188) [:2.2]
        at com.sun.xml.bind.api.JAXBRIContext.newInstance(JAXBRIContext.java:111) [:2.2]
        at com.sun.xml.ws.developer.JAXBContextFactory$1.createJAXBContext(JAXBContextFactory.java:113) [:2.2.3]
        at com.sun.xml.ws.model.AbstractSEIModelImpl$1.run(AbstractSEIModelImpl.java:166) [:2.2.3]
        at com.sun.xml.ws.model.AbstractSEIModelImpl$1.run(AbstractSEIModelImpl.java:159) [:2.2.3]
        at java.security.AccessController.doPrivileged(Native Method) [:1.7.0_05]
        at com.sun.xml.ws.model.AbstractSEIModelImpl.createJAXBContext(AbstractSEIModelImpl.java:158) [:2.2.3]
        at com.sun.xml.ws.model.AbstractSEIModelImpl.postProcess(AbstractSEIModelImpl.java:99) [:2.2.3]
        at com.sun.xml.ws.model.RuntimeModeler.buildRuntimeModel(RuntimeModeler.java:250) [:2.2.3]
        at com.sun.xml.ws.server.EndpointFactory.createSEIModel(EndpointFactory.java:343) [:2.2.3]
        at com.sun.xml.ws.server.EndpointFactory.createEndpoint(EndpointFactory.java:205) [:2.2.3]
        at com.sun.xml.ws.api.server.WSEndpoint.create(WSEndpoint.java:513) [:2.2.3]
        at org.jvnet.jax_ws_commons.spring.SpringService.getObject(SpringService.java:333) [:]
        at org.jvnet.jax_ws_commons.spring.SpringService.getObject(SpringService.java:45) [:]
        at org.springframework.beans.factory.support.FactoryBeanRegistrySupport$1.run(FactoryBeanRegistrySupport.java:121) [:2.5.6]
        ... 93 more

    *** DEPLOYMENTS IN ERROR: Name -> Error
    vfs:///usr/jboss/jboss-6.1.0.Final/server/default/deploy/SpringWS.war -> org.jboss.deployers.spi.DeploymentException: URL file:/usr/jboss/jboss-6.1.0.Final/server/default/tmp/vfs/automountb159aa6e8c1b8582/SpringWS.war-4ec4d0151b4c7d7/ deployment failed

    DEPLOYMENTS IN ERROR:
    Deployment "vfs:///usr/jboss/jboss-6.1.0.Final/server/default/deploy/SpringWS.war" is in error due to the following reason(s): org.jboss.deployers.spi.DeploymentException: URL file:/usr/jboss/jboss-6.1.0.Final/server/default/tmp/vfs/automountb159aa6e8c1b8582/SpringWS.war-4ec4d0151b4c7d7/ deployment failed

    This application works correctly on GlassFish 3.x, where the web service is up and running. I'm using the NetBeans IDE on Ubuntu 12.04 to build and deploy the application, and I can't figure out what the issue is. I suspect it's an interaction between Spring and JBoss, because everything works smoothly on GlassFish. Please help me solve this issue. Thanks all in advance.
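    One direction worth checking (a sketch, not from the original thread): a LinkageError like this usually means the war bundles its own JAXB jars, so javax.xml.datatype.DatatypeConstants gets resolved once by the JDK's bootstrap loader and once by the deployment's BaseClassLoader. Either remove the duplicate jaxb-api/jaxb-impl jars from WEB-INF/lib so the server's copies are used, or isolate the war's class loader with a WEB-INF/jboss-classloading.xml along these lines (this assumes JBoss AS 6's standard classloading deployer; the domain name below is a placeholder):

    <!-- WEB-INF/jboss-classloading.xml (sketch): give the war its own
         class-loading domain so its bundled JAXB classes do not mix with
         the server's copies. Attribute values are illustrative. -->
    <classloading xmlns="urn:jboss:classloading:1.0"
                  domain="SpringWS.war"
                  parent-domain="DefaultDomain"
                  export-all="NON_EMPTY"
                  import-all="true"/>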

    Read the article

  • 26 Days: Countdown to Oracle OpenWorld 2012

    - by Michael Snow
    Welcome to our countdown to Oracle OpenWorld! Oracle OpenWorld 2012 is just around the corner. In less than 26 days, San Francisco will be invaded by an expected 50,000 people from all over the world. Here on the Oracle WebCenter team, we've all been working to help make the experience a great one for all our WebCenter customers. For a sneak peek, we'll be spending this week giving you a teaser of what to look forward to if you are joining us in San Francisco from September 30th through October 4th. We have Oracle WebCenter sessions covering all topics imaginable. Take a look and use the tools we provide to build out your schedule in advance and reserve your seats in your favorite sessions. That gives you plenty of time to plan for your week with us in San Francisco. If, unfortunately, your boss denied your request to attend, there are still some ways that you can join in the experience virtually, on demand.

    This year we are expanding even more, up north of Market Street, and will be taking over Union Square as well. Check out this map of San Francisco to get a sense of how much of a footprint Oracle OpenWorld has grown to this year. With so much to see and so many sessions to learn from, it's no wonder that people get excited. Add to that a good mix of fun and all of the possible WebCenter sessions you could attend, and you won't want to sleep at all, to take full advantage of such an opportunity. We'll also have our annual WebCenter Customer Appreciation reception; stay tuned this week for some more info on registration to make sure you'll be able to join us. If you've been following the America's Cup at all and believe in EXTREME PERFORMANCE, you'll definitely want to take a look at this video from last year's OpenWorld keynote.

    Important OpenWorld Links:
    - Attendee / Presenters Toolkit
    - Oracle Schedule Builder
    - WebCenter Sessions (listed in the catalog under Fusion Middleware as "Portals, Sites, Content, and Collaboration")
    - Oracle Music Festival - AMAZING line up!!
    - Oracle Customer Appreciation Night - LOOK HERE!!
    - Oracle OpenWorld LIVE On-Demand

    Here are all the WebCenter sessions broken down by day for your viewing pleasure.

    Monday, October 1st

    CON8885 - Simplify CRM Engagement with Contextual Collaboration
    Are your sales teams disconnected and disengaged? Do you want a tool for easily connecting expertise across your organization and providing visibility into the complete sales process? Do you want a way to enhance and retain organization knowledge? Oracle Social Network is the answer. Attend this session to learn how to make CRM easy, effective, and efficient for use across virtual sales teams. Also learn how Oracle Social Network can drive sales force collaboration with natural conversations throughout the sales cycle, promote sales team productivity through purposeful social networking without the noise, and build cross-team knowledge by integrating conversations with CRM and other business applications.

    CON8268 - Oracle WebCenter Strategy: Engaging Your Customers. Empowering Your Business
    Oracle WebCenter is a user engagement platform for social business, connecting people and information. Attend this session to learn about the Oracle WebCenter strategy, and understand where Oracle is taking the platform to help companies engage customers, empower employees, and enable partners. Business success starts with ensuring that everyone is engaged with the right people and the right information and can access what they need through the channel of their choice—Web, mobile, or social. Are you giving customers, employees, and partners the best-possible experience? Come learn how you can!

    HOL10208 - Add Social Capabilities to Your Enterprise Applications
    Oracle Social Network enables you to add real-time collaboration capabilities into your enterprise applications, so that conversations can happen directly within your business systems. In this hands-on lab, you will try out the Oracle Social Network product to collaborate with other attendees, using real-time conversations with document sharing capabilities. Next you will embed social capabilities into a sample Web-based enterprise application, using embedded UI components. Experts will also write simple REST-based integrations, using the Oracle Social Network API to programmatically create social interactions.

    CON8893 - Improve Employee Productivity with Intuitive and Social Work Environments
    Social technologies have already transformed the ways customers, employees, partners, and suppliers communicate and stay informed. Forward-thinking organizations today need technologies and infrastructures to help them advance to the next level and integrate social activities with business applications to deliver a user experience that simplifies business processes and enterprise application engagement. Attend this session to hear from an innovative Oracle Social Network customer and learn how you can improve productivity with intuitive and social work environments and empower your employees with innovative social tools to enable contextual access to content and dynamic personalization of solutions.

    CON8270 - Oracle WebCenter Content Strategy and Vision
    Oracle WebCenter provides a strategic content infrastructure for managing documents, images, e-mails, and rich media files. With a single repository, organizations can address any content use case, such as accounts payable, HR onboarding, document management, compliance, records management, digital asset management, or Website management. In this session, learn about future plans for how Oracle WebCenter will address new use cases as well as new integrations with Oracle Fusion Middleware and Oracle Applications, leveraging your investments by making your users more productive and error-free.

    CON8269 - Oracle WebCenter Sites Strategy and Vision
    Oracle's Web experience management solution, Oracle WebCenter Sites, enables organizations to use the online channel to drive customer acquisition and brand loyalty. It helps marketers and business users easily create and manage contextually relevant, social, interactive online experiences across multiple channels on a global scale. In this session, learn about future plans for how Oracle WebCenter Sites will provide you with the tools, capabilities, and integrations you need in order to continue to address your customers' evolving requirements for engaging online experiences and keep moving your business forward.

    CON8896 - Living with SharePoint
    SharePoint is a popular platform, but it's not always the best fit for Oracle customers. In this session, you'll discover the technical and nontechnical limitations and pitfalls of SharePoint and learn about Oracle alternatives for collaboration, portals, enterprise and Web content management, social computing, and application integration. The presentation shows you how to integrate with SharePoint when business or IT requirements dictate and covers cloud-based (Office 365) and on-premises versions of SharePoint. Presented by a former Microsoft director of SharePoint product management and backed by independent customer research, this session will prepare you to answer the question "Why don't we just use SharePoint for that?" the next time it comes up in your organization.

    CON7843 - Content-Enabling Enterprise Processes with Oracle WebCenter
    Organizations today continually strive to automate business processes, reduce costs, and improve efficiency. Many business processes are content-intensive and unstructured, requiring ad hoc collaboration, and distributed in nature, requiring many approvals and generating huge volumes of paper. In this session, learn how Oracle and SYSTIME have partnered to help a customer content-enable its enterprise with Oracle WebCenter Content and Oracle WebCenter Imaging 11g and integrate them with Oracle Applications.

    CON6114 - Tape Robotics' Newest Superhero: Now Fueled by Oracle Software
    For small, midsize, and rapidly growing businesses that want the most energy-efficient, scalable storage infrastructure to meet their rapidly growing data demands, Oracle's most recent addition to its award-winning tape portfolio leverages several pieces of Oracle software. With Oracle Linux, Oracle WebLogic, and Oracle Fusion Middleware tools, the library achieves a higher level of usability than previous products while offering customers a familiar interface for management, plus ease of use. This session examines the competitive advantages of the tape library and how Oracle software raises customer satisfaction. Learn how the combination of Oracle engineered systems, Oracle Secure Backup, and Oracle's StorageTek tape libraries provide end-to-end coverage of your data.

    CON9437 - Mobile Access Management
    With more than five billion mobile devices on the planet and an increasing number of users using their own devices to access corporate data and applications, securely extending identity management to mobile devices has become a hot topic. This session focuses on how to extend your existing identity management infrastructure and policies to securely and seamlessly enable mobile user access.

    CON7815 - Customer Experience Online in Cloud: Oracle WebCenter Sites, Oracle ATG Apps, Oracle Exalogic
    Oracle WebCenter Sites and Oracle's ATG product line together can provide a compelling marketing and e-commerce experience. When you couple them with the extreme performance of Oracle Exalogic, you'll see unmatched scalability that provides you with a true cloud-based solution. In this session, you'll learn how running Oracle WebCenter Sites and ATG applications on Oracle Exalogic delivers both a private and a public cloud experience. Find out what it takes to get these systems working together and delivering engaging Web experiences. Even if you aren't considering Oracle Exalogic today, the rich Web experience of Oracle WebCenter, paired with the depth of the ATG product line, can provide your business full support, from merchandising through sale completion.

    CON8271 - Oracle WebCenter Portal Strategy and Vision
    To innovate and keep a competitive edge, organizations need to leverage the power of agile and responsive Web applications. Oracle WebCenter Portal enables you to do just that, by delivering intuitive user experiences for enterprise applications to drive innovation with composite applications and mashups. Attend this session to learn firsthand from customers how Oracle WebCenter Portal extends the value of existing enterprise applications, business processes, and content; delivers a superior business user experience; and maximizes limited IT resources.

    CON8880 - The Connected Customer Experience Begins with the Online Channel
    There's a lot of talk these days about how to connect the customer journey across various touchpoints—from Websites and e-commerce to call centers and in-store—to provide experiences that are more relevant and engaging and ultimately gain competitive edge. Doing it all at once isn't a realistic objective, so where do you start? Come to this session, and hear about three steps you can take that can help you begin your journey toward delivering the connected customer experience. You'll hear how Oracle now has an integrated digital marketing platform for your corporate Website, your e-commerce site, your self-service portal, and your marketing and loyalty campaigns, and you'll learn what you can do today to begin executing on your customer experience initiatives.

    GEN11451 - General Session: Building Mobile Applications with Oracle Cloud
    With the prevalence of smart mobile devices, companies are facing an increased demand to provide access to data and applications from new channels. However, developing applications for mobile devices poses some unique challenges. Come to this session to learn how Oracle addresses these challenges, offering a simpler way to develop and deploy cross-device mobile applications. See how Oracle Cloud enables you to access applications, data, and services from mobile channels in an easier way.

    CON8272 - Oracle Social Network Strategy and Vision
    One key way of increasing employee productivity is by bringing people, processes, and information together—providing new social capabilities to enable business users to quickly correspond and collaborate on business activities. Oracle WebCenter provides a user engagement platform with social and collaborative technologies to empower business users to focus on their key business processes, applications, and content in the context of their role and process. Attend this session to hear how the latest social capabilities in Oracle Social Network are enabling organizations to transform themselves into social businesses.

    Tuesday, October 2nd

    HOL10194 - Enterprise Content Management Simplified: Oracle WebCenter Content's Next-Generation UI
    Regardless of the nature of your business, unstructured content underpins many of its daily functions. Whether you are working with traditional presentations, spreadsheets, or text documents—or even with digital assets such as images and multimedia files—your content needs to be accessible and manageable in convenient and intuitive ways to make working with the content easier. Additionally, you need the ability to easily share documents with coworkers to facilitate a collaborative working environment. Come to this session to see how Oracle WebCenter Content's next-generation user interface helps modern knowledge workers easily manage personal and enterprise documents in a collaborative environment.

    CON8877 - Develop a Mobile Strategy with Oracle WebCenter: Engage Customers, Employees, and Partners
    Mobile technology has gone from nice-to-have to a cornerstone of user engagement. Mobile access enables users to have information available at their fingertips, enabling them to take action the moment they make a decision, interact in the moment of convenience, and take advantage of new service offerings in their preferred channels. All your employees have your mobile applications in their pocket; now what are you going to do? It is a critical step for companies to think through what their employees, customers, and partners really need on their devices. Attend this session to see how Oracle WebCenter enables you to better engage your customers, employees, and partners by providing a unified experience across multiple channels.

    CON9447 - Enabling Access for Hundreds of Millions of Users
    How do you grow your business by identifying, authenticating, authorizing, and federating users on the Web, leveraging social identity and the open source OAuth protocol? How do you scale your access management solution to support hundreds of millions of users? With social identity support out of the box, Oracle's access management solution is also benchmarked for 250-million-user deployment according to real-world customer scenarios. In this session, you will learn about the social identity capability and the 250-million-user benchmark testing of Oracle Access Manager and Oracle Adaptive Access Manager running on Oracle Exalogic and Oracle Exadata.

    HOL10207 - Build an Intranet Portal with Oracle WebCenter
    In this hands-on lab, you'll work with Oracle WebCenter Portal and Oracle WebCenter Content to build out an enterprise portal that maximizes the productivity of teams and individual contributors. Using browser-based tools, you'll manage site resources such as page styles, templates, and navigation. You'll edit content stored in Oracle WebCenter Content directly from your portal. You'll also experience the latest features that promote collaboration, social networking, and personal productivity.

    CON2906 - Get Proactive: Best Practices for Maintaining Oracle Fusion Middleware
    You chose Oracle Fusion Middleware products to help your organization deliver superior business results. Now learn how to take full advantage of your software with all the great tools, resources, and product updates you're entitled to through Oracle Support. In this session, Oracle product experts provide proven best practices to help you work more efficiently, plan and prepare for upgrades and patching more effectively, and manage risk. Topics include configuration management tools, remote diagnostics, My Oracle Support Community, and My Oracle Support Lifecycle Advisors. New users and Oracle Fusion Middleware experts alike are guaranteed to leave with fresh ideas and practical, easy-to-implement next steps.

    CON8878 - Oracle WebCenter's Cloud Strategy: From Social and Platform Services to Mashups
    Cloud computing represents a paradigm shift in how we build applications, automate processes, collaborate, and share and in how we secure our enterprise. Additionally, as you adopt cloud-based services in your organization, it's likely that you will still have many critical on-premises applications running. With these mixed environments, multiple user interfaces, different security, and multiple datasources and content sources, how do you start evolving your strategy to account for these challenges? Oracle WebCenter offers a complete array of technologies enabling you to solve these challenges and prepare you for the cloud. Attend this session to learn how you can use Oracle WebCenter in the cloud as well as create on-premises and cloud application mash-ups.

    CON8901 - Optimize Enterprise Business Processes with Oracle WebCenter and Oracle BPM
    Do you have business processes that span multiple applications? Are you grappling with how to have visibility across these business processes; how to manage content that is associated with these processes; and, most importantly, how to model and optimize these business processes? Attend this session to hear how Oracle WebCenter and Oracle Business Process Management provide a unique set of integrated solutions to provide a composite application dashboard across these business processes and offer a solution for content-centric business processes.

    CON8883 - Deliver Engaging Interfaces to Oracle Applications with Oracle WebCenter
    Critical business processes live within enterprise applications, and application users need to manage and execute these processes as effectively as possible. Oracle provides a comprehensive user engagement platform to increase user productivity and optimize overall processes within Oracle Applications—Oracle E-Business Suite and Oracle's Siebel, PeopleSoft, and JD Edwards product families—and third-party applications. Attend this session to learn how you can integrate these applications with Oracle WebCenter to deliver composite application dashboards to your end users—whether they are your customers, partners, or employees—for enhanced usability and Web 2.0–enabled enterprise portals.

    Wednesday, October 3rd

    CON8895 - Future-Ready Intranets: How Aramark Re-engineered the Application Landscape
    There are essential techniques and technologies you can use to deliver employee portals that garner higher productivity, improve business efficiency, and increase user engagement. Attend this session to learn how you can leverage Oracle WebCenter Portal as a user engagement platform for bringing together business process management, enterprise content management, and business intelligence into a highly relevant and integrated experience. Hear how Aramark has leveraged Oracle WebCenter Portal and Oracle WebCenter Content to deliver a unified workspace providing simpler navigation and processing, consolidation of tools, easy access to information, integrated search, and single sign-on.

    CON8886 - Content Consolidation: Save Money, Increase Efficiency, and Eliminate Silos
    Organizations are looking for ways to save money and be more efficient. With content in many different places, it's difficult to know where to look for a document and whether the document is the most current version. With Oracle WebCenter, content can be consolidated into one best-of-breed repository that is secure, scalable, and integrated with your business processes and applications. Users can find the content they need, where they need it, and ensure that it is the right content. This session covers content challenges that affect your business; content consolidation that can lead to savings in storage and administration costs and can lower risks; and how companies are realizing savings.

    CON8911 - Improve Online Experiences for Customers and Partners with Self-Service Portals
    Are you able to provide your customers and partners an easy-to-use online self-service experience? Are you processing high-volume transactions and struggling with call center bottlenecks or back-end systems that won't integrate, causing order delays and customer frustration? Are you looking to target content such as product and service offerings to your end users? This session shares approaches to providing targeted delivery as well as strategies and best practices for transforming your business by providing an intuitive user experience for your customers and partners.

    CON6156 - Top 10 Ways to Integrate Oracle WebCenter Content
    This session covers 10 common ways to integrate Oracle WebCenter Content with other enterprise applications and middleware. It discusses out-of-the-box modules that provide expanded features in Oracle WebCenter Content—such as enterprise search, SOA, and BPEL—as well as developer tools you can use to create custom integrations. The presentation also gives guidance on which integration option may work best in your environment.

    HOL10207 - Build an Intranet Portal with Oracle WebCenter
    In this hands-on lab, you'll work with Oracle WebCenter Portal and Oracle WebCenter Content to build out an enterprise portal that maximizes the productivity of teams and individual contributors. Using browser-based tools, you'll manage site resources such as page styles, templates, and navigation. You'll edit content stored in Oracle WebCenter Content directly from your portal. You'll also experience the latest features that promote collaboration, social networking, and personal productivity.

    CON7817 - Migration to Oracle WebCenter Imaging 11g
    Customers today continually strive to automate business processes, reduce costs, and improve efficiency. The accounts payable process—which is often distributed in nature, requires many approvals, and generates huge volumes of paper invoices—is automated by many customers. In this session, learn how Oracle and SYSTIME have partnered to help a customer migrate its existing Oracle Imaging and Process Management Release 7.6 to the latest Oracle WebCenter Imaging 11g and integrate it with Oracle's JD Edwards family of products.

    CON8910 - How to Engage Customers Across Web, Mobile, and Social Channels
    Whether on desktops at the office, on tablets at home, or on mobile phones when on the go, today's customers are always connected. To engage today's customers, you need to make the online customer experience connected and consistent across a host of devices and multiple channels, including Web, mobile, and social networks. Managing this multichannel environment can result in lots of headaches without the right tools. Attend this session to learn how Oracle WebCenter Sites solves the challenge of multichannel customer engagement.

    HOL10206 - Oracle WebCenter Sites 11g: Transforming the Content Contributor Experience
    Oracle WebCenter Sites 11g makes it easy for marketers and business users to contribute to and manage Websites with the new visual, contextual, and intuitive Web authoring interface. In this hands-on lab, you will create and manage content for a sports-themed Website, using many of the new and enhanced features of the 11g release.

    CON8900 - Building Next-Generation Portals: An Interactive Customer Panel Discussion
    Social and collaborative technologies have changed how people interact, learn, and collaborate, and providing a modern, social Web presence is imperative to remain competitive in today's market. Can your business benefit from a more collaborative and interactive portal environment for employees, customers, and partners? Attend this session to hear from Oracle WebCenter Portal customers as they share their strategies and best practices for providing users with a modern experience that adapts to their needs and includes personalized access to content in context. The panel also addresses how customers have benefited from creating next-generation portals by migrating from older portal technologies to Oracle WebCenter Portal.

    CON9625 - Taking Control of Oracle WebCenter Security
    Organizations are increasingly looking to extend their Oracle WebCenter portal for social business, to serve external users and provide seamless access to the right information. In particular, many organizations are extending Oracle WebCenter in a business-to-business scenario requiring secure identification and authorization of business partners and their users. This session focuses on how customers are leveraging, securing, and providing access control to Oracle WebCenter portal and mobile solutions. You will learn best practices and hear real-world examples of how to provide flexible and granular access control for Oracle WebCenter deployments, using Oracle Platform Security Services and Oracle Access Management Suite product offerings.

    CON8891 - Extending Social into Enterprise Applications and Business Processes
    Oracle Social Network is an extensible social platform that enables contextual collaboration within enterprise applications and business processes, providing relevant data from across various enterprise systems in one place. Attend this session to see how an Oracle Social Network customer is integrating multiple applications—such as CRM, HCM, and business processes—into Oracle Social Network and Oracle WebCenter to enable individuals and teams to solve complex cross-organizational business problems more effectively by utilizing the social enterprise.

    Thursday, October 4th

    CON8899 - Becoming a Social Business: Stories from the Front Lines of Change
    What does it really mean to be a social business? How can you change your organization to embrace social approaches? What pitfalls do you need to avoid? In this lively panel discussion, customer and industry thought leaders in social business explore these topics and more as they share their stories of the good, the bad, and the ugly that can happen when embracing social methods and technologies to improve business success. Using moderated questions and open Q&A from the audience, the panel discusses vital topics such as the critical factors for success, the major issues to avoid, how to gain senior executive support for social efforts, how to handle undesired behavior, and how to measure business impact. It takes a thought-provoking look at becoming a social business from the inside.

    CON6851 - Oracle WebCenter and Oracle Business Intelligence Enterprise Edition to Create Vendor Portals
    Large manufacturers of grocery items routinely find themselves depending on the inventory management expertise of their wholesalers and distributors. Inventory costs can be managed more efficiently by the manufacturers if they have better insight into the inventory levels of items carried by their distributors. This creates a unique opportunity for distributors and wholesalers to leverage this knowledge into a revenue-generating subscription service. Oracle Business Intelligence Enterprise Edition and Oracle WebCenter Portal play a key part in enabling creation of business-managed business intelligence portals for vendors. This session discusses one customer that implemented this by leveraging Oracle WebCenter and Oracle Business Intelligence Enterprise Edition.

    CON8879 - Provide a Personalized and Consistent Customer Experience in Your Websites and Portals
    Your customers engage with your company online in different ways throughout their journey—from prospecting by acquiring information on your corporate Website to transacting through self-service applications on your customer portal—and then the cycle begins again when they look for new products and services. Ensuring that the customer experience is consistent and personalized across online properties—from branding and content to interactions and transactions—can be a daunting task. Oracle WebCenter enables you to speak and interact with your customers with one voice across your Websites and portals by providing an integrated platform for delivery of self-service and engagement that unifies and personalizes the online experience. Learn more in this session.

    CON8898 - Land Mines, Potholes, and Dirt Roads: Navigating the Way to ECM Nirvana
    Ten years ago, people were predicting that by this time in history, we'd be some kind of utopian paperless society. As we all know, we're not there yet, but are we getting closer? What is keeping companies from driving down the road to enterprise content management bliss? Most people understand that using ECM as a central platform enables organizations to expedite document-centric processes, but most business processes in organizations are still heavily paper-based. Many of these processes could be automated and improved with an ECM platform infrastructure. In this panel discussion, you'll hear from Oracle WebCenter customers that have already solved some of these challenges as they share their strategies for success and roads to avoid along your journey.

    CON8908 - Oracle WebCenter Portal: Creating and Using Content Presenter Templates
    Oracle WebCenter Portal applications use task flows to display and integrate content stored in the Oracle WebCenter Content server. Among the most flexible task flows is Content Presenter, which renders various types of content on an Oracle WebCenter Portal page. Although Oracle WebCenter Portal comes with a set of predefined Content Presenter templates, developers can create their own templates for specific rendering needs. This session shows the lifecycle of developing Content Presenter task flows, including how to create, package, import, modify at runtime, and use such templates. In addition to simple examples with Oracle Application Development Framework (Oracle ADF) UI elements to render the content, it shows how to use other UI technologies, CSS files, and JavaScript libraries.

    CON8897 - Using Web Experience Management to Drive Online Marketing Success
    Every year, the online channel becomes more imperative for driving organizational top-line revenue, but for many companies, mastering how to best market their products and services in a fast-evolving online world with high customer expectations for personalized experiences can be a complex proposition. Come to this panel discussion, and hear directly from online marketers how they are succeeding today by using Web experience management to drive marketing success, using capabilities such as targeting and optimization, user-generated content, mobile site publishing, and site visitor personalization to deliver engaging online experiences.

    CON8892 - Oracle's Journey to Social Business
    Social business is a revolution, one that is causing rapidly accelerating change in how companies and customers engage with one another and how employees work together. Oracle's goal in becoming a social business is to create a socially connected organization in which working collaboratively across geographical locations, lines of business, and management chains is second nature, enabling innovative solutions to business challenges. We can achieve this by connecting the right people, finding the right content, communicating with the right people, collaborating at the right time, and building the right communities in the right context—all ready in the CLOUD. Attend this session to see how Oracle is transforming itself into a social business.

    If you've read all the way to the end here, we are REALLY looking forward to seeing you in San Francisco.

    Read the article

  • Introduction to the ASP.NET Web API

    - by Stephen.Walther
    I am a huge fan of Ajax. If you want to create a great experience for the users of your website – regardless of whether you are building an ASP.NET MVC or an ASP.NET Web Forms site — then you need to use Ajax. Otherwise, you are just being cruel to your customers. We use Ajax extensively in several of the ASP.NET applications that my company, Superexpert.com, builds. We expose data from the server as JSON and use jQuery to retrieve and update that data from the browser. One challenge, when building an ASP.NET website, is deciding on which technology to use to expose JSON data from the server. For example, how do you expose a list of products from the server as JSON so you can retrieve the list of products with jQuery? You have a number of options (too many options) including ASMX Web services, WCF Web Services, ASHX Generic Handlers, WCF Data Services, and MVC controller actions. Fortunately, the world has just been simplified. With the release of ASP.NET 4 Beta, Microsoft has introduced a new technology for exposing JSON from the server named the ASP.NET Web API. You can use the ASP.NET Web API with both ASP.NET MVC and ASP.NET Web Forms applications. The goal of this blog post is to provide you with a brief overview of the features of the new ASP.NET Web API. You learn how to use the ASP.NET Web API to retrieve, insert, update, and delete database records with jQuery. We also discuss how you can perform form validation when using the Web API and use OData when using the Web API. Creating an ASP.NET Web API Controller The ASP.NET Web API exposes JSON data through a new type of controller called an API controller. You can add an API controller to an existing ASP.NET MVC 4 project through the standard Add Controller dialog box. Right-click your Controllers folder and select Add, Controller. In the dialog box, name your controller MovieController and select the Empty API controller template: A brand new API controller looks like this: using System; using System.Collections.Generic; using System.Linq; using System.Net.Http; using System.Web.Http; namespace MyWebAPIApp.Controllers { public class MovieController : ApiController { } } An API controller, unlike a standard MVC controller, derives from the base ApiController class instead of the base Controller class. Using jQuery to Retrieve, Insert, Update, and Delete Data Let’s create an Ajaxified Movie Database application. We’ll retrieve, insert, update, and delete movies using jQuery with the MovieController which we just created. Our Movie model class looks like this: namespace MyWebAPIApp.Models { public class Movie { public int Id { get; set; } public string Title { get; set; } public string Director { get; set; } } } Our application will consist of a single HTML page named Movies.html. We’ll place all of our jQuery code in the Movies.html page. Getting a Single Record with the ASP.NET Web API To support retrieving a single movie from the server, we need to add a Get method to our API controller: using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Net.Http; using System.Web.Http; using MyWebAPIApp.Models; namespace MyWebAPIApp.Controllers { public class MovieController : ApiController { public Movie GetMovie(int id) { // Return movie by id if (id == 1) { return new Movie { Id = 1, Title = "Star Wars", Director = "Lucas" }; } // Otherwise, movie was not found throw new HttpResponseException(HttpStatusCode.NotFound); } } } In the code above, the GetMovie() method accepts the Id of a movie. 
If the Id has the value 1 then the method returns the movie Star Wars. Otherwise, the method throws an exception and returns a 404 Not Found HTTP status code. After building your project, you can invoke the MovieController.GetMovie() method by entering the following URL in your web browser address bar: http://localhost:[port]/api/movie/1 (You’ll need to enter the correct randomly generated port). In the URL api/movie/1, the first “api” segment indicates that this is a Web API route. The “movie” segment indicates that the MovieController should be invoked. You do not specify the name of the action. Instead, the HTTP method used to make the request – GET, POST, PUT, DELETE — is used to identify the action to invoke. The ASP.NET Web API uses different routing conventions than normal ASP.NET MVC controllers. When you make an HTTP GET request then any API controller method with a name that starts with “GET” is invoked. So, we could have called our API controller action GetPopcorn() instead of GetMovie() and it would still be invoked by the URL api/movie/1. The default route for the Web API is defined in the Global.asax file and it looks like this: routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "api/{controller}/{id}", defaults: new { id = RouteParameter.Optional } ); We can invoke our GetMovie() controller action with the jQuery code in the following HTML page: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Get Movie</title> </head> <body> <div> Title: <span id="title"></span> </div> <div> Director: <span id="director"></span> </div> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> getMovie(1, function (movie) { $("#title").html(movie.Title); $("#director").html(movie.Director); }); function getMovie(id, callback) { $.ajax({ url: "/api/Movie", data: { id: id }, type: "GET", contentType: "application/json;charset=utf-8", statusCode: { 200: function (movie) { callback(movie); }, 404: function () { alert("Not Found!"); } } }); } </script> </body> </html> In the code above, the jQuery $.ajax() method is used to invoke the GetMovie() method. Notice that the Ajax call handles two HTTP response codes. When the GetMovie() method successfully returns a movie, the method returns a 200 status code. In that case, the details of the movie are displayed in the HTML page. Otherwise, if the movie is not found, the GetMovie() method returns a 404 status code. In that case, the page simply displays an alert box indicating that the movie was not found (hopefully, you would implement something more graceful in an actual application). You can use your browser’s Developer Tools to see what is going on in the background when you open the HTML page (hit F12 in the most recent version of most browsers). For example, you can use the Network tab in Google Chrome to see the Ajax request which invokes the GetMovie() method: Getting a Set of Records with the ASP.NET Web API Let’s modify our Movie API controller so that it returns a collection of movies. 
The following Movie controller has a new ListMovies() method which returns a (hard-coded) collection of movies: using System; using System.Collections.Generic; using System.Linq; using System.Net; using System.Net.Http; using System.Web.Http; using MyWebAPIApp.Models; namespace MyWebAPIApp.Controllers { public class MovieController : ApiController { public IEnumerable<Movie> ListMovies() { return new List<Movie> { new Movie {Id=1, Title="Star Wars", Director="Lucas"}, new Movie {Id=2, Title="King Kong", Director="Jackson"}, new Movie {Id=3, Title="Memento", Director="Nolan"} }; } } } Because we named our action ListMovies(), the default Web API route will never match it. Therefore, we need to add the following custom route to our Global.asax file (at the top of the RegisterRoutes() method): routes.MapHttpRoute( name: "ActionApi", routeTemplate: "api/{controller}/{action}/{id}", defaults: new { id = RouteParameter.Optional } ); This route enables us to invoke the ListMovies() method with the URL /api/movie/listmovies. Now that we have exposed our collection of movies from the server, we can retrieve and display the list of movies using jQuery in our HTML page: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>List Movies</title> </head> <body> <div id="movies"></div> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> listMovies(function (movies) { var strMovies=""; $.each(movies, function (index, movie) { strMovies += "<div>" + movie.Title + "</div>"; }); $("#movies").html(strMovies); }); function listMovies(callback) { $.ajax({ url: "/api/Movie/ListMovies", data: {}, type: "GET", contentType: "application/json;charset=utf-8", }).then(function(movies){ callback(movies); }); } </script> </body> </html> Inserting a Record with the ASP.NET Web API Now let’s modify our Movie API controller so it supports creating new records: public HttpResponseMessage<Movie> PostMovie(Movie movieToCreate) { // Add movieToCreate to the database and update primary key movieToCreate.Id = 23; // Build a response that contains the location of the new movie var response = new HttpResponseMessage<Movie>(movieToCreate, HttpStatusCode.Created); var relativePath = "/api/movie/" + movieToCreate.Id; response.Headers.Location = new Uri(Request.RequestUri, relativePath); return response; } The PostMovie() method in the code above accepts a movieToCreate parameter. We don’t actually store the new movie anywhere. In real life, you will want to call a service method to store the new movie in a database. When you create a new resource, such as a new movie, you should return the location of the new resource. In the code above, the URL where the new movie can be retrieved is assigned to the Location header returned in the PostMovie() response. Because the name of our method starts with “Post”, we don’t need to create a custom route. The PostMovie() method can be invoked with the URL /api/Movie – just as long as the method is invoked within the context of an HTTP POST request. The following HTML page invokes the PostMovie() method. 
<!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Create Movie</title> </head> <body> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> var movieToCreate = { title: "The Hobbit", director: "Jackson" }; createMovie(movieToCreate, function (newMovie) { alert("New movie created with an Id of " + newMovie.Id); }); function createMovie(movieToCreate, callback) { $.ajax({ url: "/api/Movie", data: JSON.stringify( movieToCreate ), type: "POST", contentType: "application/json;charset=utf-8", statusCode: { 201: function (newMovie) { callback(newMovie); } } }); } </script> </body> </html> This page creates a new movie (The Hobbit) by calling the createMovie() method. The page simply displays the Id of the new movie: The HTTP POST operation is performed with the following call to the jQuery $.ajax() method: $.ajax({ url: "/api/Movie", data: JSON.stringify( movieToCreate ), type: "POST", contentType: "application/json;charset=utf-8", statusCode: { 201: function (newMovie) { callback(newMovie); } } }); Notice that the type of Ajax request is a POST request. This is required to match the PostMovie() method. Notice, furthermore, that the new movie is converted into JSON using JSON.stringify(). The JSON.stringify() method takes a JavaScript object and converts it into a JSON string. Finally, notice that success is represented with a 201 status code. The HttpStatusCode.Created value returned from the PostMovie() method corresponds to a 201 status code. Updating a Record with the ASP.NET Web API Here’s how we can modify the Movie API controller to support updating an existing record. In this case, we need to create a PUT method to handle an HTTP PUT request: public void PutMovie(Movie movieToUpdate) { if (movieToUpdate.Id == 1) { // Update the movie in the database return; } // If you can't find the movie to update throw new HttpResponseException(HttpStatusCode.NotFound); } Unlike our PostMovie() method, the PutMovie() method does not return a result. The action either updates the database or, if the movie cannot be found, returns an HTTP status code of 404. The following HTML page illustrates how you can invoke the PutMovie() method: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Put Movie</title> </head> <body> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> var movieToUpdate = { id: 1, title: "The Hobbit", director: "Jackson" }; updateMovie(movieToUpdate, function () { alert("Movie updated!"); }); function updateMovie(movieToUpdate, callback) { $.ajax({ url: "/api/Movie", data: JSON.stringify(movieToUpdate), type: "PUT", contentType: "application/json;charset=utf-8", statusCode: { 200: function () { callback(); }, 404: function () { alert("Movie not found!"); } } }); } </script> </body> </html> Deleting a Record with the ASP.NET Web API Here’s the code for deleting a movie: public HttpResponseMessage DeleteMovie(int id) { // Delete the movie from the database // Return status code return new HttpResponseMessage(HttpStatusCode.NoContent); } This method simply deletes the movie (well, not really, but pretend that it does) and returns a No Content status code (204). 
The following page illustrates how you can invoke the DeleteMovie() action: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Delete Movie</title> </head> <body> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> deleteMovie(1, function () { alert("Movie deleted!"); }); function deleteMovie(id, callback) { $.ajax({ url: "/api/Movie", data: JSON.stringify({id:id}), type: "DELETE", contentType: "application/json;charset=utf-8", statusCode: { 204: function () { callback(); } } }); } </script> </body> </html> Performing Validation How do you perform form validation when using the ASP.NET Web API? Because validation in ASP.NET MVC is driven by the Default Model Binder, and because the Web API uses the Default Model Binder, you get validation for free. Let’s modify our Movie class so it includes some of the standard validation attributes: using System.ComponentModel.DataAnnotations; namespace MyWebAPIApp.Models { public class Movie { public int Id { get; set; } [Required(ErrorMessage="Title is required!")] [StringLength(5, ErrorMessage="Title cannot be more than 5 characters!")] public string Title { get; set; } [Required(ErrorMessage="Director is required!")] public string Director { get; set; } } } In the code above, the Required validation attribute is used to make both the Title and Director properties required. The StringLength attribute is used to require the length of the movie title to be no more than 5 characters. Now let’s modify our PostMovie() action to validate a movie before adding the movie to the database: public HttpResponseMessage PostMovie(Movie movieToCreate) { // Validate movie if (!ModelState.IsValid) { var errors = new JsonArray(); foreach (var prop in ModelState.Values) { if (prop.Errors.Any()) { errors.Add(prop.Errors.First().ErrorMessage); } } return new HttpResponseMessage<JsonValue>(errors, HttpStatusCode.BadRequest); } // Add movieToCreate to the database and update primary key movieToCreate.Id = 23; // Build a response that contains the location of the new movie var response = new HttpResponseMessage<Movie>(movieToCreate, HttpStatusCode.Created); var relativePath = "/api/movie/" + movieToCreate.Id; response.Headers.Location = new Uri(Request.RequestUri, relativePath); return response; } If ModelState.IsValid has the value false then the errors in model state are copied to a new JSON array. Each property – such as the Title and Director property — can have multiple errors. In the code above, only the first error message is copied over. The JSON array is returned with a Bad Request status code (400 status code). 
The following HTML page illustrates how you can invoke our modified PostMovie() action and display any error messages: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Create Movie</title> </head> <body> <script type="text/javascript" src="Scripts/jquery-1.6.2.min.js"></script> <script type="text/javascript"> var movieToCreate = { title: "The Hobbit", director: "" }; createMovie(movieToCreate, function (newMovie) { alert("New movie created with an Id of " + newMovie.Id); }, function (errors) { var strErrors = ""; $.each(errors, function(index, err) { strErrors += "*" + err + "\n"; }); alert(strErrors); } ); function createMovie(movieToCreate, success, fail) { $.ajax({ url: "/api/Movie", data: JSON.stringify(movieToCreate), type: "POST", contentType: "application/json;charset=utf-8", statusCode: { 201: function (newMovie) { success(newMovie); }, 400: function (xhr) { var errors = JSON.parse(xhr.responseText); fail(errors); } } }); } </script> </body> </html> The createMovie() function performs an Ajax request and handles either a 201 or a 400 status code from the response. If a 201 status code is returned then there were no validation errors and the new movie was created. If, on the other hand, a 400 status code is returned then there was a validation error. The validation errors are retrieved from the XMLHttpRequest responseText property. The error messages are displayed in an alert: (Please don’t use JavaScript alert dialogs to display validation errors, I just did it this way out of pure laziness) This validation code in our PostMovie() method is pretty generic; there is nothing about it that is specific to the PostMovie() method. In the following video, Jon Galloway demonstrates how to create a global Validation filter which can be used with any API controller action: http://www.asp.net/web-api/overview/web-api-routing-and-actions/video-custom-validation His validation filter looks like this: using System.Json; using System.Linq; using System.Net; using System.Net.Http; using System.Web.Http.Controllers; using System.Web.Http.Filters; namespace MyWebAPIApp.Filters { public class ValidationActionFilter:ActionFilterAttribute { public override void OnActionExecuting(HttpActionContext actionContext) { var modelState = actionContext.ModelState; if (!modelState.IsValid) { dynamic errors = new JsonObject(); foreach (var key in modelState.Keys) { var state = modelState[key]; if (state.Errors.Any()) { errors[key] = state.Errors.First().ErrorMessage; } } actionContext.Response = new HttpResponseMessage<JsonValue>(errors, HttpStatusCode.BadRequest); } } } } And you can register the validation filter in the Application_Start() method in the Global.asax file like this: GlobalConfiguration.Configuration.Filters.Add(new ValidationActionFilter()); After you register the Validation filter, validation error messages are returned from any API controller action method automatically when validation fails. You don’t need to add any special logic to any of your API controller actions to take advantage of the filter. Querying using OData The OData protocol is an open protocol created by Microsoft which enables you to perform queries over the web. The official website for OData is located here: http://odata.org For example, here are some of the query options which you can use with OData: · $orderby – Enables you to retrieve results in a certain order. · $top – Enables you to retrieve a certain number of results. 
· $skip – Enables you to skip over a certain number of results (use with $top for paging). · $filter – Enables you to filter the results returned. The ASP.NET Web API supports a subset of the OData protocol. You can use all of the query options listed above when interacting with an API controller. The only requirement is that the API controller action returns its data as IQueryable. For example, the following Movie controller has an action named GetMovies() which returns an IQueryable of movies: public IQueryable<Movie> GetMovies() { return new List<Movie> { new Movie {Id=1, Title="Star Wars", Director="Lucas"}, new Movie {Id=2, Title="King Kong", Director="Jackson"}, new Movie {Id=3, Title="Willow", Director="Lucas"}, new Movie {Id=4, Title="Shrek", Director="Smith"}, new Movie {Id=5, Title="Memento", Director="Nolan"} }.AsQueryable(); } If you enter the following URL in your browser: /api/movie?$top=2&$orderby=Title then you will limit the movies returned to the top 2 in order of the movie Title. You will get the following results: By using the $top option in combination with the $skip option, you can enable client-side paging. For example, you can use $top and $skip to page through thousands of products, 10 products at a time. The $filter query option is very powerful. You can use this option to filter the results from a query. Here are some examples: Return every movie directed by Lucas: /api/movie?$filter=Director eq 'Lucas' Return every movie which has a title which starts with ‘S’: /api/movie?$filter=startswith(Title,'S') Return every movie which has an Id greater than 2: /api/movie?$filter=Id gt 2 The complete documentation for the $filter option is located here: http://www.odata.org/developers/protocols/uri-conventions#FilterSystemQueryOption Summary The goal of this blog entry was to provide you with an overview of the new ASP.NET Web API introduced with the ASP.NET MVC 4 Beta. In this post, I discussed how you can retrieve, insert, update, and delete data by using jQuery with the Web API. I also discussed how you can use the standard validation attributes with the Web API. You learned how to return validation error messages to the client and display the error messages using jQuery. Finally, we briefly discussed how the ASP.NET Web API supports the OData protocol. For example, you learned how to filter records returned from an API controller action by using the $filter query option. I’m excited about the new Web API. This is a feature which I expect to use with almost every ASP.NET application which I build in the future.
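As a brief addendum to the OData discussion above, the following jQuery sketch shows how the $orderby, $skip and $top options could be combined for the client-side paging scenario mentioned in the article. It assumes the GetMovies() action and the routes shown earlier; the getMoviePage() helper and its pageIndex/pageSize parameters are illustrative names, not part of the original article.

function getMoviePage(pageIndex, pageSize, callback) {
    // Build the OData query string by hand so the $-prefixed options stay readable
    var query = "$orderby=Title&$skip=" + (pageIndex * pageSize) + "&$top=" + pageSize;
    $.ajax({
        url: "/api/Movie?" + query,
        type: "GET"
    }).then(function (movies) {
        callback(movies);
    });
}

// Usage: retrieve the second page of 10 movies, ordered by title
getMoviePage(1, 10, function (movies) {
    $.each(movies, function (index, movie) {
        console.log(movie.Title);
    });
});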

    Read the article

  • Understanding and Implementing a Force based graph layout algorithm

    - by zcourts
I'm trying to implement a force-based graph layout algorithm, based on http://en.wikipedia.org/wiki/Force-based_algorithms_(graph_drawing) My first attempt didn't work, so I looked at http://blog.ivank.net/force-based-graph-drawing-in-javascript.html and https://github.com/dhotson/springy I changed my implementation based on what I thought I understood from those two, but I haven't managed to get it right and I'm hoping someone can help. JavaScript isn't my strong point so be gentle... If you're wondering why I'm writing my own: in reality I have no real reason to, I'm just trying to understand how the algorithm is implemented. Especially in my first link, that demo is brilliant. This is what I've come up with: //support function.bind - https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Function/bind#Compatibility if (!Function.prototype.bind) { Function.prototype.bind = function (oThis) { if (typeof this !== "function") { // closest thing possible to the ECMAScript 5 internal IsCallable function throw new TypeError("Function.prototype.bind - what is trying to be bound is not callable"); } var aArgs = Array.prototype.slice.call(arguments, 1), fToBind = this, fNOP = function () {}, fBound = function () { return fToBind.apply(this instanceof fNOP ? this : oThis || window, aArgs.concat(Array.prototype.slice.call(arguments))); }; fNOP.prototype = this.prototype; fBound.prototype = new fNOP(); return fBound; }; } (function() { var lastTime = 0; var vendors = ['ms', 'moz', 'webkit', 'o']; for(var x = 0; x < vendors.length && !window.requestAnimationFrame; ++x) { window.requestAnimationFrame = window[vendors[x]+'RequestAnimationFrame']; window.cancelAnimationFrame = window[vendors[x]+'CancelAnimationFrame'] || window[vendors[x]+'CancelRequestAnimationFrame']; } if (!window.requestAnimationFrame) window.requestAnimationFrame = function(callback, element) { var currTime = new Date().getTime(); var timeToCall = Math.max(0, 16 - (currTime - lastTime)); var id = window.setTimeout(function() { callback(currTime + timeToCall); }, timeToCall); lastTime = currTime + timeToCall; return id; }; if (!window.cancelAnimationFrame) window.cancelAnimationFrame = function(id) { clearTimeout(id); }; }()); function Graph(o){ this.options=o; this.vertices={}; this.edges={};//form {vertexID:{edgeID:edge}} } /** *Adds an edge to the graph. If the verticies in this edge are not already in the *graph then they are added */ Graph.prototype.addEdge=function(e){ //if vertex1 and vertex2 doesn't exist in this.vertices add them if(typeof(this.vertices[e.vertex1])==='undefined') this.vertices[e.vertex1]=new Vertex(e.vertex1); if(typeof(this.vertices[e.vertex2])==='undefined') this.vertices[e.vertex2]=new Vertex(e.vertex2); //add the edge if(typeof(this.edges[e.vertex1])==='undefined') this.edges[e.vertex1]={}; this.edges[e.vertex1][e.id]=e; } /** * Add a vertex to the graph. 
If a vertex with the same ID already exists then * the existing vertex's .data property is replaced with the @param v.data */ Graph.prototype.addVertex=function(v){ if(typeof(this.vertices[v.id])==='undefined') this.vertices[v.id]=v; else this.vertices[v.id].data=v.data; } function Vertex(id,data){ this.id=id; this.data=data?data:{}; //initialize to data.[x|y|z] or generate random number for each this.x = this.data.x?this.data.x:-100 + Math.random()*200; this.y = this.data.y?this.data.y:-100 + Math.random()*200; this.z = this.data.y?this.data.y:-100 + Math.random()*200; //set initial velocity to 0 this.velocity = new Point(0, 0, 0); this.mass=this.data.mass?this.data.mass:Math.random(); this.force=new Point(0,0,0); } function Edge(vertex1ID,vertex2ID){ vertex1ID=vertex1ID?vertex1ID:Math.random() vertex2ID=vertex2ID?vertex2ID:Math.random() this.id=vertex1ID+"->"+vertex2ID; this.vertex1=vertex1ID; this.vertex2=vertex2ID; } function Point(x, y, z) { this.x = x; this.y = y; this.z = z; } Point.prototype.plus=function(p){ this.x +=p.x this.y +=p.y this.z +=p.z } function ForceLayout(o){ this.repulsion = o.repulsion?o.repulsion:200; this.attraction = o.attraction?o.attraction:0.06; this.damping = o.damping?o.damping:0.9; this.graph = o.graph?o.graph:new Graph(); this.total_kinetic_energy =0; this.animationID=-1; } ForceLayout.prototype.draw=function(){ //vertex velocities initialized to (0,0,0) when a vertex is created //vertex positions initialized to random position when created cc=0; do{ this.total_kinetic_energy =0; //for each vertex for(var i in this.graph.vertices){ var thisNode=this.graph.vertices[i]; // running sum of total force on this particular node var netForce=new Point(0,0,0) //for each other node for(var j in this.graph.vertices){ if(thisNode!=this.graph.vertices[j]){ //net-force := net-force + Coulomb_repulsion( this_node, other_node ) netForce.plus(this.CoulombRepulsion( thisNode,this.graph.vertices[j])) } } //for each spring connected to this node for(var k in this.graph.edges[thisNode.id]){ //(this node, node its connected to) //pass id of this node and the node its connected to so hookesattraction //can update the force on both vertices and return that force to be //added to the net force this.HookesAttraction(thisNode.id, this.graph.edges[thisNode.id][k].vertex2 ) } // without damping, it moves forever // this_node.velocity := (this_node.velocity + timestep * net-force) * damping thisNode.velocity.x=(thisNode.velocity.x+thisNode.force.x)*this.damping; thisNode.velocity.y=(thisNode.velocity.y+thisNode.force.y)*this.damping; thisNode.velocity.z=(thisNode.velocity.z+thisNode.force.z)*this.damping; //this_node.position := this_node.position + timestep * this_node.velocity thisNode.x=thisNode.velocity.x; thisNode.y=thisNode.velocity.y; thisNode.z=thisNode.velocity.z; //normalize x,y,z??? 
//total_kinetic_energy := total_kinetic_energy + this_node.mass * (this_node.velocity)^2 this.total_kinetic_energy +=thisNode.mass*((thisNode.velocity.x+thisNode.velocity.y+thisNode.velocity.z)* (thisNode.velocity.x+thisNode.velocity.y+thisNode.velocity.z)) } cc+=1; }while(this.total_kinetic_energy >0.5) console.log(cc,this.total_kinetic_energy,this.graph) this.cancelAnimation(); } ForceLayout.prototype.HookesAttraction=function(v1ID,v2ID){ var a=this.graph.vertices[v1ID] var b=this.graph.vertices[v2ID] var force=new Point(this.attraction*(b.x - a.x),this.attraction*(b.y - a.y),this.attraction*(b.z - a.z)) // hook's attraction a.force.x += force.x; a.force.y += force.y; a.force.z += force.z; b.force.x += this.attraction*(a.x - b.x); b.force.y += this.attraction*(a.y - b.y); b.force.z += this.attraction*(a.z - b.z); return force; } ForceLayout.prototype.CoulombRepulsion=function(vertex1,vertex2){ //http://en.wikipedia.org/wiki/Coulomb's_law // distance squared = ((x1-x2)*(x1-x2)) + ((y1-y2)*(y1-y2)) + ((z1-z2)*(z1-z2)) var distanceSquared = ( (vertex1.x-vertex2.x)*(vertex1.x-vertex2.x)+ (vertex1.y-vertex2.y)*(vertex1.y-vertex2.y)+ (vertex1.z-vertex2.z)*(vertex1.z-vertex2.z) ); if(distanceSquared==0) distanceSquared = 0.001; var coul = this.repulsion / distanceSquared; return new Point(coul * (vertex1.x-vertex2.x),coul * (vertex1.y-vertex2.y), coul * (vertex1.z-vertex2.z)); } ForceLayout.prototype.animate=function(){ if(this.animating) this.animationID=requestAnimationFrame(this.animate.bind(this)); this.draw(); } ForceLayout.prototype.cancelAnimation=function(){ cancelAnimationFrame(this.animationID); this.animating=false; } ForceLayout.prototype.redraw=function(){ this.animating=true; this.animate(); } $(document).ready(function(){ var g= new Graph(); for(var i=0;i<=100;i++){ var v1=new Vertex(Math.random(), {}) var v2=new Vertex(Math.random(), {}) var e1= new Edge(v1.id,v2.id); g.addEdge(e1); } console.log(g); var l=new ForceLayout({ graph:g }); l.redraw(); });
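For readers comparing the code above against the Wikipedia pseudocode it is based on, here is a minimal, self-contained sketch of a single simulation step. The field names (x, y, vx, vy, mass on nodes; a, b on edges) and the dt timestep are illustrative and are not taken from the question's code; note in particular that the position is advanced by the velocity each step and that the kinetic energy uses the squared magnitude of the velocity.

// One iteration of the force-directed loop, following the Wikipedia pseudocode.
// nodes: [{x, y, vx, vy, mass}], edges: [{a, b}] where a and b are node objects.
function simulationStep(nodes, edges, repulsion, attraction, damping, dt) {
    var totalKineticEnergy = 0;
    nodes.forEach(function (node) {
        var fx = 0, fy = 0;
        // Coulomb repulsion from every other node
        nodes.forEach(function (other) {
            if (other === node) return;
            var dx = node.x - other.x, dy = node.y - other.y;
            var d2 = (dx * dx + dy * dy) || 0.001;
            fx += repulsion * dx / d2;
            fy += repulsion * dy / d2;
        });
        // Hooke attraction along every edge touching this node
        edges.forEach(function (e) {
            var other = (e.a === node) ? e.b : (e.b === node) ? e.a : null;
            if (!other) return;
            fx += attraction * (other.x - node.x);
            fy += attraction * (other.y - node.y);
        });
        // velocity := (velocity + timestep * force) * damping
        node.vx = (node.vx + dt * fx) * damping;
        node.vy = (node.vy + dt * fy) * damping;
        // position := position + timestep * velocity (added to the position, not assigned)
        node.x += dt * node.vx;
        node.y += dt * node.vy;
        // kinetic energy uses the squared magnitude of the velocity: vx^2 + vy^2
        totalKineticEnergy += node.mass * (node.vx * node.vx + node.vy * node.vy);
    });
    return totalKineticEnergy;
}

// A caller would run one step per animation frame until the energy settles, e.g.
// if (simulationStep(nodes, edges, 200, 0.06, 0.9, 0.02) > 0.5) requestAnimationFrame(tick);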

    Read the article

  • Large Object Heap Fragmentation

    - by Paul Ruane
    The C#/.NET application I am working on is suffering from a slow memory leak. I have used CDB with SOS to try to determine what is happening but the data does not seem to make any sense so I was hoping one of you may have experienced this before. The application is running on the 64 bit framework. It is continuously calculating and serialising data to a remote host and is hitting the Large Object Heap (LOH) a fair bit. However, most of the LOH objects I expect to be transient: once the calculation is complete and has been sent to the remote host, the memory should be freed. What I am seeing, however, is a large number of (live) object arrays interleaved with free blocks of memory, e.g., taking a random segment from the LOH: 0:000> !DumpHeap 000000005b5b1000 000000006351da10 Address MT Size ... 000000005d4f92e0 0000064280c7c970 16147872 000000005e45f880 00000000001661d0 1901752 Free 000000005e62fd38 00000642788d8ba8 1056 <-- 000000005e630158 00000000001661d0 5988848 Free 000000005ebe6348 00000642788d8ba8 1056 000000005ebe6768 00000000001661d0 6481336 Free 000000005f214d20 00000642788d8ba8 1056 000000005f215140 00000000001661d0 7346016 Free 000000005f9168a0 00000642788d8ba8 1056 000000005f916cc0 00000000001661d0 7611648 Free 00000000600591c0 00000642788d8ba8 1056 00000000600595e0 00000000001661d0 264808 Free ... Obviously I would expect this to be the case if my application were creating long-lived, large objects during each calculation. (It does do this and I accept there will be a degree of LOH fragmentation but that is not the problem here.) The problem is the very small (1056 byte) object arrays you can see in the above dump which I cannot see in code being created and which are remaining rooted somehow. Also note that CDB is not reporting the type when the heap segment is dumped: I am not sure if this is related or not. If I dump the marked (<--) object, CDB/SOS reports it fine: 0:015> !DumpObj 000000005e62fd38 Name: System.Object[] MethodTable: 00000642788d8ba8 EEClass: 00000642789d7660 Size: 1056(0x420) bytes Array: Rank 1, Number of elements 128, Type CLASS Element Type: System.Object Fields: None The elements of the object array are all strings and the strings are recognisable as from our application code. Also, I am unable to find their GC roots as the !GCRoot command hangs and never comes back (I have even tried leaving it overnight). So, I would very much appreciate it if anyone could shed any light as to why these small (<85k) object arrays are ending up on the LOH: what situations will .NET put a small object array in there? Also, does anyone happen to know of an alternative way of ascertaining the roots of these objects? Thanks in advance. Update 1 Another theory I came up with late yesterday is that these object arrays started out large but have been shrunk leaving the blocks of free memory that are evident in the memory dumps. What makes me suspicious is that the object arrays always appear to be 1056 bytes long (128 elements), 128 * 8 for the references and 32 bytes of overhead. The idea is that perhaps some unsafe code in a library or in the CLR is corrupting the number of elements field in the array header. Bit of a long shot I know... Update 2 Thanks to Brian Rasmussen (see accepted answer) the problem has been identified as fragmentation of the LOH caused by the string intern table! 
I wrote a quick test application to confirm this: static void Main() { const int ITERATIONS = 100000; for (int index = 0; index < ITERATIONS; ++index) { string str = "NonInterned" + index; Console.Out.WriteLine(str); } Console.Out.WriteLine("Continue."); Console.In.ReadLine(); for (int index = 0; index < ITERATIONS; ++index) { string str = string.Intern("Interned" + index); Console.Out.WriteLine(str); } Console.Out.WriteLine("Continue?"); Console.In.ReadLine(); } The application first creates and dereferences unique strings in a loop. This is just to prove that the memory does not leak in this scenario. Obviously it should not and it does not. In the second loop, unique strings are created and interned. This action roots them in the intern table. What I did not realise is how the intern table is represented. It appears it consists of a set of pages -- object arrays of 128 string elements -- that are created in the LOH. This is more evident in CDB/SOS: 0:000> .loadby sos mscorwks 0:000> !EEHeap -gc Number of GC Heaps: 1 generation 0 starts at 0x00f7a9b0 generation 1 starts at 0x00e79c3c generation 2 starts at 0x00b21000 ephemeral segment allocation context: none segment begin allocated size 00b20000 00b21000 010029bc 0x004e19bc(5118396) Large object heap starts at 0x01b21000 segment begin allocated size 01b20000 01b21000 01b8ade0 0x00069de0(433632) Total Size 0x54b79c(5552028) ------------------------------ GC Heap Size 0x54b79c(5552028) Taking a dump of the LOH segment reveals the pattern I saw in the leaking application: 0:000> !DumpHeap 01b21000 01b8ade0 ... 01b8a120 793040bc 528 01b8a330 00175e88 16 Free 01b8a340 793040bc 528 01b8a550 00175e88 16 Free 01b8a560 793040bc 528 01b8a770 00175e88 16 Free 01b8a780 793040bc 528 01b8a990 00175e88 16 Free 01b8a9a0 793040bc 528 01b8abb0 00175e88 16 Free 01b8abc0 793040bc 528 01b8add0 00175e88 16 Free total 1568 objects Statistics: MT Count TotalSize Class Name 00175e88 784 12544 Free 793040bc 784 421088 System.Object[] Total 1568 objects Note that the object array size is 528 (rather than 1056) because my workstation is 32 bit and the application server is 64 bit. The object arrays are still 128 elements long. So the moral to this story is to be very careful interning. If the string you are interning is not known to be a member of a finite set then your application will leak due to fragmentation of the LOH, at least in version 2 of the CLR. In our application's case, there is general code in the deserialisation code path that interns entity identifiers during unmarshalling: I now strongly suspect this is the culprit. However, the developer's intentions were obviously good as they wanted to make sure that if the same entity is deserialised multiple times then only one instance of the identifier string will be maintained in memory.

    Read the article

  • Hosted bug tracking system with mercurial repositories (Summary of options & request for opinions)

    - by Mark Booth
The Question What hosted mercurial repository/bug tracking system or systems have you used? Would you recommend it to others? Are there serious flaws, either in the repository hosting or the bug tracking features, that would make it difficult to recommend it? Do you have any other experiences with it or opinions of it that you would like to share? If you have used other non-mercurial hosted repository/bug tracking systems, how do they compare? (If I understand correctly, the best format for this type of community-wiki style question is one answer per option, if you have experience of several) Background I have been looking into options for setting up a bug/issue tracking database and found some valuable advice in this thread and this. But then I got to thinking that a hosted solution might not only solve the problem of tracking bugs, but might also solve the problem we have accessing our mercurial source code repositories while at customer sites around the world. Since we currently have no way to serve mercurial repositories over ssl, when I am at a customer site I have to connect my laptop via VPN to my work network and access the mercurial repositories over a samba share (even if it is just to sync twice a day). This is excruciatingly slow on high latency networks and can be impossible with some customers' firewalls. Even if we could run a TRAC or Redmine server here (thanks turnkey), I'm not sure it would be much quicker as our internet connection is over-stretched as it is. What I would like is for developers to be able to push/pull to/from a remote repository, servicing engineers to be able to pull from a remote repository and for customers (both internal and external) to be able to submit bug/issue reports. Initial options The two options I found were Assembla and Jira. Looking at Assembla I thought the 'group' price looked reasonable, but after enquiring, found that each workspace could only contain a single repository. Since each of our products might have up to a dozen repositories (mostly for libraries) which need to be managed separately for each product, I could see it getting expensive really quickly. On the plus side, it appears that 'users' are just workspace members, so you can have as many client users (people who can only submit support tickets and track their own tickets) as you like without using up your user allocation. Jira only charges based on the number of users; unfortunately, client users also count towards this if you want them to be able to track their tickets. If you only want clients to be able to submit untracked issues, you can let them submit anonymously, but that doesn't feel very professional to me. More options Looking through the MercurialHosting page that @Paidhi suggested, I've added the options which appear to offer private repositories, along with another that I found with a web search. Prices are as per their website today (29th March 2010). Corrections welcome in the future. Anyway, here is my summary, according to the information given on their websites: Assembla, http://www.assembla.com/, looks to be a reasonable price, but suffers from only allowing one repository per workspace, so three projects with 6 repos each would use up most of the spaces associated with a $99/month professional account (20 spaces). Bug tracking is based on Trac. Mercurial+Trac support was announced in a blog entry in 2007, but they only list SVN and Git on their Features web page. Cost: $24, $49, $99 & $249/month for 40, 40, unlimited, unlimited users and 1, 10, 20, 100 workspaces. 
SSL based push/pull? Website https login. BitBucket, http://bitbucket.org/plans/, is primarily a mercurial hosting site for open source projects, with SSL support, but they have an integrated bug tracker and they are cheap for private repositories. It has it’s own issues tracker, but also integrates with Lighthouse & FogBugz. Cost: $0, $5, $12, $50 & $100/month for 1, 5, 15, 25 & 150 private repositories. SSL based push/pull. No https on website login, but supports OpenID, so you can chose an OpenID provider with https login. Codebase HQ, http://www.codebasehq.com/, supports Hg and is almost as cheap as BitBucket. Cost: £5, £13, £21 & £40/month for 3, 15, 30 & 60 active projects, unlimited repositories, unlimited users (except 10 users at £5/month) and 0.5, 2, 4 & 10GB. SSL based push/pull? Website https login? Firefly, http://www.activestate.com/firefly/, by ActiveState looks interesting, but the website is a little light on details, such as whether you can only have one repository per project or not. Cost: $9, $19, & £39/month for 1, 5 & 30 private projects, with a 0.5, 1.5 & 3 GB storage limit. SSL based push/pull? Website https login. Jira, http://www.atlassian.com/software/jira/, isn’t limited by the number of repositories you can have, but by ‘user’. It could work out quite expensive if we want client users to be able to track their issues, since they would need a full user account to be created for them. Also, while there is a Mercurial extension to support jira, there is no ‘Advanced integration’ for Mercurial from Atlassian Fisheye. Cost: $150, $300, $400, $500, $700/month for 10, 25, 50, 100, 100+ users. SSL based push/pull? Website https login. Kiln & FogBugz On Demand, http://fogcreek.com/Kiln/IntrotoOnDemand.html, integrates Kilns mercurial DVCS features with FogBugz, where the combined package is much cheaper than the component parts. Also, the Fogbugz integration is supposedly excellent. *8’) Cost: £30/developer/month ($5/d/m more than either on their own). SSL based push/pull? SourceRepo, http://sourcerepo.com/, also supports HG and is even cheaper than BitBucket & Codebase. Cost: $4, $7 & $13/month for 1, unlimited & unlimited repositories/trac/redmine instances and 500MB, 1GB & 3GB storage. SSL based push/pull. Website https login. Edit: 29th March 2010 & Bounty I split this question into sections, made the questions themselves more explicit, added other options from the research I have done since my first posting and made this community wiki, since I now understand what CW is for. *8') Also, I've added a bounty to encourage people to offer their opinions. At the end of the bounty period, I will award the bounty to whoever writes the best review (good or bad), irrespective of the number of up/down votes it gets. Given that it's probably more important to avoid bad providers than find the absolute best one, 'bad reviews' could be considered more important than good ones.
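For reference, the push/pull workflow being asked about is Mercurial's ordinary hosted-remote setup. A minimal client-side sketch (illustrative only - the host URL is hypothetical, and each provider above documents its own variant):

# .hg/hgrc inside a local clone - point 'default' at the hosted HTTPS remote
[paths]
default = https://hg.example-host.com/ourcompany/product-lib

[auth]
host.prefix = hg.example-host.com
host.username = developer
# password left out; most hosts also offer token- or key-based auth

With something like that in place, hg pull and hg push travel over port 443 from a customer site, instead of the VPN-plus-samba route described above.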

    Read the article

  • Using HTML 5 SessionState to save rendered Page Content

    - by Rick Strahl
HTML 5 SessionState and LocalStorage are very useful and super easy to use to manage client side state. For building rich client side or SPA style applications it's a vital feature to be able to cache user data as well as HTML content in order to swap pages in and out of the browser's DOM. What might not be so obvious is that you can also use the sessionState and localStorage objects even in classic server rendered HTML applications to provide caching features between pages. These APIs have been around for a long time and are supported by most relatively modern browsers, even all the way back to IE8, so you can use them safely in your Web applications.

SessionState and LocalStorage are easy

The APIs that make up sessionState and localStorage are very simple. Both objects feature the same API interface, which is a simple, string based key value store that has getItem, setItem, removeItem, clear and key methods. The objects are also pseudo array objects and so can be iterated like an array with a length property, and you have array indexers to set and get values with. Basic usage for storing and retrieval looks like this (using sessionStorage, but the syntax is the same for localStorage - just switch the objects):

// set
var lastAccess = new Date().getTime();
if (sessionStorage)
    sessionStorage.setItem("myapp_time", lastAccess.toString());

// retrieve in another page or on a refresh
var time = null;
if (sessionStorage)
    time = sessionStorage.getItem("myapp_time");
if (time)
    time = new Date(time * 1);
else
    time = new Date();

sessionStorage stores data that is browser session specific and that has a lifetime of the active browser session or window. Shut down the browser or tab and the storage goes away. localStorage uses the same API interface, but the data is stored permanently in the browser's storage area until deleted via code or by clearing out browser cookies (not the cache). Space for both sessionStorage and localStorage is limited. The spec is ambiguous about this - supposedly sessionStorage should allow for unlimited size, but it appears that most WebKit browsers support only 2.5mb for either object. This means you have to be careful what you store, especially since other applications might be running on the same domain and also use the storage mechanisms. That said, 2.5mb worth of character data is quite a bit and would go a long way. The easiest way to get a feel for how sessionStorage and localStorage work is to look at a simple example. You can go check out the following example online in Plunker: http://plnkr.co/edit/0ICotzkoPjHaWa70GlRZ?p=preview Plunker is an online HTML/JavaScript editor that lets you write and run JavaScript code, similar to JsFiddle but a bit cleaner to work in IMHO (thanks to John Papa for turning me on to it). The sample has two text boxes with counts that update session/local storage every time you click the related button. The counts are 'cached' in Session and Local storage. The point of these examples is that both counters survive full page reloads, and the LocalStorage counter survives a complete browser shutdown and restart. Go ahead and try it out by clicking the Reload button after updating both counters and then shutting down the browser completely and going back to the same URL (with the same browser). What you should see is that reloads leave both counters intact at the counted values, while a browser restart will leave only the local storage counter intact.
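Since every value has to round-trip through strings, it can be handy to fold the availability check and the JSON conversion into one small helper. A minimal sketch along these lines (my illustration, not code from this article; the makeStore name is made up):

function makeStore(storage) {
    // Wraps a Web Storage object (sessionStorage or localStorage) so any
    // JSON-serializable value can be stored, not just strings.
    return {
        set: function (key, value) {
            if (!storage) return false;
            try {
                storage.setItem(key, JSON.stringify(value));
                return true;
            } catch (e) {
                return false; // quota exceeded or storage disabled
            }
        },
        get: function (key, fallback) {
            if (!storage) return fallback;
            var raw = storage.getItem(key);
            return raw === null ? fallback : JSON.parse(raw);
        }
    };
}

var session = makeStore(window.sessionStorage);
session.set("myapp_time", new Date().getTime());
var time = new Date(session.get("myapp_time", Date.now()));

The try/catch matters because setItem can throw (for example when the quota is exceeded, or in some private browsing modes), which a bare if (sessionStorage) check won't catch.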
The code to deal with the SessionStorage (and LocalStorage, not shown here) in the example is isolated into a couple of wrapper methods to simplify the code:

function getSessionCount() {
    var count = 0;
    if (sessionStorage) {
        count = sessionStorage.getItem("ss_count");
        count = !count ? 0 : count * 1;
    }
    $("#txtSession").val(count);
    return count;
}

function setSessionCount(count) {
    if (sessionStorage)
        sessionStorage.setItem("ss_count", count.toString());
}

These two functions essentially load and store a session counter value. The two key methods used here are:

sessionStorage.getItem(key);
sessionStorage.setItem(key, stringVal);

Note that the value given to setItem and returned by getItem has to be a string. If you pass another type you get an error. Don't let that limit you though - you can easily enough store JSON data in a variable, so it's quite possible to pass complex objects and store them into a single sessionStorage value:

var user = { name: "Rick", id: "ricks", level: 8 };
sessionStorage.setItem("app_user", JSON.stringify(user));

to retrieve it:

var user = sessionStorage.getItem("app_user");
if (user)
    user = JSON.parse(user);

Simple! If you're using the Chrome Developer Tools (F12) you can also check out the session and local storage state on the Resources tab. You can also use this tool to refresh or remove entries from storage. What we just looked at is a purely client side implementation where a couple of counters are stored. For rich client centric AJAX applications sessionStorage and localStorage provide a very nice and simple API to store application state while the application is running. But you can also use these storage mechanisms to manage server centric HTML applications when you combine server rendering with some JavaScript to perform client side data caching. You can either store some state information and data on the client (ie. store a JSON object and carry it forth between server rendered HTML requests) or you can use it for good old HTTP based caching where some rendered HTML is saved and then restored later. Let's look at the latter with a real life example.

Why do I need Client-side Page Caching for Server Rendered HTML?

I don't know about you, but in a lot of my existing server driven applications I have lists that display a fair amount of data. Typically these lists contain links to then drill down into more specific data, either for viewing or editing. You can then click on a link and go off to a detail page that provides more concise content. So far so good. But now you're done with the detail page and need to get back to the list, so you click on a 'bread crumbs trail' or an application level 'back to list' button and… …you end up back at the top of the list - the scroll position, the current selection, in some cases even filter conditions - all gone with the wind. You've left behind the state of the list and are starting from scratch in your browsing of the list from the top. Not cool! Sound familiar? This is a pretty common scenario with server rendered HTML content where it's so common to display lists to drill into, only to lose state in the process of returning to the original list. Look at just about any traditional forums application, or even StackOverFlow, to see what I mean here. Scroll down a bit to look at a post or entry, drill in, then use the bread crumbs or tab to go back… In some cases returning to the top of a list is not a big deal.
On StackOverFlow that sort of works because content is turning around so quickly you probably want to actually look at the top posts. Not always though - if you're browsing through a list of search topics you're interested in and drill in, there's no way back to that position. Essentially any time you're actively browsing the items in the list, that's when state becomes important, and if it's not handled the user experience can be really disrupting.

Content Caching

If you're building client centric SPA style applications this is a fairly easy problem to solve - you tend to render the list once and then update the page content to overlay the detail content, only hiding the list temporarily until it's used again later. It's relatively easy to accomplish this simply by hiding content on the page and later making it visible again. But if you use server rendered content, hanging on to all the detail like filters, selections and scroll position is not quite as easy. Or is it??? This is where sessionStorage comes in handy. What if we just save the rendered content of a previous page, and then restore it when we return to this page based on a special flag that tells us to use the cached version? Let's see how we can do this.

A real World Use Case

Recently my local ISP asked me to help out with updating an ancient classifieds application. They had a very busy, local classifieds app that was originally an ASP classic application. The old app was - wait for it - frames based, and even though I lobbied against it, the decision was made to keep the frames based layout to allow rapid browsing of the hundreds of posts that are made on a daily basis. The primary reason they wanted this was precisely for the ability to quickly browse content item by item. While I personally hate working with frames, I have to admit that the UI actually works well with the frames layout as long as you're running on a large desktop screen. You can check out the frames based desktop site here: http://classifieds.gorge.net/ However, when I rebuilt the app I also added a secondary view that doesn't use frames. The main reason for this of course was mobile displays, which work horribly with frames. So there's a somewhat mobile friendly version of the interface, which ditches the frames and uses some responsive design tweaking for mobile capable operation: http://classifeds.gorge.net/mobile (or browse the base url with your browser width under 800px). As you can see from the mobile, non-frames view, the list of classifieds posts now is a plain list and there's a separate page for drilling down into each item. And of course… originally we ran into that usability issue I mentioned earlier, where the browse, view detail, go back to the list cycle resulted in lost list state. Originally in mobile mode you scrolled through the list, found an item to look at and drilled in to display the item detail. Then you clicked back to the list and BAM - you've lost your place. Because there are so many items added on a daily basis the full list is never fully loaded; instead there's a "Load Additional Listings" entry at the bottom. Not only did we originally lose our place when coming back to the list, but any 'additionally loaded' items were no longer there, because the list was now rendering as if it was the first page hit. The additional listings, any filters, and the selection of an item were all lost. Major Suckage!
Using Client SessionStorage to cache Server Rendered Content

To work around this problem I decided to cache the rendered page content from the list in SessionStorage. Any time the list renders or is updated with Load Additional Listings, the page HTML is cached and stored in Session Storage. Any back links from the detail page or the login or write entry forms then point back to the list page with a back=true query string parameter. If the server side sees this parameter it doesn't render the part of the page that is cached. Instead the client side code retrieves the data from the sessionState cache and simply inserts it into the page. It sounds pretty simple, and overall the process is really easy, but there are a few gotchas that I'll discuss in a minute. But first let's look at the implementation. Let's start with the server side here, because that'll give a quick idea of the doc structure. As I mentioned, the server renders data from an ASP.NET MVC view. Returning to the list page from the display page (or a host of other pages) looks like this: https://classifieds.gorge.net/list?back=True The query string value is a flag that indicates whether the server should render the HTML. Here's what the top level MVC Razor view for the list page looks like:

@model MessageListViewModel
@{
    ViewBag.Title = "Classified Listing";
    bool isBack = !string.IsNullOrEmpty(Request.QueryString["back"]);
}
<form method="post" action="@Url.Action("list")">
    <div id="SizingContainer">
        @if (!isBack)
        {
            @Html.Partial("List_CommandBar_Partial", Model)
            <div id="PostItemContainer" class="scrollbox" xstyle="-webkit-overflow-scrolling: touch;">
                @Html.Partial("List_Items_Partial", Model)
                @if (Model.RequireLoadEntry)
                {
                    <div class="postitem loadpostitems" style="padding: 15px;">
                        <div id="LoadProgress" class="smallprogressright"></div>
                        <div class="control-progress">
                            Load additional listings...
                        </div>
                    </div>
                }
            </div>
        }
    </div>
</form>

As you can see, the query string triggers a conditional block that, if set, is simply not rendered. The content inside of #SizingContainer basically holds the entire page's HTML sans the headers and scripts, but including the filter options and menu at the top. In this case this makes good sense - in other situations, the fact that the menu or filter options might be dynamically updated might make you cache only the list rather than essentially the entire page. In this particular instance all of the content works and produces the proper result, as both the list and any filter conditions in the form inputs are restored. Ok, let's move on to the client. On the client there are two page level functions that deal with saving and restoring state. Like the counter example I showed earlier, I like to wrap the logic to save and restore values from sessionState into separate functions because they are almost always used in several places.

page.saveData = function(id) {
    if (!sessionStorage)
        return;
    var data = {
        id: id,
        scroll: $("#PostItemContainer").scrollTop(),
        html: $("#SizingContainer").html()
    };
    sessionStorage.setItem("list_html", JSON.stringify(data));
};
page.restoreData = function() {
    if (!sessionStorage)
        return;
    var data = sessionStorage.getItem("list_html");
    if (!data)
        return null;
    return JSON.parse(data);
};

The data that is saved is an object which contains an id, which is the selected element when the user clicks, and a scroll position. These two values are used to reset the scroll position when the data is used from the cache.
Finally the html from the #SizingContainer element is stored, which makes for the bulk of the document's HTML. In this application the HTML captured could be a substantial bit of data. If you recall, I mentioned that the server side code renders a small chunk of data initially and then gets more data if the user reads through the first 50 or so items. The rest of the items retrieved can be rather sizable. Other than the JSON deserialization that's OK. Since I'm using SessionStorage, the storage space has no immediate limits. Next is the core logic to handle saving and restoring the page state. At first thought this would seem pretty simple, and in some cases it might be, but as the following code demonstrates there are a few gotchas to watch out for. Here's the relevant code I use to save and restore:

$( function() {
    …
    var isBack = getUrlEncodedKey("back", location.href);
    if (isBack) {
        // remove the back key from URL
        setUrlEncodedKey("back", "", location.href);
        var data = page.restoreData(); // restore from sessionState
        if (!data) {
            // no data - force redisplay of the server side default list
            window.location = "list";
            return;
        }
        $("#SizingContainer").html(data.html);
        var el = $(".postitem[data-id=" + data.id + "]");
        $(".postitem").removeClass("highlight");
        el.addClass("highlight");
        $("#PostItemContainer").scrollTop(data.scroll);
        setTimeout(function() {
            el.removeClass("highlight");
        }, 2500);
    }
    else if (window.noFrames)
        page.saveData(null); // save when page loads

    $("#SizingContainer").on("click", ".postitem", function() {
        var id = $(this).attr("data-id");
        if (!id)
            return true;
        if (window.noFrames)
            page.saveData(id);
        var contentFrame = window.parent.frames["Content"];
        if (contentFrame)
            contentFrame.location.href = "show/" + id;
        else
            window.location.href = "show/" + id;
        return false;
    });
    …

The code starts out by checking for the back query string flag, which triggers restoring from the client cache. If cached, the cached data structure is read from sessionStorage. It's important here to check whether data was returned. If the user had back=true on the querystring but there is no cached data, he likely bookmarked this page or otherwise shut down the browser and came back to this URL. In that case the server didn't render any detail and we have no cached data, so all we can do is redirect to the original default list view using window.location. If we continued, the page would render no data - so make sure to always check the cache retrieval result. Always! If there is data, it's loaded and the data.html is restored by simply injecting the HTML back into the document's #SizingContainer element:

$("#SizingContainer").html(data.html);

It's that simple, and it's quite quick even with a fully loaded list of additional items, even on a phone. The actual HTML data is stored to the cache on every page load initially, and then again when the user clicks on an element to navigate to a particular listing. The former ensures that the client cache always has something in it, and the latter updates with additional information for the selected element. For the click handling I use a data-id attribute on the list item (.postitem) in the list and retrieve the id from that. That id is then used to navigate to the actual entry as well as being stored in the saved cached data. The id is used to reset the selection by searching for the data-id value in the restored elements.
The overall save/restore process is pretty straightforward and it doesn't require a bunch of code, yet it yields a huge improvement in the usability of the site on mobile devices (or for anybody who uses the non-frames view).

Some things to watch out for

As easy as it conceptually seems to simply store and retrieve cached content, you have to be quite aware of what type of content you are caching. The code above is all that's specific to the cache/restore cycle and it works, but it took a few tweaks to the rest of the script code and server code to make it all work. There were a few gotchas that weren't immediately obvious. Here are a few things to pay attention to:

- Event handling logic
- Timing of manipulating DOM elements
- Inline script code
- Bookmarking the cache URL when no cache exists

Do you have inline script code in your HTML? That script code isn't going to run if you restore from cache and simply assign it, or it may not run at the time you'd expect it to in the normal DOM rendering cycle.

JavaScript Event Hookups

The biggest issue I ran into with this approach, almost immediately, is that originally I had various static event handlers hooked up to various UI elements that are now cached. If you have an event handler like:

$("#btnSearch").click( function() {…});

that works fine when the page loads with server rendered HTML, but that code breaks when you now load the HTML from cache. Why? Because the elements you're trying to hook those events to may not actually be there - yet. Luckily there's an easy workaround for this by using deferred events. With jQuery you can use the .on() event handler instead:

$("#SelectionContainer").on("click", "#btnSearch", function() {…});

which monitors a parent element for the events and checks for the inner selector elements to handle events on. This effectively defers to runtime event binding, so as more items are added to the document, bindings still work. For any cached content, use deferred events.

Timing of manipulating DOM Elements

Along the same lines, make sure that your DOM manipulation code follows the code that loads the cached content into the page, so that you don't manipulate DOM elements that don't exist just yet. Ideally you'll want to check for the condition to restore cached content towards the top of your script code, but that can be tricky if you have components or other logic that might not all run in a straight line.

Inline Script Code

Here's another small problem I ran into: I use a DateTime Picker widget I built a while back that relies on the jQuery date time picker. I also created a helper function that allows keyboard date navigation into it using JavaScript logic. Because of MVC's limited 'object model', the only way to embed widget content into the page is through inline script. This code broke when I inserted the cached HTML into the page, because the script code was not available when the component actually got injected into the page. As with the last bullet - it's a matter of timing. There's no good workaround for this - in my case I pulled out the jQuery date picker and relied on native <input type="date" /> logic instead - a better choice these days anyway, especially since this view is meant primarily to serve mobile devices, which actually support date input through the browser (unlike desktop browsers, of which only WebKit seems to support it).

Bookmarking Cached Urls

When you cache HTML content you have to make a decision whether you cache on the client and also not render that same content on the server.
In the Classifieds app I didn't render server side content, so if the user comes to the page with back=True and there is no cached content I have to have a Plan B. Typically this happens when somebody ends up bookmarking the back URL. The easiest and safest solution for this scenario is to ALWAYS check the cache result to make sure it exists and, if not, have a safe URL to go back to - in this case the plain uncached list URL, which amounts to effectively redirecting. This seems really obvious in hindsight, but it's easy to overlook and not see a problem until much later, when it's not obvious at all why the page is not rendering anything.

Don't use <body> to replace Content

Since we're practically replacing all the HTML in the page, it may seem tempting to simply replace the HTML content of the <body> tag. Don't. The body tag usually contains key things that should stay in the page and be there when it loads - specifically script tags and elements, and possibly other embedded content. It's best to create a top level DOM element specifically as a placeholder container for your cached content, wrapped just around the actual content you want to replace. In the app above, #SizingContainer is that container.

Other Approaches

The approach I've used for this application is kind of specific to the existing server rendered application we're running, so it's just one approach you can take with caching. However, for server rendered content caching this is a pattern I've used in a few apps to retrofit some client caching into list displays. In this application I took the path of least resistance to the existing server rendering logic. Here are a few other ways that come to mind:

- Using Partial HTML Rendering via AJAX: Instead of rendering the page initially on the server, the page would load empty and the client would render the UI by retrieving the respective HTML from a Partial View and embedding it into the page. This effectively makes the initial rendering and the cached rendering logic identical and removes the server having to decide whether this request needs to be rendered or not (ie. not checking for a back=true switch). All the logic related to caching is handled on the client in this case.

- Using JSON Data and Client Rendering: The hardcore client option is to do the whole UI SPA style, pull JSON data from the server, and render using templates or client side databinding with knockout/angular et al. As with the Partial Rendering approach, the advantage is that there's no difference in the logic between pulling the data from cache or rendering from scratch, other than the initial check for the cache request. Of course, if the app is a full on SPA app, then caching may not even be required - the list could just stay in memory and be hidden and reactivated.

I'm sure there are a number of other ways this can be handled as well, especially using AJAX. AJAX rendering might simplify the logic, but it also complicates search engine optimization since there's no content loaded initially. So there are always tradeoffs, and it's important to look at all angles before deciding on any sort of caching solution in general.

State of the Session

SessionState and LocalStorage are easy to use in client code and can be integrated even with server centric applications to provide nice caching features for content and data.
In this post I've shown a very specific scenario of storing HTML content for the purpose of remembering list view data and state and making the browsing experience for lists a bit more friendly, especially if there's dynamically loaded content involved. If you haven't played with sessionStorage or localStorage, I encourage you to give it a try. There's a lot of cool stuff that you can do with this beyond the specific scenario I've covered here…

Resources

- Overview of localStorage (also applies to sessionStorage)
- Web Storage Compatibility
- Modernizr Test Suite

© Rick Strahl, West Wind Technologies, 2005-2013
Posted in JavaScript  HTML5  ASP.NET  MVC

    Read the article

  • Implementing an async "read all currently available data from stream" operation

    - by Jon
I recently provided an answer to this question: C# - Realtime console output redirection. As often happens, explaining stuff (here "stuff" was how I tackled a similar problem) leads you to greater understanding and/or, as is the case here, "oops" moments. I realized that my solution, as implemented, has a bug. The bug has little practical importance, but it has an extremely large importance to me as a developer: I can't rest easy knowing that my code has the potential to blow up. Squashing the bug is the purpose of this question. I apologize for the long intro, so let's get dirty. I wanted to build a class that allows me to receive input from a console's standard output Stream. Console output streams are of type FileStream; the implementation can cast to that, if needed. There is also an associated StreamReader already present to leverage. There is only one thing I need to implement in this class to achieve my desired functionality: an async "read all the data available this moment" operation. Reading to the end of the stream is not viable because the stream will not end unless the process closes the console output handle, and it will not do that because it is interactive and expecting input before continuing. I will be using that hypothetical async operation to implement event-based notification, which will be more convenient for my callers. The public interface of the class is this:

public class ConsoleAutomator {
    public event EventHandler<ConsoleOutputReadEventArgs> StandardOutputRead;

    public void StartSendingEvents();
    public void StopSendingEvents();
}

StartSendingEvents and StopSendingEvents do what they advertise; for the purposes of this discussion, we can assume that events are always being sent, without loss of generality. The class uses these two fields internally:

protected readonly StringBuilder inputAccumulator = new StringBuilder();
protected readonly byte[] buffer = new byte[256];

The functionality of the class is implemented in the methods below. To get the ball rolling:

public void StartSendingEvents()
{
    this.stopAutomation = false;
    this.BeginReadAsync();
}

To read data out of the Stream without blocking, and also without requiring a carriage return char, BeginRead is called:

protected void BeginReadAsync()
{
    if (!this.stopAutomation) {
        this.StandardOutput.BaseStream.BeginRead(
            this.buffer, 0, this.buffer.Length, this.ReadHappened, null);
    }
}

The challenging part: BeginRead requires using a buffer. This means that when reading from the stream, it is possible that the bytes available to read (the "incoming chunk") are larger than the buffer. Remember that the goal here is to read all of the chunk and call event subscribers exactly once for each chunk. To this end, if the buffer is full after EndRead, we don't send its contents to subscribers immediately but instead append them to a StringBuilder. The contents of the StringBuilder are only sent back whenever there is no more to read from the stream.
private void ReadHappened(IAsyncResult asyncResult)
{
    var bytesRead = this.StandardOutput.BaseStream.EndRead(asyncResult);
    if (bytesRead == 0) {
        this.OnAutomationStopped();
        return;
    }

    var input = this.StandardOutput.CurrentEncoding.GetString(
        this.buffer, 0, bytesRead);
    this.inputAccumulator.Append(input);

    if (bytesRead < this.buffer.Length) {
        this.OnInputRead(); // only send back if we're sure we got it all
    }

    this.BeginReadAsync(); // continue "looping" with BeginRead
}

After any read which is not enough to fill the buffer (in which case we know that there was no more data to be read during the last read operation), all accumulated data is sent to the subscribers:

private void OnInputRead()
{
    var handler = this.StandardOutputRead;
    if (handler == null) {
        return;
    }

    handler(this, new ConsoleOutputReadEventArgs(this.inputAccumulator.ToString()));
    this.inputAccumulator.Clear();
}

(I know that as long as there are no subscribers the data gets accumulated forever. This is a deliberate decision.)

The good

This scheme works almost perfectly:

- Async functionality without spawning any threads
- Very convenient to the calling code (just subscribe to an event)
- Never more than one event for each time data is available to be read
- Is almost agnostic to the buffer size

The bad

That last almost is a very big one. Consider what happens when there is an incoming chunk with length exactly equal to the size of the buffer. The chunk will be read and buffered, but the event will not be triggered. This will be followed up by a BeginRead that expects to find more data belonging to the current chunk in order to send it back all in one piece, but... there will be no more data in the stream. In fact, as long as data is put into the stream in chunks with length exactly equal to the buffer size, the data will be buffered and the event will never be triggered. This scenario may be highly unlikely to occur in practice, especially since we can pick any number for the buffer size, but the problem is there.

Solution?

Unfortunately, after checking the available methods on FileStream and StreamReader, I can't find anything which lets me peek into the stream while also allowing async methods to be used on it. One "solution" would be to have a thread wait on a ManualResetEvent after the "buffer filled" condition is detected. If the event is not signaled (by the async callback) in a small amount of time, then more data from the stream will not be forthcoming and the data accumulated so far should be sent to subscribers. However, this introduces the need for another thread, requires thread synchronization, and is plain inelegant. Specifying a timeout for BeginRead would also suffice (call back into my code every now and then so I can check if there's data to be sent back; most of the time there will not be anything to do, so I expect the performance hit to be negligible). But it looks like timeouts are not supported in FileStream. Since I imagine that async calls with timeouts are an option in bare Win32, another approach might be to PInvoke the hell out of the problem. But this is also undesirable, as it will introduce complexity and simply be a pain to code. Is there an elegant way to get around the problem? Thanks for being patient enough to read all of this.

Update: I definitely did not communicate the scenario well in my initial writeup. I have since revised the writeup quite a bit, but to be extra sure: the question is about how to implement an async "read all the data available this moment" operation.
My apologies to the people who took the time to read and answer without me making my intent clear enough.

    Read the article

  • Something is making my page perform an Ajax call multiple times... [read: I've never been more frustrated]

    - by Jack Webb-Heller
NOTE: This is a long question. I've explained all the 'basics' at the top and then there's some further (optional) information if you need it. Hi folks. Basically last night this started happening at about 9PM whilst I was trying to restructure my code to make it a bit nicer for the designer to add a few bits to. I tried to fix it until 2AM, at which point I gave up. Came back to it this morning, still baffled. I'll be honest with you, I'm a pretty bad Javascript developer. Since starting this project Javascript has been completely new to me and I've just learned as I went along. So please forgive me if my code structure is really bad (perhaps give a couple of pointers on how to improve it?). So, to the problem. To reproduce it, visit http://furnace.howcode.com (it's far from complete):

- In the second column you'll see three tabs. The 'Newest' tab is selected by default.
- Scroll to the bottom, and 3 further results should be dynamically fetched via Ajax.
- Now click on the 'Top Rated' tab. You'll see all the results, but ordered by rating.
- Scroll to the bottom of 'Top Rated'. You'll see SIX results returned. This is where it goes wrong. Only a further three should be returned (there are 18 entries in total).

If you're observant you'll notice two 'blocks' of 3 returned. The first 'block' is the second page of results from the 'Newest' tab. The second block is what I actually want returned. Did that make any sense? Never mind! So basically I checked this out in Firebug. What happens is, from a 'clean' page (first load, nothing done) it makes ONE POST request to http://furnace.howcode.com/code/loadmore. But every time you load a new one of the tabs, it makes an ADDITIONAL POST request each time, where there should normally only be ONE. So, can you help me? I'd really appreciate it! At this point you could start independent investigation or read on for a little further (optional) information. Thanks! Jack

Further Info (may be irrelevant but here for reference): It's almost like there's some Javascript code or something being left behind that duplicates it each time. I thought it might be this code that I use to detect when the browser is scrolled to the bottom:

var col = $('#col2');
col.scroll(function(){
    if (col.outerHeight() == (col.get(0).scrollHeight - col.scrollTop()))
        loadMore(1);
});

So what I thought was that this code was left behind, and so every time you scroll #col2 (which contains different data for each tab) it detects that and fires it for #newest as well. So I made each tab click give #col2 a dynamic class - either .newestcol, .featuredcol, or .topratedcol. And then I changed the var col = $('.newestcol'); dynamically so it would only detect it individually for each tab (makin' any sense?!). But hey, that didn't do anything.
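One classic culprit for this exact symptom - offered here as a hedged suggestion, not something stated in the question - is that a fresh scroll handler gets bound on every tab click while the old ones are never removed, so each scroll-to-bottom fires one loadMore() per surviving handler. A namespaced rebinding sketch (jQuery 1.7+ syntax; on the 1.4-era API the equivalents are .unbind()/.bind()):

// Remove any previously attached handler in the 'loadmore' namespace
// before binding a new one, so tab switches can't stack duplicates.
var col = $('#col2');
col.off('scroll.loadmore').on('scroll.loadmore', function () {
    if (col.outerHeight() == (col.get(0).scrollHeight - col.scrollTop()))
        loadMore(1);
});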
Another useful tidbit: here's the PHP for http://furnace.howcode.com/code/loadmore:

$kind = $this->input->post('kind');
if ($kind == 1){ // kind is 1 - newest
    $start = $this->input->post('currentpage');
    $data['query'] = "SELECT code.id AS codeid, code.title AS codetitle, code.summary AS codesummary, code.author AS codeauthor, code.rating AS rating, code.date, code_tags.*, tags.*, users.firstname AS authorname, users.id AS authorid, GROUP_CONCAT(tags.tag SEPARATOR ', ') AS taggroup FROM code, code_tags, tags, users WHERE users.id = code.author AND code_tags.code_id = code.id AND tags.id = code_tags.tag_id GROUP BY code_id ORDER BY date DESC LIMIT $start, 15 ";
    $this->load->view('code/ajaxlist',$data);
} elseif ($kind == 2) { // kind is 2 - featured

So my jQuery code sends a variable 'kind'. If it's 1, it runs the query for Newest, etc. The PHP code for furnace.howcode.com/code/ajaxlist is:

<?php
// Our query base
// SELECT * FROM code ORDER BY date DESC
$query = $this->db->query($query);
foreach($query->result() as $row) { ?>
<script type="text/javascript">
$('#title-<?php echo $row->codeid;?>').click(function() {
    var form_data = { id: <?php echo $row->codeid; ?> };
    $('#col3').fadeOut('slow', function() {
        $.ajax({
            url: "<?php echo site_url('code/viewajax');?>",
            type: 'POST',
            data: form_data,
            success: function(msg) {
                $('#col3').html(msg);
                $('#col3').fadeIn('fast');
            }
        });
    });
});
</script>
<div class="result">
    <div class="resulttext">
        <div id="title-<?php echo $row->codeid; ?>" class="title">
            <?php echo anchor('#',$row->codetitle); ?>
        </div>
        <div class="summary">
            <?php echo $row->codesummary; ?>
        </div>
        <!-- Now insert the 5-star rating system -->
        <?php include($_SERVER['DOCUMENT_ROOT']."/fivestars/5star.php");?>
        <div class="bottom">
            <div class="author">
                Submitted by <?php echo anchor('auth/profile/'.$row->authorid,''.$row->authorname);?>
            </div>
            <?php
            // Now we need to take the GROUP_CONCATted tags and split them
            // using the magic of PHP into separate tags
            $tagarray = explode(", ", $row->taggroup);
            foreach ($tagarray as $tag) { ?>
            <div class="tagbutton" href="#">
                <span><?php echo $tag; ?></span>
            </div>
            <?php } ?>
        </div>
    </div>
</div>
<?php } echo "&nbsp;";?>
<script type="text/javascript">
var newpage = <?php echo $this->input->post('currentpage') + 15;?>;
</script>

So that's everything in PHP. The rest you should be able to view with Firebug or by viewing the source code. I've put all the tab/clicking/Ajax-loading bits in the tags at the very bottom. There's a comment before it all kicks off. Thanks so much for your help!

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services from Node.js is possible through the Windows Azure Node.js SDK, which is a module available in NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

Consume Windows Azure Storage

Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles, with some more features. Hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosting on WAWS, but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used for a WAWS node application. We can use the solution that I created in my last post. Alternatively we can create a new Windows Azure project in Visual Studio with a worker role, add the "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, known as a module named "azure", which can be installed through NPM. Once downloaded and installed, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here. The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them, but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here. Back in Visual Studio, open the "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Now let’s create a new function, copy the records from WASD to table service. 1. Delete the table named “resource”. 2. Create a new table named “resource”. These 2 steps ensures that we have an empty table. 3. Load all records from the “resource” table in WASD. 4. For each records loaded from WASD, insert them into the table one by one. 5. Prompt to user when finished. In order to use table service we need the storage account and key, which can be found from the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variants in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. Also I created another variant stored the table name. In order to work with table service I need to create the storage client for table service. 
This is very similar to the Windows Azure SDK for .NET. In the code below I created a new variable named "client" and used "createTableService", specifying my storage account name and key.

1: var azure = require("azure");
2: var storageAccountName = "synctile";
3: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
4: var tableName = "resource";
5: var client = azure.createTableService(storageAccountName, storageAccountKey);

Now let's create a new handler for the URL "/was/init" so that we can trigger it through the browser. In this function we will first load all records from WASD.

1: app.get("/was/init", function (req, res) {
2:     // load all records from windows azure sql database
3:     sql.open(connectionString, function (err, conn) {
4:         if (err) {
5:             console.log(err);
6:             res.send(500, "Cannot open connection.");
7:         }
8:         else {
9:             conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
10:                 if (err) {
11:                     console.log(err);
12:                     res.send(500, "Cannot retrieve records.");
13:                 }
14:                 else {
15:                     if (results.rows.length > 0) {
16:                         // begin to transform the records into table service
17:                     }
18:                 }
19:             });
20:         }
21:     });
22: });

When we have successfully loaded all the records, we can start to transform them into the table service. First I need to recreate the table in the table service. This can be done by deleting and then creating the table through the table client I had just created.

1: app.get("/was/init", function (req, res) {
2:     // load all records from windows azure sql database
3:     sql.open(connectionString, function (err, conn) {
4:         if (err) {
5:             console.log(err);
6:             res.send(500, "Cannot open connection.");
7:         }
8:         else {
9:             conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
10:                 if (err) {
11:                     console.log(err);
12:                     res.send(500, "Cannot retrieve records.");
13:                 }
14:                 else {
15:                     if (results.rows.length > 0) {
16:                         // begin to transform the records into table service
17:                         // recreate the table named 'resource'
18:                         client.deleteTable(tableName, function (error) {
19:                             client.createTableIfNotExists(tableName, function (error) {
20:                                 if (error) {
21:                                     error["target"] = "createTableIfNotExists";
22:                                     res.send(500, error);
23:                                 }
24:                                 else {
25:                                     // transform the records
26:                                 }
27:                             });
28:                         });
29:                     }
30:                 }
31:             });
32:         }
33:     });
34: });

As you can see, the azure SDK provides its methods in the callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I delete a table I invoke the "deleteTable" method, providing the name of the table and a callback function which will be invoked when the table has been deleted or the operation has failed. Under the hood, the azure module performs the table deletion asynchronously in the POSIX async thread pool, and once it's done the callback function is invoked. This is the reason we need to nest the table creation code inside the deletion callback. If we placed the table creation code after the deletion code, they would be invoked in parallel. Next, for each record in WASD I create an entity and insert it into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post.
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
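To make the ordering issue concrete, here is a stripped-down illustration (my sketch, not code from the post) of why the final res.send fires before the inserts complete - each insertEntity call only schedules an asynchronous operation and returns immediately:

// Each insertEntity only *schedules* async work; the loop and the code
// after it run to completion before any of the callbacks ever fire.
["a", "b", "c"].forEach(function (key) {
    client.insertEntity(tableName,
        { PartitionKey: "p", RowKey: key, Value: key },
        function (error) {
            console.log("inserted " + key);  // runs later, asynchronously
        });
});
console.log("all done");  // runs first - just like res.send(200, ...) above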
The correct logic should be, when all entities had been copied to table service with no error, then I will send response to the browser, otherwise I should send error message to the browser. To do so I need to import another module named “async”, which helps us to coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its “forEach” method for the asynchronous code of inserting table entities. The first argument of “forEach” is the array that will be performed. The second argument is the operation for each items in the array. And the third argument will be invoked then all items had been performed or any errors occurred. Here we can send our response to browser. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: async.forEach(results.rows, 26: // transform the records 27: function (row, callback) { 28: var entity = { 29: "PartitionKey": row[1], 30: "RowKey": row[0], 31: "Value": row[2] 32: }; 33: client.insertEntity(tableName, entity, function (error) { 34: if (error) { 35: callback(error); 36: } 37: else { 38: console.log("entity inserted."); 39: callback(null); 40: } 41: }); 42: }, 43: // send reponse 44: function (error) { 45: if (error) { 46: error["target"] = "insertEntity"; 47: res.send(500, error); 48: } 49: else { 50: console.log("all done"); 51: res.send(200, "All done!"); 52: } 53: } 54: ); 55: } 56: }); 57: }); 58: } 59: } 60: }); 61: } 62: }); 63: }); Run it locally and now we can find the response was sent after all entities had been inserted. Query entities against table service is simple as well. Just use the “queryEntity” method from the table service client and providing the partition key and row key. We can also provide a complex query criteria as well, for example the code here. In the code below I queried an entity by the partition key and row key, and return the proper localization value in response. 1: app.get("/was/:key/:culture", function (req, res) { 2: var key = req.params.key; 3: var culture = req.params.culture; 4: client.queryEntity(tableName, culture, key, function (error, entity) { 5: if (error) { 6: res.send(500, error); 7: } 8: else { 9: res.json(entity); 10: } 11: }); 12: }); And then tested it on local emulator. Finally if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume blob and queue service, as well as the service bus please refer to the MSDN page.   Consume Service Runtime As I mentioned above, before we published our application to the cloud we need to change the connection string and account information in our code. 
But if you had played with WACS you should have known that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. Which means we can have these values defined in CSCFG and CSDEF files and then the runtime should be able to retrieve the proper values. For example we can add some role settings though the property window of the role, specify the connection string and storage account for cloud and local. And the can also use the endpoint which defined in role environment to our Node.js application. In Node.js SDK we can get an object from “azure.RoleEnvironment”, which provides the functionalities to retrieve the configuration settings and endpoints, etc.. In the code below I defined the connection string variants and then use the SDK to retrieve and initialize the table client. 1: var connectionString = ""; 2: var storageAccountName = ""; 3: var storageAccountKey = ""; 4: var tableName = ""; 5: var client; 6:  7: azure.RoleEnvironment.getConfigurationSettings(function (error, settings) { 8: if (error) { 9: console.log("ERROR: getConfigurationSettings"); 10: console.log(JSON.stringify(error)); 11: } 12: else { 13: console.log(JSON.stringify(settings)); 14: connectionString = settings["SqlConnectionString"]; 15: storageAccountName = settings["StorageAccountName"]; 16: storageAccountKey = settings["StorageAccountKey"]; 17: tableName = settings["TableName"]; 18:  19: console.log("connectionString = %s", connectionString); 20: console.log("storageAccountName = %s", storageAccountName); 21: console.log("storageAccountKey = %s", storageAccountKey); 22: console.log("tableName = %s", tableName); 23:  24: client = azure.createTableService(storageAccountName, storageAccountKey); 25: } 26: }); In this way we don’t need to amend the code for the configurations between local and cloud environment since the service runtime will take care of it. At the end of the code we will listen the application on the port retrieved from SDK as well. 1: azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) { 2: if (error) { 3: console.log("ERROR: getCurrentRoleInstance"); 4: console.log(JSON.stringify(error)); 5: } 6: else { 7: console.log(JSON.stringify(instance)); 8: if (instance["endpoints"] && instance["endpoints"]["nodejs"]) { 9: var endpoint = instance["endpoints"]["nodejs"]; 10: app.listen(endpoint["port"]); 11: } 12: else { 13: app.listen(8080); 14: } 15: } 16: }); But if we tested the application right now we will find that it cannot retrieve any values from service runtime. This is because by default, the entry point of this role was defined to the worker role class. In windows azure environment the service runtime will open a named pipeline to the entry point instance, so that it can connect to the runtime and retrieve values. But in this case, since the entry point was worker role and the Node.js was opened inside the role, the named pipeline was established between our worker role class and service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project, add a new element named Runtime. Then add an element named EntryPoint which specify the Node.js command line. So that the Node.js application will have the connection to service runtime, then it’s able to read the configurations. Start the Node.js at local emulator we can find it retrieved the connections, storage account for local. 
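The CSDEF change above is described only in words, so for illustration here is a sketch of what that Runtime/EntryPoint element typically looks like in ServiceDefinition.csdef. The role name, VM size and the server.js file name are assumptions made for the sketch, not values taken from the original project.

<WorkerRole name="WorkerRole1" vmsize="Small">
  <Runtime>
    <EntryPoint>
      <!-- launching node.exe directly makes the Node.js process the entry point,
           so the named pipe to the service runtime belongs to it rather than
           to the worker role class -->
      <ProgramEntryPoint commandLine="node.exe .\server.js" setReadyOnProcessStart="true" />
    </EntryPoint>
  </Runtime>
</WorkerRole>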
And if we publish the application to Azure, it works against WASD and the storage service through the cloud configuration settings.

Summary

In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime to retrieve the configuration settings and the endpoint information, and showed that in order to make the service runtime available to my Node.js application I need to create an entry point element in the CSDEF file and set "node.exe" as the entry point. I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host a Node.js application, and how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very young network application platform, but because it is simple to learn and deploy, and because it uses a single-threaded, non-blocking IO model, it has become more and more popular for web application and web service development, especially for IO-intensive projects. And since Node.js scales out well, it is a natural fit for a cloud computing platform. Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" does not yet support SQL parameters, while "azure" does support using a storage connection string to create the storage client. But Microsoft is working on making them easier to use and on adding more features and functionality.

PS, you can download the source code here. You can download the source code of my "Copy all always" tool here.

Hope this helps, Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Implementing a robust async stream reader

    - by Jon
    I recently provided an answer to this question: C# - Realtime console output redirection. As often happens, explaining stuff (here "stuff" was how I tackled a similar problem) leads you to greater understanding and/or, as is the case here, "oops" moments. I realized that my solution, as implemented, has a bug. The bug has little practical importance, but it has an extremely large importance to me as a developer: I can't rest easy knowing that my code has the potential to blow up. Squashing the bug is the purpose of this question. I apologize for the long intro, so let's get dirty. I wanted to build a class that allows me to receive input from a Stream in an event-based manner. The stream, in my scenario, is guaranteed to be a FileStream and there is also an associated StreamReader already present to leverage. The public interface of the class is this: public class MyStreamManager { public event EventHandler<ConsoleOutputReadEventArgs> StandardOutputRead; public void StartSendingEvents(); public void StopSendingEvents(); } Obviously this specific scenario has to do with a console's standard output, but that is a detail and does not play an important role. StartSendingEvents and StopSendingEvents do what they advertise; for the purposes of this discussion, we can assume that events are always being sent without loss of generality. The class uses these two fields internally: protected readonly StringBuilder inputAccumulator = new StringBuilder(); protected readonly byte[] buffer = new byte[256]; The functionality of the class is implemented in the methods below. To get the ball rolling: public void StartSendingEvents(); { this.stopAutomation = false; this.BeginReadAsync(); } To read data out of the Stream without blocking, and also without requiring a carriage return char, BeginRead is called: protected void BeginReadAsync() { if (!this.stopAutomation) { this.StandardOutput.BaseStream.BeginRead( this.buffer, 0, this.buffer.Length, this.ReadHappened, null); } } The challenging part: BeginRead requires using a buffer. This means that when reading from the stream, it is possible that the bytes available to read ("incoming chunk") are larger than the buffer. Since we are only handing off data from the stream to a consumer, and that consumer may well have inside knowledge about the size and/or format of these chunks, I want to call event subscribers exactly once for each chunk. Otherwise the abstraction breaks down and the subscribers have to buffer the incoming data and reconstruct the chunks themselves using said knowledge. This is much less convenient to the calling code, and detracts from the usefulness of my class. To this end, if the buffer is full after EndRead, we don't send its contents to subscribers immediately but instead append them to a StringBuilder. The contents of the StringBuilder are only sent back whenever there is no more to read from the stream (thus preserving the chunks). 
private void ReadHappened(IAsyncResult asyncResult) { var bytesRead = this.StandardOutput.BaseStream.EndRead(asyncResult); if (bytesRead == 0) { this.OnAutomationStopped(); return; } var input = this.StandardOutput.CurrentEncoding.GetString( this.buffer, 0, bytesRead); this.inputAccumulator.Append(input); if (bytesRead < this.buffer.Length) { this.OnInputRead(); // only send back if we 're sure we got it all } this.BeginReadAsync(); // continue "looping" with BeginRead } After any read which is not enough to fill the buffer, all accumulated data is sent to the subscribers: private void OnInputRead() { var handler = this.StandardOutputRead; if (handler == null) { return; } handler(this, new ConsoleOutputReadEventArgs(this.inputAccumulator.ToString())); this.inputAccumulator.Clear(); } (I know that as long as there are no subscribers the data gets accumulated forever. This is a deliberate decision). The good This scheme works almost perfectly: Async functionality without spawning any threads Very convenient to the calling code (just subscribe to an event) Maintains the "chunkiness" of the data; this allows the calling code to use inside knowledge of the data without doing any extra work Is almost agnostic to the buffer size (it will work correctly with any size buffer irrespective of the data being read) The bad That last almost is a very big one. Consider what happens when there is an incoming chunk with length exactly equal to the size of the buffer. The chunk will be read and buffered, but the event will not be triggered. This will be followed up by a BeginRead that expects to find more data belonging to the current chunk in order to send it back all in one piece, but... there will be no more data in the stream. In fact, as long as data is put into the stream in chunks with length exactly equal to the buffer size, the data will be buffered and the event will never be triggered. This scenario may be highly unlikely to occur in practice, especially since we can pick any number for the buffer size, but the problem is there. Solution? Unfortunately, after checking the available methods on FileStream and StreamReader, I can't find anything which lets me peek into the stream while also allowing async methods to be used on it. One "solution" would be to have a thread wait on a ManualResetEvent after the "buffer filled" condition is detected. If the event is not signaled (by the async callback) in a small amount of time, then more data from the stream will not be forthcoming and the data accumulated so far should be sent to subscribers. However, this introduces the need for another thread, requires thread synchronization, and is plain inelegant. Specifying a timeout for BeginRead would also suffice (call back into my code every now and then so I can check if there's data to be sent back; most of the time there will not be anything to do, so I expect the performance hit to be negligible). But it looks like timeouts are not supported in FileStream. Since I imagine that async calls with timeouts are an option in bare Win32, another approach might be to PInvoke the hell out of the problem. But this is also undesirable as it will introduce complexity and simply be a pain to code. Is there an elegant way to get around the problem? Thanks for being patient enough to read all of this.

    Read the article

  • Trying to randomise a jQuery content slider

    - by alecrust
    Hi everyone, I'm using a very nice jQuery content slider called Easy Slider on my site that I downloaded from Css Globe. The script is excellent and does just what I want - except I can't make it randomise the list, it always scrolls from left to right or right to left! I'm far from good with JavaScript, so my attempts at solving this have been feeble. Although I'm sure it must be an easy fix! If anyone wouldn't mind taking a glance over the script to see if they can spot what I need to change to make it random it would be greatly appreciated! I've tried contacting the original plugin developer but have had no response yet. The comments on the Easy Slider page didn't bear much fruit either unfortunately. I've pasted the script I'm using on my site below: /* * Easy Slider 1.7 - jQuery plugin * written by Alen Grakalic * http://cssglobe.com/post/4004/easy-slider-15-the-easiest-jquery-plugin-for-sliding * * Copyright (c) 2009 Alen Grakalic (http://cssglobe.com) * Dual licensed under the MIT (MIT-LICENSE.txt) * and GPL (GPL-LICENSE.txt) licenses. * * Built for jQuery library * http://jquery.com * */ (function($) { $.fn.easySlider = function(options){ // default configuration properties var defaults = { prevId: 'prevBtn', prevText: 'Previous', nextId: 'nextBtn', nextText: 'Next', controlsShow: true, controlsBefore: '', controlsAfter: '', controlsFade: true, firstId: 'firstBtn', firstText: 'First', firstShow: false, lastId: 'lastBtn', lastText: 'Last', lastShow: false, vertical: false, speed: 800, auto: false, pause: 7000, continuous: false, numeric: false, numericId: 'controls' }; var options = $.extend(defaults, options); this.each(function() { var obj = $(this); var s = $("li", obj).length; var w = $("li", obj).width(); var h = $("li", obj).height(); var clickable = true; obj.width(w); obj.height(h); obj.css("overflow","hidden"); var ts = s-1; var t = 0; $("ul", obj).css('width',s*w); if(options.continuous){ $("ul", obj).prepend($("ul li:last-child", obj).clone().css("margin-left","-"+ w +"px")); $("ul", obj).append($("ul li:nth-child(2)", obj).clone()); $("ul", obj).css('width',(s+1)*w); }; if(!options.vertical) $("li", obj).css('float','left'); if(options.controlsShow){ var html = options.controlsBefore; if(options.numeric){ html += '<ol id="'+ options.numericId +'"></ol>'; } else { if(options.firstShow) html += '<span id="'+ options.firstId +'"><a href=\"javascript:void(0);\">'+ options.firstText +'</a></span>'; html += ' <span id="'+ options.prevId +'"><a href=\"javascript:void(0);\">'+ options.prevText +'</a></span>'; html += ' <span id="'+ options.nextId +'"><a href=\"javascript:void(0);\">'+ options.nextText +'</a></span>'; if(options.lastShow) html += ' <span id="'+ options.lastId +'"><a href=\"javascript:void(0);\">'+ options.lastText +'</a></span>'; }; html += options.controlsAfter; $(obj).after(html); }; if(options.numeric){ for(var i=0;i<s;i++){ $(document.createElement("li")) .attr('id',options.numericId + (i+1)) .html('<a rel='+ i +' href=\"javascript:void(0);\">'+ (i+1) +'</a>') .appendTo($("#"+ options.numericId)) .click(function(){ animate($("a",$(this)).attr('rel'),true); }); }; } else { $("a","#"+options.nextId).click(function(){ animate("next",true); }); $("a","#"+options.prevId).click(function(){ animate("prev",true); }); $("a","#"+options.firstId).click(function(){ animate("first",true); }); $("a","#"+options.lastId).click(function(){ animate("last",true); }); }; function setCurrent(i){ i = parseInt(i)+1; $("li", "#" + options.numericId).removeClass("current"); 
$("li#" + options.numericId + i).addClass("current"); }; function adjust(){ if(t>ts) t=0; if(t<0) t=ts; if(!options.vertical) { $("ul",obj).css("margin-left",(t*w*-1)); } else { $("ul",obj).css("margin-left",(t*h*-1)); } clickable = true; if(options.numeric) setCurrent(t); }; function animate(dir,clicked){ if (clickable){ clickable = false; var ot = t; switch(dir){ case "next": t = (ot>=ts) ? (options.continuous ? t+1 : ts) : t+1; break; case "prev": t = (t<=0) ? (options.continuous ? t-1 : 0) : t-1; break; case "first": t = 0; break; case "last": t = ts; break; default: t = dir; break; }; var diff = Math.abs(ot-t); var speed = diff*options.speed; if(!options.vertical) { p = (t*w*-1); $("ul",obj).animate( { marginLeft: p }, { queue:false, duration:speed, complete:adjust } ); } else { p = (t*h*-1); $("ul",obj).animate( { marginTop: p }, { queue:false, duration:speed, complete:adjust } ); }; if(!options.continuous && options.controlsFade){ if(t==ts){ $("a","#"+options.nextId).hide(); $("a","#"+options.lastId).hide(); } else { $("a","#"+options.nextId).show(); $("a","#"+options.lastId).show(); }; if(t==0){ $("a","#"+options.prevId).hide(); $("a","#"+options.firstId).hide(); } else { $("a","#"+options.prevId).show(); $("a","#"+options.firstId).show(); }; }; if(clicked) clearTimeout(timeout); if(options.auto && dir=="next" && !clicked){; timeout = setTimeout(function(){ animate("next",false); },diff*options.speed+options.pause); }; }; }; // init var timeout; if(options.auto){; timeout = setTimeout(function(){ animate("next",false); },options.pause); }; if(options.numeric) setCurrent(0); if(!options.continuous && options.controlsFade){ $("a","#"+options.prevId).hide(); $("a","#"+options.firstId).hide(); }; }); }; })(jQuery); Many thanks again! Alec

    Read the article

  • Android: restful API service

    - by Martyn
    Hey, I'm looking to make a service which I can use to make calls to a web based rest api. I've spent a couple of days looking through stackoverflow.com, reading books and looking at articles whilst playing about with some code and I can't get anything which I'm happy with. Basically I want to start a service on app init then I want to be able to ask that service to request a url and return the results. In the meantime I want to be able to display a progress window or something similar. I've created a service currently which uses IDL, I've read somewhere that you only really need this for cross app communication, so think these needs stripping out but unsure how to do callbacks without it. Also when I hit the post(Config.getURL("login"), values) the app seems to pause for a while (seems weird - thought the idea behind a service was that it runs on a different thread!) Currently I have a service with post and get http methods inside, a couple of AIDL files (for two way communication), a ServiceManager which deals with starting, stopping, binding etc to the service and I'm dynamically creating a Handler with specific code for the callbacks as needed. I don't want anyone to give me a complete code base to work on, but some pointers would be greatly appreciated; even if it's to say I'm doing it completely wrong. I'm pretty new to Android and Java dev so if there are any blindingly obvious mistakes here - please don't think I'm a rubbish developer, I'm just wet behind the ears and would appreciate being told exactly where I'm going wrong. Anyway, code in (mostly) full (really didn't want to put this much code here, but I don't know where I'm going wrong - apologies in advance): public class RestfulAPIService extends Service { final RemoteCallbackList<IRemoteServiceCallback> mCallbacks = new RemoteCallbackList<IRemoteServiceCallback>(); public void onStart(Intent intent, int startId) { super.onStart(intent, startId); } public IBinder onBind(Intent intent) { return binder; } public void onCreate() { super.onCreate(); } public void onDestroy() { super.onDestroy(); mCallbacks.kill(); } private final IRestfulService.Stub binder = new IRestfulService.Stub() { public void doLogin(String username, String password) { Message msg = new Message(); Bundle data = new Bundle(); HashMap<String, String> values = new HashMap<String, String>(); values.put("username", username); values.put("password", password); String result = post(Config.getURL("login"), values); data.putString("response", result); msg.setData(data); msg.what = Config.ACTION_LOGIN; mHandler.sendMessage(msg); } public void registerCallback(IRemoteServiceCallback cb) { if (cb != null) mCallbacks.register(cb); } }; private final Handler mHandler = new Handler() { public void handleMessage(Message msg) { // Broadcast to all clients the new value. 
final int N = mCallbacks.beginBroadcast(); for (int i = 0; i < N; i++) { try { switch (msg.what) { case Config.ACTION_LOGIN: mCallbacks.getBroadcastItem(i).userLogIn( msg.getData().getString("response")); break; default: super.handleMessage(msg); return; } } catch (RemoteException e) { } } mCallbacks.finishBroadcast(); } public String post(String url, HashMap<String, String> namePairs) {...} public String get(String url) {...} }; A couple of AIDL files: package com.something.android oneway interface IRemoteServiceCallback { void userLogIn(String result); } and package com.something.android import com.something.android.IRemoteServiceCallback; interface IRestfulService { void doLogin(in String username, in String password); void registerCallback(IRemoteServiceCallback cb); } and the service manager: public class ServiceManager { final RemoteCallbackList<IRemoteServiceCallback> mCallbacks = new RemoteCallbackList<IRemoteServiceCallback>(); public IRestfulService restfulService; private RestfulServiceConnection conn; private boolean started = false; private Context context; public ServiceManager(Context context) { this.context = context; } public void startService() { if (started) { Toast.makeText(context, "Service already started", Toast.LENGTH_SHORT).show(); } else { Intent i = new Intent(); i.setClassName("com.something.android", "com.something.android.RestfulAPIService"); context.startService(i); started = true; } } public void stopService() { if (!started) { Toast.makeText(context, "Service not yet started", Toast.LENGTH_SHORT).show(); } else { Intent i = new Intent(); i.setClassName("com.something.android", "com.something.android.RestfulAPIService"); context.stopService(i); started = false; } } public void bindService() { if (conn == null) { conn = new RestfulServiceConnection(); Intent i = new Intent(); i.setClassName("com.something.android", "com.something.android.RestfulAPIService"); context.bindService(i, conn, Context.BIND_AUTO_CREATE); } else { Toast.makeText(context, "Cannot bind - service already bound", Toast.LENGTH_SHORT).show(); } } protected void destroy() { releaseService(); } private void releaseService() { if (conn != null) { context.unbindService(conn); conn = null; Log.d(LOG_TAG, "unbindService()"); } else { Toast.makeText(context, "Cannot unbind - service not bound", Toast.LENGTH_SHORT).show(); } } class RestfulServiceConnection implements ServiceConnection { public void onServiceConnected(ComponentName className, IBinder boundService) { restfulService = IRestfulService.Stub.asInterface((IBinder) boundService); try { restfulService.registerCallback(mCallback); } catch (RemoteException e) {} } public void onServiceDisconnected(ComponentName className) { restfulService = null; } }; private IRemoteServiceCallback mCallback = new IRemoteServiceCallback.Stub() { public void userLogIn(String result) throws RemoteException { mHandler.sendMessage(mHandler.obtainMessage(Config.ACTION_LOGIN, result)); } }; private Handler mHandler; public void setHandler(Handler handler) { mHandler = handler; } } Service init and bind: // this I'm calling on app onCreate servicemanager = new ServiceManager(this); servicemanager.startService(); servicemanager.bindService(); application = (ApplicationState)this.getApplication(); application.setServiceManager(servicemanager); service function call: // this lot i'm calling as required - in this example for login progressDialog = new ProgressDialog(Login.this); progressDialog.setMessage("Logging you in..."); progressDialog.show(); application = 
(ApplicationState) getApplication(); servicemanager = application.getServiceManager(); servicemanager.setHandler(mHandler); try { servicemanager.restfulService.doLogin(args[0], args[1]); } catch (RemoteException e) { e.printStackTrace(); } ...later in the same file... Handler mHandler = new Handler() { public void handleMessage(Message msg) { switch (msg.what) { case Config.ACTION_LOGIN: if (progressDialog.isShowing()) { progressDialog.dismiss(); } try { ...process login results... } } catch (JSONException e) { Log.e("JSON", "There was an error parsing the JSON", e); } break; default: super.handleMessage(msg); } } }; Any and all help is greatly appreciated and I'll even buy you a coffee or a beer if you fancy :D Martyn

    Read the article

  • Implementing a robust async stream reader for a console

    - by Jon
    I recently provided an answer to this question: C# - Realtime console output redirection. As often happens, explaining stuff (here "stuff" was how I tackled a similar problem) leads you to greater understanding and/or, as is the case here, "oops" moments. I realized that my solution, as implemented, has a bug. The bug has little practical importance, but it has an extremely large importance to me as a developer: I can't rest easy knowing that my code has the potential to blow up. Squashing the bug is the purpose of this question. I apologize for the long intro, so let's get dirty. I wanted to build a class that allows me to receive input from a Stream in an event-based manner. The stream, in my scenario, is guaranteed to be a FileStream and there is also an associated StreamReader already present to leverage. The public interface of the class is this: public class MyStreamManager { public event EventHandler<ConsoleOutputReadEventArgs> StandardOutputRead; public void StartSendingEvents(); public void StopSendingEvents(); } Obviously this specific scenario has to do with a console's standard output. StartSendingEvents and StopSendingEvents do what they advertise; for the purposes of this discussion, we can assume that events are always being sent without loss of generality. The class uses these two fields internally: protected readonly StringBuilder inputAccumulator = new StringBuilder(); protected readonly byte[] buffer = new byte[256]; The functionality of the class is implemented in the methods below. To get the ball rolling: public void StartSendingEvents(); { this.stopAutomation = false; this.BeginReadAsync(); } To read data out of the Stream without blocking, and also without requiring a carriage return char, BeginRead is called: protected void BeginReadAsync() { if (!this.stopAutomation) { this.StandardOutput.BaseStream.BeginRead( this.buffer, 0, this.buffer.Length, this.ReadHappened, null); } } The challenging part: BeginRead requires using a buffer. This means that when reading from the stream, it is possible that the bytes available to read ("incoming chunk") are larger than the buffer. Since we are only handing off data from the stream to a consumer, and that consumer may well have inside knowledge about the size and/or format of these chunks, I want to call event subscribers exactly once for each chunk. Otherwise the abstraction breaks down and the subscribers have to buffer the incoming data and reconstruct the chunks themselves using said knowledge. This is much less convenient to the calling code, and detracts from the usefulness of my class. Edit: There are comments below correctly stating that since the data is coming from a stream, there is absolutely nothing that the receiver can infer about the structure of the data unless it is fully prepared to parse it. What I am trying to do here is leverage the "flush the output" "structure" that the owner of the console imparts while writing on it. I am prepared to assume (better: allow my caller to have the option to assume) that the OS will pass me the data written between two flushes of the stream in exactly one piece. To this end, if the buffer is full after EndRead, we don't send its contents to subscribers immediately but instead append them to a StringBuilder. The contents of the StringBuilder are only sent back whenever there is no more to read from the stream (thus preserving the chunks). 
private void ReadHappened(IAsyncResult asyncResult) { var bytesRead = this.StandardOutput.BaseStream.EndRead(asyncResult); if (bytesRead == 0) { this.OnAutomationStopped(); return; } var input = this.StandardOutput.CurrentEncoding.GetString( this.buffer, 0, bytesRead); this.inputAccumulator.Append(input); if (bytesRead < this.buffer.Length) { this.OnInputRead(); // only send back if we 're sure we got it all } this.BeginReadAsync(); // continue "looping" with BeginRead } After any read which is not enough to fill the buffer, all accumulated data is sent to the subscribers: private void OnInputRead() { var handler = this.StandardOutputRead; if (handler == null) { return; } handler(this, new ConsoleOutputReadEventArgs(this.inputAccumulator.ToString())); this.inputAccumulator.Clear(); } (I know that as long as there are no subscribers the data gets accumulated forever. This is a deliberate decision). The good This scheme works almost perfectly: Async functionality without spawning any threads Very convenient to the calling code (just subscribe to an event) Maintains the "chunkiness" of the data; this allows the calling code to use inside knowledge of the data without doing any extra work Is almost agnostic to the buffer size (it will work correctly with any size buffer irrespective of the data being read) The bad That last almost is a very big one. Consider what happens when there is an incoming chunk with length exactly equal to the size of the buffer. The chunk will be read and buffered, but the event will not be triggered. This will be followed up by a BeginRead that expects to find more data belonging to the current chunk in order to send it back all in one piece, but... there will be no more data in the stream. In fact, as long as data is put into the stream in chunks with length exactly equal to the buffer size, the data will be buffered and the event will never be triggered. This scenario may be highly unlikely to occur in practice, especially since we can pick any number for the buffer size, but the problem is there. Solution? Unfortunately, after checking the available methods on FileStream and StreamReader, I can't find anything which lets me peek into the stream while also allowing async methods to be used on it. One "solution" would be to have a thread wait on a ManualResetEvent after the "buffer filled" condition is detected. If the event is not signaled (by the async callback) in a small amount of time, then more data from the stream will not be forthcoming and the data accumulated so far should be sent to subscribers. However, this introduces the need for another thread, requires thread synchronization, and is plain inelegant. Specifying a timeout for BeginRead would also suffice (call back into my code every now and then so I can check if there's data to be sent back; most of the time there will not be anything to do, so I expect the performance hit to be negligible). But it looks like timeouts are not supported in FileStream. Since I imagine that async calls with timeouts are an option in bare Win32, another approach might be to PInvoke the hell out of the problem. But this is also undesirable as it will introduce complexity and simply be a pain to code. Is there an elegant way to get around the problem? Thanks for being patient enough to read all of this.

    Read the article

  • getting Null pointer exception

    - by Abhijeet
    Hi I am getting this message on emulator when I run my android project: The application MediaPlayerDemo_Video.java (process com.android.MediaPlayerDemo_Video) has stopped unexpectedly. Please try again I am trying to run the MediaPlayerDemo_Video.java given in ApiDemos in the Samples given on developer.android.com. The code is : package com.android.MediaPlayerDemo_Video; import android.app.Activity; import android.media.AudioManager; import android.media.MediaPlayer; import android.media.MediaPlayer.OnBufferingUpdateListener; import android.media.MediaPlayer.OnCompletionListener; import android.media.MediaPlayer.OnPreparedListener; import android.media.MediaPlayer.OnVideoSizeChangedListener; import android.os.Bundle; import android.util.Log; import android.view.SurfaceHolder; import android.view.SurfaceView; import android.widget.Toast; public class MediaPlayerDemo_Video extends Activity implements OnBufferingUpdateListener, OnCompletionListener, OnPreparedListener, OnVideoSizeChangedListener, SurfaceHolder.Callback { private static final String TAG = "MediaPlayerDemo"; private int mVideoWidth; private int mVideoHeight; private MediaPlayer mMediaPlayer; private SurfaceView mPreview; private SurfaceHolder holder; private String path; private Bundle extras; private static final String MEDIA = "media"; // private static final int LOCAL_AUDIO = 1; // private static final int STREAM_AUDIO = 2; // private static final int RESOURCES_AUDIO = 3; private static final int LOCAL_VIDEO = 4; private static final int STREAM_VIDEO = 5; private boolean mIsVideoSizeKnown = false; private boolean mIsVideoReadyToBePlayed = false; /** * * Called when the activity is first created. */ @Override public void onCreate(Bundle icicle) { super.onCreate(icicle); setContentView(R.layout.mediaplayer_2); mPreview = (SurfaceView) findViewById(R.id.surface); holder = mPreview.getHolder(); holder.addCallback(this); holder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS); extras = getIntent().getExtras(); } private void playVideo(Integer Media) { doCleanUp(); try { switch (Media) { case LOCAL_VIDEO: // Set the path variable to a local media file path. path = ""; if (path == "") { // Tell the user to provide a media file URL. Toast .makeText( MediaPlayerDemo_Video.this, "Please edit MediaPlayerDemo_Video Activity, " + "and set the path variable to your media file path." + " Your media file must be stored on sdcard.", Toast.LENGTH_LONG).show(); } break; case STREAM_VIDEO: /* * Set path variable to progressive streamable mp4 or * 3gpp format URL. Http protocol should be used. * Mediaplayer can only play "progressive streamable * contents" which basically means: 1. the movie atom has to * precede all the media data atoms. 2. The clip has to be * reasonably interleaved. * */ path = ""; if (path == "") { // Tell the user to provide a media file URL. 
Toast .makeText( MediaPlayerDemo_Video.this, "Please edit MediaPlayerDemo_Video Activity," + " and set the path variable to your media file URL.", Toast.LENGTH_LONG).show(); } break; } // Create a new media player and set the listeners mMediaPlayer = new MediaPlayer(); mMediaPlayer.setDataSource(path); mMediaPlayer.setDisplay(holder); mMediaPlayer.prepare(); mMediaPlayer.setOnBufferingUpdateListener(this); mMediaPlayer.setOnCompletionListener(this); mMediaPlayer.setOnPreparedListener(this); mMediaPlayer.setOnVideoSizeChangedListener(this); mMediaPlayer.setAudioStreamType(AudioManager.STREAM_MUSIC); } catch (Exception e) { Log.e(TAG, "error: " + e.getMessage(), e); } } public void onBufferingUpdate(MediaPlayer arg0, int percent) { Log.d(TAG, "onBufferingUpdate percent:" + percent); } public void onCompletion(MediaPlayer arg0) { Log.d(TAG, "onCompletion called"); } public void onVideoSizeChanged(MediaPlayer mp, int width, int height) { Log.v(TAG, "onVideoSizeChanged called"); if (width == 0 || height == 0) { Log.e(TAG, "invalid video width(" + width + ") or height(" + height + ")"); return; } mIsVideoSizeKnown = true; mVideoWidth = width; mVideoHeight = height; if (mIsVideoReadyToBePlayed && mIsVideoSizeKnown) { startVideoPlayback(); } } public void onPrepared(MediaPlayer mediaplayer) { Log.d(TAG, "onPrepared called"); mIsVideoReadyToBePlayed = true; if (mIsVideoReadyToBePlayed && mIsVideoSizeKnown) { startVideoPlayback(); } } public void surfaceChanged(SurfaceHolder surfaceholder, int i, int j, int k) { Log.d(TAG, "surfaceChanged called"); } public void surfaceDestroyed(SurfaceHolder surfaceholder) { Log.d(TAG, "surfaceDestroyed called"); } public void surfaceCreated(SurfaceHolder holder) { Log.d(TAG, "surfaceCreated called"); playVideo(extras.getInt(MEDIA)); } @Override protected void onPause() { super.onPause(); releaseMediaPlayer(); doCleanUp(); } @Override protected void onDestroy() { super.onDestroy(); releaseMediaPlayer(); doCleanUp(); } private void releaseMediaPlayer() { if (mMediaPlayer != null) { mMediaPlayer.release(); mMediaPlayer = null; } } private void doCleanUp() { mVideoWidth = 0; mVideoHeight = 0; mIsVideoReadyToBePlayed = false; mIsVideoSizeKnown = false; } private void startVideoPlayback() { Log.v(TAG, "startVideoPlayback"); holder.setFixedSize(mVideoWidth, mVideoHeight); mMediaPlayer.start(); } } I think the above message is due to Null pointer exception , however I may be false. I am unable to find where the error is . So , Please someone help me out .

    Read the article

  • RemoteViewsService never gets called whenever we update app widget

    - by user3689160
    I am making one task widget application. This is a simple widget application which can be used to add new tasks by clicking on "+New Task". So ideally what I have done is, I've added a widget first, then when we click on "+New Task" a new activity opens up from where we can type in a new task and it should get updated in the ListView inside the home screen widget. Problem: Now whenever I add a new task, the ListView does not get updated. The data gets inside the database a new is created as well. My onUpdate(...) method gets called as well. There is a WidgetService.java which is a RemoteViewService that is being used to call the RemoteViewFactory but WidgetService.java gets called only when we create the widget, never after that. WidgetService.java never gets called again. I have the following setup MyWidgetProvider.java AppWidgetProvider class for updating the widget and filling its remote views. public class MyWidgetProvider extends AppWidgetProvider { public static int randomNumber=132; static RemoteViews remoteViews = null; @Override public void onUpdate(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds) { updateList(context,appWidgetManager,appWidgetIds); } public static void updateList(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds){ final int N = appWidgetIds.length; Log.d("MyWidgetProvider", "length="+appWidgetIds.length+""); for (int i = 0; i<N; ++i) { Log.d("MyWidgetProvider", "value"+ i + "="+appWidgetIds[i]); // Calling updateWidgetListView(context, appWidgetIds[i]); to load the listview with data remoteViews = updateWidgetListView(context, appWidgetIds[i]); // Updating the Widget with the new data filled remoteviews appWidgetManager.updateAppWidget(appWidgetIds[i], remoteViews); appWidgetManager.notifyAppWidgetViewDataChanged(appWidgetIds[i], R.id.lv_tasks); } } private static RemoteViews updateWidgetListView(Context context, int appWidgetId) { Intent svcIntent = null; remoteViews = new RemoteViews(context.getPackageName(),R.layout.widget_demo); //RemoteViews Service needed to provide adapter for ListView svcIntent = new Intent(context, WidgetService.class); //passing app widget id to that RemoteViews Service svcIntent.putExtra(AppWidgetManager.EXTRA_APPWIDGET_ID, appWidgetId); //setting a unique Uri to the intent //binding the data from the WidgetService to the svcIntent svcIntent.setData(Uri.parse(svcIntent.toUri(Intent.URI_INTENT_SCHEME))); //setting adapter to listview of the widget remoteViews.setRemoteAdapter(R.id.lv_tasks, svcIntent); //setting click listner on the "New Task" (TextView) of the widget remoteViews.setOnClickPendingIntent(R.id.tv_add_task, buildButtonPendingIntent(context)); return remoteViews; } // Just to create PendingIntent, this will be broadcasted everytime textview containing "New Task" is clicked and OnReceive method of MyWidgetIntentReceiver.java will run public static PendingIntent buildButtonPendingIntent(Context context) { Intent intent = new Intent(); intent.putExtra("TASK_DONE_VALUE_VALID", false); intent.setAction("com.ommzi.intent.action.CLEARTASK"); return PendingIntent.getBroadcast(context, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT); } // This method is defined here so that we can update the remoteviews from other classes as well, like we'll do it in MyWidgetIntentReceiver.java public static void pushWidgetUpdate(Context context, RemoteViews remoteViews) { ComponentName myWidget = new ComponentName(context, MyWidgetProvider.class); AppWidgetManager manager = 
AppWidgetManager.getInstance(context); manager.updateAppWidget(myWidget, remoteViews); } } WidgetService.java A RemoteViewsService class for creating a RemoteViewsFactory. public class WidgetService extends RemoteViewsService { @Override public RemoteViewsFactory onGetViewFactory(Intent intent) { int appWidgetId = intent.getIntExtra(AppWidgetManager.EXTRA_APPWIDGET_ID,AppWidgetManager.INVALID_APPWIDGET_ID); return (new ListProvider(this.getApplicationContext(), intent)); } } ListProvider.java A RemoteViewsFactory class for filling the ListView inside the widget. public class ListProvider implements RemoteViewsFactory { private List<TemplateTaskData> tasks; private Context context = null; private int appWidgetId; int increment=0; public ListProvider(Context context, Intent intent) { this.context = context; appWidgetId = intent.getIntExtra(AppWidgetManager.EXTRA_APPWIDGET_ID,AppWidgetManager.INVALID_APPWIDGET_ID); //appWidgetId = Integer.valueOf(intent.getData().getSchemeSpecificPart())- MyWidgetProvider.randomNumber; populateListItem(); } private void populateListItem() { tasks = new ArrayList<TemplateTaskData>(); com.ommzi.database.sqliteDataBase letStartDB = new com.ommzi.database.sqliteDataBase(context); letStartDB.open(); tasks=letStartDB.getData(); letStartDB.close(); } public int getCount() { return tasks.size(); } public long getItemId(int position) { return position; } public RemoteViews getViewAt(int position) { // We are only showing the text view field here as there is limitation on using Checkbox within the widget // We prefer to on an activity for the user to make comprehensive changes as changes inside the widget is always limited. // To view the supported list of views for the widget you can visit http://developer.android.com/guide/topics/appwidgets/index.html final RemoteViews remoteView = new RemoteViews(context.getPackageName(), R.layout.screen_wcitem); TemplateTaskData taskDTO = tasks.get(position); Log.d("My ListProvider", "1"); remoteView.setTextViewText(R.id.wc_add_task, taskDTO.getTaskName()); if(taskDTO.getTaskDone()){ remoteView.setImageViewResource(R.id.imageView1, R.drawable.checked); }else{ remoteView.setImageViewResource(R.id.imageView1, R.drawable.unchecked); } Intent intent = new Intent(); intent.setAction("com.ommzi.intent.action.GETTASK"); intent.putExtra("TASK_DONE_VALUE_VALID", true); intent.putExtra("ROWID", taskDTO.getRowId()); intent.putExtra("TASK_NAME", taskDTO.getTaskName()); intent.putExtra("TASK_DONE_VALUE", taskDTO.getTaskDone()); remoteView.setOnClickPendingIntent(R.id.imageView1, PendingIntent.getBroadcast(context, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT)); Log.d("My ListProvider, Inside getView", "Value of Incrementor=" + increment + "\n ROWID=" + taskDTO.getRowId() + "\n TASK_NAME=" + taskDTO.getTaskName() + "\n TASK_DONE_VALUE=" +taskDTO.getTaskDone() + "\n Total Database Size=" + tasks.size()); return remoteView; } public RemoteViews getLoadingView() { // TODO Auto-generated method stub return null; } public int getViewTypeCount() { // TODO Auto-generated method stub return 1; } public boolean hasStableIds() { // TODO Auto-generated method stub return false; } public void onCreate() { // TODO Auto-generated method stub } public void onDataSetChanged() { // TODO Auto-generated method stub } public void onDestroy() { // TODO Auto-generated method stub } } GetTaskActivity.java This is another activity that is fired from a BroadcastReceiver(MyWidgetIntentReceiver.java). Purpose of this is to get a new task added to the list of task. 
MyWidgetIntentReceiver.java A BroadcastReceiver fired via a PendingIntent that starts the GetTaskActivity.java activity. Please help me out with this problem. I have seen all the posts here and on other websites and nobody has a solution, but I am sure a solution to this exists. Thanks for your help!

    Read the article

  • Where should I put zoomIn in my MapActivity?

    - by Johny
    I'm writing an Android app, and I'd like to zoomIn as soon as the map has been loaded. I get the following error: java.lang.IllegalArgumentException: width and height must be > 0 This MapActivity - width and height must be > 0 question suggests the problem is the zoomIn() method is in the onCreate() method. But I get same error when I put it in the onResume() method. I've been searching for hours and I can't find anything about it at http://developer.android.com or anywhere else... Also I can't find a way to get the time point the map has been loaded. A "MapLoadedListener" or something like that... EDIT Here is my code: public class AMap extends MapActivity{ private final String LOG_TAG = this.getClass().getSimpleName(); private Context mContext; private Chronometer timer; private TextView tvCountdown; private RelativeLayout rl; private MapView mapView; private MapController mapController; private List<Overlay> mapOverlays; private PlayersOverlay playersOverlay; private Drawable drawable; private Builder endDialog; private ContextThemeWrapper ctw; private Handler mHandler = new Handler(); private Player player = new Player(); private StartTask startTask; private EndTask endTask; private MyDBAdapter mdba; private Cursor playersCursor; private UpdateBroadcastReceiver r; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.map_view); mContext = AMap.this; // set map mapView = (MapView) findViewById(R.id.mapview); mapView.setBuiltInZoomControls(true); mapView.setFocusable(true); // find the relative layout rl = (RelativeLayout) findViewById(R.id.rl); // set the chronometer timer = (Chronometer) findViewById(R.id.tv_timer); timer.setBackgroundColor(Color.DKGRAY); // set the countdown textview tvCountdown = (TextView) findViewById(R.id.tv_countdown); // Open DB connection and get players Cursor mdba = new MyDBAdapter(mContext); mdba.open(); playersCursor = mdba.getGame(); // Get this player's id and location Intent starter = this.getIntent(); player.setId(starter.getIntExtra("id", 0)); player.setLatitude(starter.getDoubleExtra("lat", 0)); player.setLongitude(starter.getDoubleExtra("lon", 0)); // Set this player's location as map's center GeoPoint geoPoint = new GeoPoint((int) (player.getLatitude()*1E6), (int) (player.getLongitude()*1E6)); mapController = mapView.getController(); mapController.setCenter(geoPoint); mapController.setZoom(15); Log.d(LOG_TAG, "My playersCursor has "+playersCursor.getCount()+" rows"); // drawable is needed but not used drawable = this.getResources().getDrawable(R.drawable.ic_launcher); // set PlayersOverlay (locations and statuses) playersOverlay = new PlayersOverlay(player.getId(), playersCursor, drawable, this); mapOverlays = mapView.getOverlays(); mapOverlays.add(playersOverlay); mHandler.postDelayed(mUpdateTimeTask, 100); } private Runnable mUpdateTimeTask = new Runnable() { public void run() { int h = mapView.getLayoutParams().height; int w = mapView.getLayoutParams().width; Log.d(LOG_TAG, "w = "+w+" , h = "+h); mHandler.postAtTime(this, System.currentTimeMillis() + 1000); } }; @Override public void onAttachedToWindow(){ Log.d(LOG_TAG, "Attached to Window"); int h = mapView.getLayoutParams().height; int w = mapView.getLayoutParams().width; Log.d(LOG_TAG, " Attached to window: w = "+w+" , h = "+h); //mapController.zoomInFixing(screenPoint.x, screenPoint.y); } public void onWindowFocusChanged(boolean hasFocus){ Log.d(LOG_TAG, "Focus changed to: "+hasFocus); int h = 
mapView.getLayoutParams().height; int w = mapView.getLayoutParams().width; Log.d(LOG_TAG, " Window focus changed: w = "+w+" , h = "+h); //mapController.zoomInFixing(screenPoint.x, screenPoint.y); } @Override protected void onStart(){ super.onStart(); // Create and register the broadcast receiver for messages from service IntentFilter filter = new IntentFilter(AppConstants.iGAME_UPDATE); r = new UpdateBroadcastReceiver(); registerReceiver(r, filter); // Create the dialog for end of game ctw = new ContextThemeWrapper(mContext, android.R.style.Theme_Translucent_NoTitleBar_Fullscreen); endDialog = new AlertDialog.Builder(ctw); endDialog.setMessage("End of Game"); endDialog.setCancelable(false); endDialog.setNeutralButton("OK", new OnClickListener(){ @Override public void onClick(DialogInterface dialog, int which) { Intent highScores = new Intent(AMap.this, HighScores.class); startActivity(highScores); playersCursor.close(); finish(); } }); } @Override protected void onStop() { if(!playersCursor.isClosed()) playersCursor.close(); unregisterReceiver(r); mdba.close(); super.onStop(); } @Override protected boolean isRouteDisplayed() { return false; } // Receives signal from NetworkService that DB has been updated public class UpdateBroadcastReceiver extends BroadcastReceiver { boolean startSignal, update, endSignal; @Override public void onReceive(Context context, Intent intent) { endSignal = intent.getBooleanExtra("endSignal", false); if(endSignal){ Log.d(LOG_TAG, "Game Update BroadcastReceiver received End Signal"); endTask = new EndTask(); endTask.execute(); return; } update = intent.getBooleanExtra("update", false); if(update){ Log.d(LOG_TAG, "Game Update BroadcastReceiver received game update"); playersCursor.requery(); mapView.invalidate(); return; } startSignal = intent.getBooleanExtra("startSignal", false); if(startSignal){ Log.d(LOG_TAG, "Game Update BroadcastReceiver received Start Signal"); startTask = new StartTask(); startTask.execute(); return; } } } class StartTask extends AsyncTask<Void,Integer,Void>{ private final ToneGenerator tg = new ToneGenerator(AudioManager.STREAM_NOTIFICATION, 100); private final long DELAY = 1200; @Override protected Void doInBackground(Void... params) { int i = 3; while(i>=0){ publishProgress(i); try { Thread.sleep(DELAY); } catch (InterruptedException e) { e.printStackTrace(); } i--; } return null; } @Override protected void onProgressUpdate(Integer... progress){ tg.startTone(ToneGenerator.TONE_PROP_PROMPT); tvCountdown.setText(""+progress[0]); } @Override protected void onPostExecute(Void result) { rl.removeView(tvCountdown); timer.setBase(SystemClock.elapsedRealtime()); timer.start(); //enable screen touches playersOverlay.setGameStarted(true); } } class EndTask extends AsyncTask<Void,Void,Void>{ @Override protected void onPreExecute(){ //disable screen touches playersOverlay.setEndOfGame(true); timer.stop(); } @Override protected Void doInBackground(Void... params) { return null; } @Override protected void onPostExecute(Void result) { try{ endDialog.show(); }catch(Exception e){ Toast.makeText(mContext, "End of game", Toast.LENGTH_LONG); Intent highScores = new Intent(AMap.this, HighScores.class); startActivity(highScores); playersCursor.close(); finish(); } mHandler.removeCallbacks(mUpdateTimeTask); } } }

    Read the article

  • Problem with jquery ajax form on Codeigniter

    - by Code Burn
    Everytime I test the email is send correctly. (I have tested in PC: IE6, IE7, IE8, Safari, Firefox, Chrome. MAC: Safari, Firefox, Chrome.) The _POST done in jquery (javascript). Then when I turn off javascript in my browser nothing happens, because nothing is _POSTed. Nome: Jon Doe Empresa: Star Cargo: Developer Email: [email protected] Telefone: 090909222988 Assunto: Subject here.. But I keep recieving emails like this from costumers: Nome: Empresa: Cargo: Email: Telefone: Assunto: CONTACT_FORM.PHP <form name="frm" id="frm"> <div class="campoFormulario nomeDeCampo texto textocinzaescuro" >Nome<font style="color:#EE3063;">*</font></div> <div class="campoFormulario inputDeCampo" ><input class="texto textocinzaescuro" size="31" name="Cnome" id="Cnome" value=""/></div> <div class="campoFormulario nomeDeCampo texto textocinzaescuro" >Empresa<font style="color:#EE3063;">*</font></div> <div class="campoFormulario inputDeCampo" ><input class="texto textocinzaescuro" size="31" name="CEmpresa" id="CEmpresa" value=""/></div> <div class="campoFormulario nomeDeCampo texto textocinzaescuro" >Cargo</div> <div class="campoFormulario inputDeCampo" ><input class="texto textocinzaescuro" size="31" name="CCargo" id="CCargo" value=""/></div> <div class="campoFormulario nomeDeCampo texto textocinzaescuro" >Email<font style="color:#EE3063;">*</font></div> <div class="campoFormulario inputDeCampo" ><input class="texto textocinzaescuro" size="31" name="CEmail" id="CEmail" value=""/></div> <div class="campoFormulario nomeDeCampo texto textocinzaescuro" >Telefone</div> <div class="campoFormulario inputDeCampo" ><input class="texto textocinzaescuro" size="31" name="CTelefone" id="CTelefone" value=""/></div> <div class="campoFormulario nomeDeCampo texto textocinzaescuro" >Assunto<font style="color:#EE3063;">*</font></div> <div class="campoFormulario inputDeCampo" ><textarea class="texto textocinzaescuro" name="CAssunto" id="CAssunto" rows="2" cols="28"></textarea></div> <div class="campoFormulario nomeDeCampo texto textocinzaescuro" >&nbsp;</div> <div class="campoFormulario inputDeCampo" style="text-align:right;" ><input id="Cbutton" class="texto textocinzaescuro" type="submit" name="submit" value="Enviar" /></div> </form> <script type="text/javascript"> $(function() { $("#Cbutton").click(function() { if(validarForm()){ var Cnome = $("input#Cnome").val(); var CEmpresa = $("input#CEmpresa").val(); var CEmail = $("input#CEmail").val(); var CCargo = $("input#CCargo").val(); var CTelefone = $("input#CTelefone").val(); var CAssunto = $("textarea#CAssunto").val(); var dataString = 'nome='+ Cnome + '&Empresa=' + CEmpresa + '&Email=' + CEmail + '&Cargo=' + CCargo + '&Telefone=' + CTelefone + '&Assunto=' + CAssunto; //alert (dataString);return false; $.ajax({ type: "POST", url: "http://www.myserver.com/index.php/pt/envia", data: dataString, success: function() { $('#frm').remove(); $('#blocoform').append("<br />Obrigado. 
<img id='checkmark' src='http://www.myserver.com/public/images/estrutura/ok.gif' /><br />Será contactado brevemente.<br /><br /><br /><br /><br /><br />") .hide() .fadeIn(1500); } }); } return false; }); }); function validarForm(){ var error = 0; if(!validateNome(document.getElementById("Cnome"))){ error = 1 ;} if(!validateNome(document.getElementById("CEmpresa"))){ error = 1 ;} if(!validateEmail(document.getElementById("CEmail"))){ error = 1 ;} if(!validateNome(document.getElementById("CAssunto"))){ error = 1 ;} if(error == 0){ //frm.submit(); return true; }else{ alert('Preencha os campos correctamente.'); return false; } } function validateNome(fld){ if( fld.value.length == 0 ){ fld.style.backgroundColor = '#FFFFCC'; //alert('Descrição é um campo obrigatório.'); return false; }else { fld.style.background = 'White'; return true; } } function trim(s) { return s.replace(/^\s+|\s+$/, ''); } function validateEmail(fld) { var tfld = trim(fld.value); var emailFilter = /^[^@]+@[^@.]+\.[^@]*\w\w$/ ; var illegalChars= /[\(\)\<\>\,\;\:\\\"\[\]]/ ; if (fld.value == "") { fld.style.background = '#FFFFCC'; //alert('Email é um campo obrigatório.'); return false; } else if (!emailFilter.test(tfld)) { //alert('Email inválido.'); fld.style.background = '#FFFFCC'; return false; } else if (fld.value.match(illegalChars)) { fld.style.background = '#FFFFCC'; //alert('Email inválido.'); return false; } else { fld.style.background = 'White'; return true; } } </script> FUNCTION ENVIA (email sender): function envia() { $this->load->helper(array('form', 'url')); $nome = $_POST['nome']; $empresa = $_POST['Empresa']; $cargo = $_POST['Cargo']; $email = $_POST['Email']; $telefone = $_POST['Telefone']; $assunto = $_POST['Assunto']; $mensagem = " Nome:".$nome." Empresa:".$empresa." Cargo:".$cargo." Email:".$email." Telefone:".$telefone." Assunto:".$assunto.""; $headers = 'From: [email protected]' . "\r\n" . 'Reply-To: no-reply' . "\r\n" . 'X-Mailer: PHP/' . phpversion(); mail('[email protected]', $mensagem, $headers); }

    Read the article

  • TCP RST Reset Every 5 Minutes on Windows 2003 sp2

    - by Dan
    Hey, Recently I had a web developer come to me and ask why he was receiving connection errors in his app that was accessing a sql database. So, I went through my normal trouble shooting steps to isolate or reproduce the issue. I discovered that if I connected to the database using Query Analyzer and let the connection idle for 5 minutes it would disconnect. Meaning... I would no longer be able to refresh my tables or any other object/node within the object browser in Query Analyzer. I would have to right click on the instance and refresh for it to re-establish the connection. Next I went to wireshark and ran a capture on the client pc's nic card. Sure enough it was receiving a TCP RST reset every 5 min if the connection idled longer than 5 min. I also ran a capture on the SQL Server and noticed the TCP RST reset command as well. Attached below is the capture from the client Machine. If someone could please assist... That would be great. -I checked all settings within SQL Server 2000 against another server and they all seem to be the same. -Issue does not occur if I connect to any other SQL server 2000 server. -Issue does not occur if connecting to SQL on the server itself... so only over the network. -I consulted with network team and this is the response back: There are no firewalls or proxies in between SQL Server and your desktop. The traffic flows like this: Desktop-Access Switch-Distro Switch-Core Switch-Datacenter Switch-SQL Server None of the switches have security ACL’s configured on them. Also they stated that NAT was not turned on. -Issue does not occur with SQL server Enterprise Manager. -Ran SQL Profiler at the same time and did not see anything out of the ordinary during the RST I HAVE SEARCHED HIGH AND LOW ON GOOGLE FOR A RESOLUTION FOR THIS ISSUE. NO LUCK! My questions are: What could be causing this? Wrong Sequence number? setting in a router or switch the network team may have over looked? Setting within Windows? Setting within SQL Server 2000 that I have over looked? Better way to utilize Wireshark to find more answers? RST is about 10 from the bottom. No. 
No.  Time       Source    Destination  Protocol  Info
258  24.390708  x.x.x.99  x.x.x.10     TCP  14488 > 2226 [SYN] Seq=0 Len=0 MSS=1260
259  24.401679  x.x.x.10  x.x.x.99     TCP  2226 > 14488 [SYN, ACK] Seq=0 Ack=1 Win=64240 Len=0 MSS=1460
260  24.401729  x.x.x.99  x.x.x.10     TCP  14488 > 2226 [ACK] Seq=1 Ack=1 Win=65535 [TCP CHECKSUM INCORRECT] Len=0
261  24.402212  x.x.x.99  x.x.x.10     TCP  14488 > 2226 [PSH, ACK] Seq=1 Ack=1 Win=65535 [TCP CHECKSUM INCORRECT] Len=42
262  24.413335  x.x.x.10  x.x.x.99     TCP  2226 > 14488 [PSH, ACK] Seq=1 Ack=43 Win=64198 Len=37
285  24.466512  x.x.x.99  x.x.x.10     TCP  14488 > 2226 [ACK] Seq=43 Ack=38 Win=65498 [TCP CHECKSUM INCORRECT] Len=1260
286  24.466536  x.x.x.99  x.x.x.10     TCP  14488 > 2226 [PSH, ACK] Seq=1303 Ack=38 Win=65498 [TCP CHECKSUM INCORRECT] Len=437
289  24.478168  x.x.x.10  x.x.x.99     TCP  2226 > 14488 [ACK] Seq=38 Ack=1740 Win=64240 Len=0
290  24.480078  x.x.x.10  x.x.x.99     TCP  2226 > 14488 [PSH, ACK] Seq=38 Ack=1740 Win=64240 Len=385
293  24.493629  x.x.x.99  x.x.x.10     TCP  14488 > 2226 [PSH, ACK] Seq=1740 Ack=423 Win=65113 [TCP CHECKSUM INCORRECT] Len=60
294  24.504637  x.x.x.10  x.x.x.99     TCP  2226 > 14488 [PSH, ACK] Seq=423 Ack=1800 Win=64180 Len=17
295  24.533197  x.x.x.99  x.x.x.10     TCP  14488 > 2226 [PSH, ACK] Seq=1800 Ack=440 Win=65096 [TCP CHECKSUM INCORRECT] Len=44
296  24.544098  x.x.x.10  x.x.x.99     TCP  2226 > 14488 [PSH, ACK] Seq=440 Ack=1844 Win=64136 Len=17
297  24.544524  x.x.x.99  x.x.x.10     TCP  14488 > 2226 [PSH, ACK] Seq=1844 Ack=457 Win=65079 [TCP CHECKSUM INCORRECT] Len=58
298  24.558033  x.x.x.10  x.x.x.99     TCP  2226 > 14488 [PSH, ACK] Seq=457 Ack=1902 Win=64078 Len=31
299  24.558493  x.x.x.99  x.x.x.10     TCP  14488 > 2226 [PSH, ACK] Seq=1902 Ack=488 Win=65048 [TCP CHECKSUM INCORRECT] Len=92
300  24.569984  x.x.x.10  x.x.x.99     TCP  2226 > 14488 [PSH, ACK] Seq=488 Ack=1994 Win=63986 Len=70
301  24.577395  x.x.x.99  x.x.x.10     TCP  14488 > 2226 [PSH, ACK] Seq=1994 Ack=558 Win=64978 [TCP CHECKSUM INCORRECT] Len=448
303  24.589834  x.x.x.10  x.x.x.99     TCP  2226 > 14488 [PSH, ACK] Seq=558 Ack=2442 Win=63538 Len=64
304  24.590122  x.x.x.99  x.x.x.10     TCP  14488 > 2226 [FIN, ACK] Seq=2442 Ack=622 Win=64914 [TCP CHECKSUM INCORRECT] Len=0
305  24.601094  x.x.x.10  x.x.x.99     TCP  2226 > 14488 [ACK] Seq=622 Ack=2443 Win=63538 Len=0
306  24.601659  x.x.x.10  x.x.x.99     TCP  2226 > 14488 [FIN, ACK] Seq=622 Ack=2443 Win=63538 Len=0
307  24.601686  x.x.x.99  x.x.x.10     TCP  14488 > 2226 [ACK] Seq=2443 Ack=623 Win=64914 [TCP CHECKSUM INCORRECT] Len=0
321  25.839371  x.x.x.99  x.x.x.10     TCP  14492 > 2226 [SYN] Seq=0 Len=0 MSS=1260
322  25.850291  x.x.x.10  x.x.x.99     TCP  2226 > 14492 [SYN, ACK] Seq=0 Ack=1 Win=64240 Len=0 MSS=1460
323  25.850321  x.x.x.99  x.x.x.10     TCP  14492 > 2226 [ACK] Seq=1 Ack=1 Win=65535 [TCP CHECKSUM INCORRECT] Len=0
324  25.850660  x.x.x.99  x.x.x.10     TCP  14492 > 2226 [PSH, ACK] Seq=1 Ack=1 Win=65535 [TCP CHECKSUM INCORRECT] Len=42
325  25.861573  x.x.x.10  x.x.x.99     TCP  2226 > 14492 [PSH, ACK] Seq=1 Ack=43 Win=64198 Len=37
326  25.863103  x.x.x.99  x.x.x.10     TCP  14492 > 2226 [ACK] Seq=43 Ack=38 Win=65498 [TCP CHECKSUM INCORRECT] Len=1260
327  25.863130  x.x.x.99  x.x.x.10     TCP  14492 > 2226 [PSH, ACK] Seq=1303 Ack=38 Win=65498 [TCP CHECKSUM INCORRECT] Len=463
328  25.874417  x.x.x.10  x.x.x.99     TCP  2226 > 14492 [ACK] Seq=38 Ack=1766 Win=64240 Len=0
329  25.876315  x.x.x.10  x.x.x.99     TCP  2226 > 14492 [PSH, ACK] Seq=38 Ack=1766 Win=64240 Len=385
330  25.876905  x.x.x.99  x.x.x.10     TCP  14492 > 2226 [PSH, ACK] Seq=1766 Ack=423 Win=65113 [TCP CHECKSUM INCORRECT] Len=60
331  25.887773  x.x.x.10  x.x.x.99     TCP  2226 > 14492 [PSH, ACK] Seq=423 Ack=1826 Win=64180 Len=17
332  25.888299  x.x.x.99  x.x.x.10     TCP  14492 > 2226 [PSH, ACK] Seq=1826 Ack=440 Win=65096 [TCP CHECKSUM INCORRECT] Len=44
333  25.899169  x.x.x.10  x.x.x.99     TCP  2226 > 14492 [PSH, ACK] Seq=440 Ack=1870 Win=64136 Len=17
334  25.899574  x.x.x.99  x.x.x.10     TCP  14492 > 2226 [PSH, ACK] Seq=1870 Ack=457 Win=65079 [TCP CHECKSUM INCORRECT] Len=58
335  25.910618  x.x.x.10  x.x.x.99     TCP  2226 > 14492 [PSH, ACK] Seq=457 Ack=1928 Win=64078 Len=31
336  25.911051  x.x.x.99  x.x.x.10     TCP  14492 > 2226 [PSH, ACK] Seq=1928 Ack=488 Win=65048 [TCP CHECKSUM INCORRECT] Len=92
337  25.922068  x.x.x.10  x.x.x.99     TCP  2226 > 14492 [PSH, ACK] Seq=488 Ack=2020 Win=63986 Len=70
338  25.922500  x.x.x.99  x.x.x.10     TCP  14492 > 2226 [PSH, ACK] Seq=2020 Ack=558 Win=64978 [TCP CHECKSUM INCORRECT] Len=34
339  25.933621  x.x.x.10  x.x.x.99     TCP  2226 > 14492 [PSH, ACK] Seq=558 Ack=2054 Win=63952 Len=29
340  25.941165  x.x.x.99  x.x.x.10     TCP  14492 > 2226 [PSH, ACK] Seq=2054 Ack=587 Win=64949 [TCP CHECKSUM INCORRECT] Len=54
341  25.952164  x.x.x.10  x.x.x.99     TCP  2226 > 14492 [PSH, ACK] Seq=587 Ack=2108 Win=63898 Len=17
342  25.952993  x.x.x.99  x.x.x.10     TCP  14492 > 2226 [PSH, ACK] Seq=2108 Ack=604 Win=64932 [TCP CHECKSUM INCORRECT] Len=72
343  25.963889  x.x.x.10  x.x.x.99     TCP  2226 > 14492 [PSH, ACK] Seq=604 Ack=2180 Win=63826 Len=17
344  25.964366  x.x.x.99  x.x.x.10     TCP  14492 > 2226 [PSH, ACK] Seq=2180 Ack=621 Win=64915 [TCP CHECKSUM INCORRECT] Len=52
345  25.975253  x.x.x.10  x.x.x.99     TCP  2226 > 14492 [PSH, ACK] Seq=621 Ack=2232 Win=63774 Len=17
346  25.975590  x.x.x.99  x.x.x.10     TCP  14492 > 2226 [PSH, ACK] Seq=2232 Ack=638 Win=64898 [TCP CHECKSUM INCORRECT] Len=32
347  25.986588  x.x.x.10  x.x.x.99     TCP  2226 > 14492 [PSH, ACK] Seq=638 Ack=2264 Win=63742 Len=167
348  25.987262  x.x.x.99  x.x.x.10     TCP  14492 > 2226 [PSH, ACK] Seq=2264 Ack=805 Win=64731 [TCP CHECKSUM INCORRECT] Len=512
349  25.998464  x.x.x.10  x.x.x.99     TCP  2226 > 14492 [PSH, ACK] Seq=805 Ack=2776 Win=63230 Len=89
350  25.998861  x.x.x.99  x.x.x.10     TCP  14492 > 2226 [PSH, ACK] Seq=2776 Ack=894 Win=64642 [TCP CHECKSUM INCORRECT] Len=46
351  26.009849  x.x.x.10  x.x.x.99     TCP  2226 > 14492 [PSH, ACK] Seq=894 Ack=2822 Win=63184 Len=17
352  26.010175  x.x.x.99  x.x.x.10     TCP  14492 > 2226 [PSH, ACK] Seq=2822 Ack=911 Win=64625 [TCP CHECKSUM INCORRECT] Len=80
353  26.021220  x.x.x.10  x.x.x.99     TCP  2226 > 14492 [PSH, ACK] Seq=911 Ack=2902 Win=63104 Len=33
354  26.022613  x.x.x.99  x.x.x.10     TCP  14492 > 2226 [PSH, ACK] Seq=2902 Ack=944 Win=64592 [TCP CHECKSUM INCORRECT] Len=498
355  26.034018  x.x.x.10  x.x.x.99     TCP  2226 > 14492 [PSH, ACK] Seq=944 Ack=3400 Win=64240 Len=89
356  26.046501  x.x.x.99  x.x.x.10     TCP  14493 > 2226 [SYN] Seq=0 Len=0 MSS=1260
357  26.057323  x.x.x.10  x.x.x.99     TCP  2226 > 14493 [SYN, ACK] Seq=0 Ack=1 Win=64240 Len=0 MSS=1460
358  26.057355  x.x.x.99  x.x.x.10     TCP  14493 > 2226 [ACK] Seq=1 Ack=1 Win=65535 [TCP CHECKSUM INCORRECT] Len=0
359  26.057661  x.x.x.99  x.x.x.10     TCP  14493 > 2226 [PSH, ACK] Seq=1 Ack=1 Win=65535 [TCP CHECKSUM INCORRECT] Len=42
361  26.068606  x.x.x.10  x.x.x.99     TCP  2226 > 14493 [PSH, ACK] Seq=1 Ack=43 Win=64198 Len=37
362  26.070087  x.x.x.99  x.x.x.10     TCP  14493 > 2226 [ACK] Seq=43 Ack=38 Win=65498 [TCP CHECKSUM INCORRECT] Len=1260
363  26.070113  x.x.x.99  x.x.x.10     TCP  14493 > 2226 [PSH, ACK] Seq=1303 Ack=38 Win=65498 [TCP CHECKSUM INCORRECT] Len=485
364  26.081336  x.x.x.10  x.x.x.99     TCP  2226 > 14493 [ACK] Seq=38 Ack=1788 Win=64240 Len=0
365  26.083330  x.x.x.10  x.x.x.99     TCP  2226 > 14493 [PSH, ACK] Seq=38 Ack=1788 Win=64240 Len=385
366  26.083943  x.x.x.99  x.x.x.10     TCP  14493 > 2226 [PSH, ACK] Seq=1788 Ack=423 Win=65113 [TCP CHECKSUM INCORRECT] Len=46
368  26.094921  x.x.x.10  x.x.x.99     TCP  2226 > 14493 [PSH, ACK] Seq=423 Ack=1834 Win=64194 Len=17
369  26.095317  x.x.x.99  x.x.x.10     TCP  14493 > 2226 [PSH, ACK] Seq=1834 Ack=440 Win=65096 [TCP CHECKSUM INCORRECT] Len=48
370  26.107553  x.x.x.10  x.x.x.99     TCP  2226 > 14493 [PSH, ACK] Seq=440 Ack=1882 Win=64146 Len=877
371  26.241285  x.x.x.99  x.x.x.10     TCP  14492 > 2226 [ACK] Seq=3400 Ack=1033 Win=64503 [TCP CHECKSUM INCORRECT] Len=0
372  26.241307  x.x.x.99  x.x.x.10     TCP  14493 > 2226 [ACK] Seq=1882 Ack=1317 Win=65535 [TCP CHECKSUM INCORRECT] Len=0
653  55.913838  x.x.x.99  x.x.x.10     TCP  [TCP Keep-Alive] 14492 > 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1
654  55.924547  x.x.x.10  x.x.x.99     TCP  [TCP Keep-Alive ACK] 2226 > 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0
910  85.887176  x.x.x.99  x.x.x.10     TCP  [TCP Keep-Alive] 14492 > 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1
911  85.898010  x.x.x.10  x.x.x.99     TCP  [TCP Keep-Alive ACK] 2226 > 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0
1155 115.859520 x.x.x.99  x.x.x.10     TCP  [TCP Keep-Alive] 14492 > 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1
1156 115.870285 x.x.x.10  x.x.x.99     TCP  [TCP Keep-Alive ACK] 2226 > 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0
1395 145.934403 x.x.x.99  x.x.x.10     TCP  [TCP Keep-Alive] 14492 > 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1
1396 145.945938 x.x.x.10  x.x.x.99     TCP  [TCP Keep-Alive ACK] 2226 > 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0
1649 175.906767 x.x.x.99  x.x.x.10     TCP  [TCP Keep-Alive] 14492 > 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1
1650 175.917741 x.x.x.10  x.x.x.99     TCP  [TCP Keep-Alive ACK] 2226 > 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0
1887 205.881080 x.x.x.99  x.x.x.10     TCP  [TCP Keep-Alive] 14492 > 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1
1888 205.891818 x.x.x.10  x.x.x.99     TCP  [TCP Keep-Alive ACK] 2226 > 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0
2112 235.854408 x.x.x.99  x.x.x.10     TCP  [TCP Keep-Alive] 14492 > 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1
2113 235.865482 x.x.x.10  x.x.x.99     TCP  [TCP Keep-Alive ACK] 2226 > 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0
2398 265.928342 x.x.x.99  x.x.x.10     TCP  [TCP Keep-Alive] 14492 > 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1
2399 265.939242 x.x.x.10  x.x.x.99     TCP  [TCP Keep-Alive ACK] 2226 > 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0
2671 295.900714 x.x.x.99  x.x.x.10     TCP  [TCP Keep-Alive] 14492 > 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1
2672 295.911590 x.x.x.10  x.x.x.99     TCP  [TCP Keep-Alive ACK] 2226 > 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0
2880 315.705029 x.x.x.10  x.x.x.99     TCP  2226 > 14493 [RST] Seq=1317 Len=0
2973 325.975607 x.x.x.99  x.x.x.10     TCP  [TCP Keep-Alive] 14492 > 2226 [ACK] Seq=3399 Ack=1033 Win=64503 Len=1
2974 325.986337 x.x.x.10  x.x.x.99     TCP  [TCP Keep-Alive ACK] 2226 > 14492 [ACK] Seq=1033 Ack=3400 Win=64240 Len=0
2975 326.154327 x.x.x.10  x.x.x.99     TCP  [TCP Keep-Alive] 2226 > 14492 [ACK] Seq=1032 Ack=3400 Win=64240 Len=1
2976 326.154350 x.x.x.99  x.x.x.10     TCP  [TCP Keep-Alive ACK] 14492 > 2226 [ACK] Seq=3400 Ack=1033 Win=64503 [TCP CHECKSUM INCORRECT] Len=0
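Since the RST shows up only after idle time, a quick way to confirm whether it is an idle-timeout somewhere in the path rather than anything SQL-specific is to hold a raw TCP connection open to the SQL port and see when it dies. A minimal sketch, assuming nothing beyond the Python standard library; the host, port, and check interval below are placeholders in the same style as the sanitized capture:

        # Idle-connection probe: connect to the SQL Server port, stay silent,
        # and report when the peer resets the connection.
        import socket
        import time

        HOST, PORT = "x.x.x.10", 2226   # substitute the real server and port
        IDLE_CHECK_SECONDS = 30

        sock = socket.create_connection((HOST, PORT))
        sock.settimeout(IDLE_CHECK_SECONDS)
        start = time.time()
        try:
            while True:
                try:
                    data = sock.recv(1)        # blocks until data, timeout, or reset
                    if not data:               # orderly FIN from the peer
                        print(f"Connection closed after {time.time() - start:.0f}s")
                        break
                except socket.timeout:
                    print(f"Still up after {time.time() - start:.0f}s idle")
        except ConnectionResetError:           # the RST seen in the capture
            print(f"RST received after {time.time() - start:.0f}s idle")
        finally:
            sock.close()

If the drop consistently lands at about 300 seconds regardless of client software, that points at a state-table or idle timeout somewhere between the hosts, or in the OS TCP settings, rather than at SQL Server itself.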

    Read the article

  • Sendmail Failing to Forward Locally Addressed Mail to Exchange Server

    - by DomainSoil
    I've recently gained employment as a web developer with a small company. What they neglected to tell me upon hire was that I would be administering the server along with my other daily duties. Now, truth be told, I'm not clueless when it comes to these things, but this is my first rodeo working with a rack server/console. However, I'm confident that I will be able to work through any solutions you provide.
    Short description: When a customer places an order via our (Magento CE 1.8.1.0) website, a copy of said order is supposed to be BCC'd to our sales manager. I say "supposed" because this was a working feature before the old administrator left.
    Long description: Shortly after I started, we had a server crash which required a server restart. After the restart, we noticed a few features on our site weren't working; all of those have been cleaned up except this one. I had to create an account on our server for root access. When a customer places an order, our site's software (Magento CE 1.8.1.0) is configured to BCC the customer's order email to our sales manager. We use a Microsoft Exchange 2007 Server for our mail, which is hosted on a different machine (in-house) that I don't have access to ATM, but I'm sure I could if needed. As far as I can tell, all other external emails work. Only INTERNAL email addresses fail to deliver. I know this because I've also tested my own internal address via our website: I set up an account with an internal email, made a test order, and never received the email. I changed my email for the account to an external GMail account, and received emails as expected.
    Let's dive into the logs and configs. For privacy/security reasons, names have been changed to the following:
    domain.com = our top-level domain
    email.local = our Exchange server
    example.com = any other TLD
    OLDadmin = our previous server administrator
    NEWadmin = me
    SALES@ = our sales manager
    Customer# = a customer
    Here's a list of the programs and config files that are relevant to this issue:

Server:
> [root@www ~]# cat /etc/centos-release
CentOS release 6.3 (final)

Sendmail:
> [root@www ~]# sendmail -d0.1 -bt < /dev/null
Version 8.14.4
========SYSTEM IDENTITY (after readcf)========
(short domain name) $w = domain
(canonical domain name) $j = domain.com
(subdomain name) $m = com
(node name) $k = www.domain.com

> [root@www ~]# rpm -qa | grep -i sendmail
sendmail-cf-8.14.4-8.e16.noarch
sendmail-8.14-4-8.e16.x86_64

nslookup:
> [root@www ~]# nslookup email.local
Name: email.local
Address: 192.168.1.50

hostname:
> [root@www ~]# hostname
www.domain.com

/etc/mail/access:
> [root@www ~]# vi /etc/mail/access
Connect:localhost.localdomain RELAY
Connect:localhost RELAY
Connect:127.0.0.1 RELAY

/etc/mail/domaintable:
> [root@www ~]# vi /etc/mail/domaintable
#

/etc/mail/local-host-names:
> [root@www ~]# vi /etc/mail/local-host-names
#

/etc/mail/mailertable:
> [root@www ~]# vi /etc/mail/mailertable
#

/etc/mail/sendmail.cf:
> [root@www ~]# vi /etc/mail/sendmail.cf
######################################################################
#####
##### DO NOT EDIT THIS FILE! Only edit the source .mc file.
#####
######################################################################
######################################################################
##### $Id: cfhead.m4,v 8.120 2009/01/23 22:39:21 ca Exp $ #####
##### $Id: cf.m4,v 8.32 1999/02/07 07:26:14 gshapiro Exp $ #####
##### setup for linux #####
##### $Id: linux.m4,v 8.13 2000/09/17 17:30:00 gshapiro Exp $ #####
##### $Id: local_procmail.m4,v 8.22 2002/11/17 04:24:19 ca Exp $ #####
##### $Id: no_default_msa.m4,v 8.2 2001/02/14 05:03:22 gshapiro Exp $ #####
##### $Id: smrsh.m4,v 8.14 1999/11/18 05:06:23 ca Exp $ #####
##### $Id: mailertable.m4,v 8.25 2002/06/27 23:23:57 gshapiro Exp $ #####
##### $Id: virtusertable.m4,v 8.23 2002/06/27 23:23:57 gshapiro Exp $ #####
##### $Id: redirect.m4,v 8.15 1999/08/06 01:47:36 gshapiro Exp $ #####
##### $Id: always_add_domain.m4,v 8.11 2000/09/12 22:00:53 ca Exp $ #####
##### $Id: use_cw_file.m4,v 8.11 2001/08/26 20:58:57 gshapiro Exp $ #####
##### $Id: use_ct_file.m4,v 8.11 2001/08/26 20:58:57 gshapiro Exp $ #####
##### $Id: local_procmail.m4,v 8.22 2002/11/17 04:24:19 ca Exp $ #####
##### $Id: access_db.m4,v 8.27 2006/07/06 21:10:10 ca Exp $ #####
##### $Id: blacklist_recipients.m4,v 8.13 1999/04/02 02:25:13 gshapiro Exp $ #####
##### $Id: accept_unresolvable_domains.m4,v 8.10 1999/02/07 07:26:07 gshapiro Exp $ #####
##### $Id: masquerade_envelope.m4,v 8.9 1999/02/07 07:26:10 gshapiro Exp $ #####
##### $Id: masquerade_entire_domain.m4,v 8.9 1999/02/07 07:26:10 gshapiro Exp $ #####
##### $Id: proto.m4,v 8.741 2009/12/11 00:04:53 ca Exp $ #####

# level 10 config file format
V10/Berkeley

# override file safeties - setting this option compromises system security,
# addressing the actual file configuration problem is preferred
# need to set this before any file actions are encountered in the cf file
#O DontBlameSendmail=safe

# default LDAP map specification
# need to set this now before any LDAP maps are defined
#O LDAPDefaultSpec=-h localhost

##################
#   local info   #
##################

# my LDAP cluster
# need to set this before any LDAP lookups are done (including classes)
#D{sendmailMTACluster}$m

Cwlocalhost
# file containing names of hosts for which we receive email
Fw/etc/mail/local-host-names

# my official domain name
# ... define this only if sendmail cannot automatically determine your domain
#Dj$w.Foo.COM

# host/domain names ending with a token in class P are canonical
CP.

# "Smart" relay host (may be null)
DSemail.local

# operators that cannot be in local usernames (i.e., network indicators)
CO @ % !

# a class with just dot (for identifying canonical names)
C..

# a class with just a left bracket (for identifying domain literals)
C[[

# access_db acceptance class
C{Accept}OK RELAY
C{ResOk}OKR

# Hosts for which relaying is permitted ($=R)
FR-o /etc/mail/relay-domains

# arithmetic map
Karith arith

# macro storage map
Kmacro macro

# possible values for TLS_connection in access map
C{Tls}VERIFY ENCR

# who I send unqualified names to if FEATURE(stickyhost) is used
# (null means deliver locally)
DRemail.local.

# who gets all local email traffic
# ($R has precedence for unqualified names if FEATURE(stickyhost) is used)
DHemail.local.
# dequoting map
Kdequote dequote

# class E: names that should be exposed as from this host, even if we masquerade
# class L: names that should be delivered locally, even if we have a relay
# class M: domains that should be converted to $M
# class N: domains that should not be converted to $M
#CL root
C{E}root
C{w}localhost.localdomain
C{M}domain.com

# who I masquerade as (null for no masquerading) (see also $=M)
DMdomain.com

# my name for error messages
DnMAILER-DAEMON

# Mailer table (overriding domains)
Kmailertable hash -o /etc/mail/mailertable.db

# Virtual user table (maps incoming users)
Kvirtuser hash -o /etc/mail/virtusertable.db

CPREDIRECT

# Access list database (for spam stomping)
Kaccess hash -T<TMPF> -o /etc/mail/access.db

# Configuration version number
DZ8.14.4

/etc/mail/sendmail.mc:
> [root@www ~]# vi /etc/mail/sendmail.mc
divert(-1)dnl
dnl #
dnl # This is the sendmail macro config file for m4. If you make changes to
dnl # /etc/mail/sendmail.mc, you will need to regenerate the
dnl # /etc/mail/sendmail.cf file by confirming that the sendmail-cf package is
dnl # installed and then performing a
dnl #
dnl #     /etc/mail/make
dnl #
include(`/usr/share/sendmail-cf/m4/cf.m4')dnl
VERSIONID(`setup for linux')dnl
OSTYPE(`linux')dnl
dnl #
dnl # Do not advertize sendmail version.
dnl #
dnl define(`confSMTP_LOGIN_MSG', `$j Sendmail; $b')dnl
dnl #
dnl # default logging level is 9, you might want to set it higher to
dnl # debug the configuration
dnl #
dnl define(`confLOG_LEVEL', `9')dnl
dnl #
dnl # Uncomment and edit the following line if your outgoing mail needs to
dnl # be sent out through an external mail server:
dnl #
define(`SMART_HOST', `email.local')dnl
dnl #
define(`confDEF_USER_ID', ``8:12'')dnl
dnl define(`confAUTO_REBUILD')dnl
define(`confTO_CONNECT', `1m')dnl
define(`confTRY_NULL_MX_LIST', `True')dnl
define(`confDONT_PROBE_INTERFACES', `True')dnl
define(`PROCMAIL_MAILER_PATH', `/usr/bin/procmail')dnl
define(`ALIAS_FILE', `/etc/aliases')dnl
define(`STATUS_FILE', `/var/log/mail/statistics')dnl
define(`UUCP_MAILER_MAX', `2000000')dnl
define(`confUSERDB_SPEC', `/etc/mail/userdb.db')dnl
define(`confPRIVACY_FLAGS', `authwarnings,novrfy,noexpn,restrictqrun')dnl
define(`confAUTH_OPTIONS', `A')dnl
dnl #
dnl # The following allows relaying if the user authenticates, and disallows
dnl # plaintext authentication (PLAIN/LOGIN) on non-TLS links
dnl #
dnl define(`confAUTH_OPTIONS', `A p')dnl
dnl #
dnl # PLAIN is the preferred plaintext authentication method and used by
dnl # Mozilla Mail and Evolution, though Outlook Express and other MUAs do
dnl # use LOGIN. Other mechanisms should be used if the connection is not
dnl # guaranteed secure.
dnl # Please remember that saslauthd needs to be running for AUTH.
dnl #
dnl TRUST_AUTH_MECH(`EXTERNAL DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl
dnl define(`confAUTH_MECHANISMS', `EXTERNAL GSSAPI DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')dnl
dnl #
dnl # Rudimentary information on creating certificates for sendmail TLS:
dnl #     cd /etc/pki/tls/certs; make sendmail.pem
dnl # Complete usage:
dnl #     make -C /etc/pki/tls/certs usage
dnl #
dnl define(`confCACERT_PATH', `/etc/pki/tls/certs')dnl
dnl define(`confCACERT', `/etc/pki/tls/certs/ca-bundle.crt')dnl
dnl define(`confSERVER_CERT', `/etc/pki/tls/certs/sendmail.pem')dnl
dnl define(`confSERVER_KEY', `/etc/pki/tls/certs/sendmail.pem')dnl
dnl #
dnl # This allows sendmail to use a keyfile that is shared with OpenLDAP's
dnl # slapd, which requires the file to be readble by group ldap
dnl #
dnl define(`confDONT_BLAME_SENDMAIL', `groupreadablekeyfile')dnl
dnl #
dnl define(`confTO_QUEUEWARN', `4h')dnl
dnl define(`confTO_QUEUERETURN', `5d')dnl
dnl define(`confQUEUE_LA', `12')dnl
dnl define(`confREFUSE_LA', `18')dnl
define(`confTO_IDENT', `0')dnl
dnl FEATURE(delay_checks)dnl
FEATURE(`no_default_msa', `dnl')dnl
FEATURE(`smrsh', `/usr/sbin/smrsh')dnl
FEATURE(`mailertable', `hash -o /etc/mail/mailertable.db')dnl
FEATURE(`virtusertable', `hash -o /etc/mail/virtusertable.db')dnl
FEATURE(redirect)dnl
FEATURE(always_add_domain)dnl
FEATURE(use_cw_file)dnl
FEATURE(use_ct_file)dnl
dnl #
dnl # The following limits the number of processes sendmail can fork to accept
dnl # incoming messages or process its message queues to 20.) sendmail refuses
dnl # to accept connections once it has reached its quota of child processes.
dnl #
dnl define(`confMAX_DAEMON_CHILDREN', `20')dnl
dnl #
dnl # Limits the number of new connections per second. This caps the overhead
dnl # incurred due to forking new sendmail processes. May be useful against
dnl # DoS attacks or barrages of spam. (As mentioned below, a per-IP address
dnl # limit would be useful but is not available as an option at this writing.)
dnl #
dnl define(`confCONNECTION_RATE_THROTTLE', `3')dnl
dnl #
dnl # The -t option will retry delivery if e.g. the user runs over his quota.
dnl #
FEATURE(local_procmail, `', `procmail -t -Y -a $h -d $u')dnl
FEATURE(`access_db', `hash -T<TMPF> -o /etc/mail/access.db')dnl
FEATURE(`blacklist_recipients')dnl
EXPOSED_USER(`root')dnl
dnl #
dnl # For using Cyrus-IMAPd as POP3/IMAP server through LMTP delivery uncomment
dnl # the following 2 definitions and activate below in the MAILER section the
dnl # cyrusv2 mailer.
dnl #
dnl define(`confLOCAL_MAILER', `cyrusv2')dnl
dnl define(`CYRUSV2_MAILER_ARGS', `FILE /var/lib/imap/socket/lmtp')dnl
dnl #
dnl # The following causes sendmail to only listen on the IPv4 loopback address
dnl # 127.0.0.1 and not on any other network devices. Remove the loopback
dnl # address restriction to accept email from the internet or intranet.
dnl #
DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')dnl
dnl #
dnl # The following causes sendmail to additionally listen to port 587 for
dnl # mail from MUAs that authenticate. Roaming users who can't reach their
dnl # preferred sendmail daemon due to port 25 being blocked or redirected find
dnl # this useful.
dnl #
dnl DAEMON_OPTIONS(`Port=submission, Name=MSA, M=Ea')dnl
dnl #
dnl # The following causes sendmail to additionally listen to port 465, but
dnl # starting immediately in TLS mode upon connecting. Port 25 or 587 followed
dnl # by STARTTLS is preferred, but roaming clients using Outlook Express can't
dnl # do STARTTLS on ports other than 25.
dnl # Mozilla Mail can ONLY use STARTTLS
dnl # and doesn't support the deprecated smtps; Evolution <1.1.1 uses smtps
dnl # when SSL is enabled-- STARTTLS support is available in version 1.1.1.
dnl #
dnl # For this to work your OpenSSL certificates must be configured.
dnl #
dnl DAEMON_OPTIONS(`Port=smtps, Name=TLSMTA, M=s')dnl
dnl #
dnl # The following causes sendmail to additionally listen on the IPv6 loopback
dnl # device. Remove the loopback address restriction listen to the network.
dnl #
dnl DAEMON_OPTIONS(`port=smtp,Addr=::1, Name=MTA-v6, Family=inet6')dnl
dnl #
dnl # enable both ipv6 and ipv4 in sendmail:
dnl #
dnl DAEMON_OPTIONS(`Name=MTA-v4, Family=inet, Name=MTA-v6, Family=inet6')
dnl #
dnl # We strongly recommend not accepting unresolvable domains if you want to
dnl # protect yourself from spam. However, the laptop and users on computers
dnl # that do not have 24x7 DNS do need this.
dnl #
FEATURE(`accept_unresolvable_domains')dnl
dnl #
dnl FEATURE(`relay_based_on_MX')dnl
dnl #
dnl # Also accept email sent to "localhost.localdomain" as local email.
dnl #
LOCAL_DOMAIN(`localhost.localdomain')dnl
dnl #
dnl # The following example makes mail from this host and any additional
dnl # specified domains appear to be sent from mydomain.com
dnl #
MASQUERADE_AS(`domain.com')dnl
dnl #
dnl # masquerade not just the headers, but the envelope as well
dnl FEATURE(masquerade_envelope)dnl
dnl #
dnl # masquerade not just @mydomainalias.com, but @*.mydomainalias.com as well
dnl #
FEATURE(masquerade_entire_domain)dnl
dnl #
MASQUERADE_DOMAIN(domain.com)dnl
dnl MASQUERADE_DOMAIN(localhost.localdomain)dnl
dnl MASQUERADE_DOMAIN(mydomainalias.com)dnl
dnl MASQUERADE_DOMAIN(mydomain.lan)dnl
MAILER(smtp)dnl
MAILER(procmail)dnl
dnl MAILER(cyrusv2)dnl

/etc/mail/trusted-users:
> [root@www ~]# vi /etc/mail/trusted-users
#

/etc/mail/virtusertable:
> [root@www ~]# vi /etc/mail/virtusertable
[email protected] [email protected]
[email protected] [email protected]

/etc/hosts:
> [root@www ~]# vi /etc/hosts
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
192.168.1.50 email.local

    I've only included the "local info" part of sendmail.cf, to save space. If there are any files that I've missed, please advise so I may produce them. Now that that's out of the way, let's look at some entries from /var/log/maillog. The first entry is from an order BEFORE the crash, when the site was working as expected.

##Order 200005374 Aug 5, 2014 7:06:38 AM##
Aug 5 07:06:39 www sendmail[26149]: s75C6dqB026149: from=OLDadmin, size=11091, class=0, nrcpts=2, msgid=<[email protected]>, relay=OLDadmin@localhost
Aug 5 07:06:39 www sendmail[26150]: s75C6dXe026150: from=<[email protected]>, size=11257, class=0, nrcpts=2, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.0.1]
Aug 5 07:06:39 www sendmail[26149]: s75C6dqB026149: [email protected],=?utf-8?B?dGhvbWFzICBHaWxsZXNwaWU=?= <[email protected]>, ctladdr=OLDadmin (501/501), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=71091, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s75C6dXe026150 Message accepted for delivery)
Aug 5 07:06:40 www sendmail[26152]: s75C6dXe026150: to=<[email protected]>,<[email protected]>, delay=00:00:01, xdelay=00:00:01, mailer=relay, pri=161257, relay=email.local. [192.168.1.50], dsn=2.0.0, stat=Sent ( <[email protected]> Queued mail for delivery)

    This next entry from maillog is from an order AFTER the crash.
##Order 200005375 Aug 5, 2014 9:45:25 AM##
Aug 5 09:45:26 www sendmail[30021]: s75EjQ4O030021: from=OLDadmin, size=11344, class=0, nrcpts=2, msgid=<[email protected]>, relay=OLDadmin@localhost
Aug 5 09:45:26 www sendmail[30022]: s75EjQm1030022: <[email protected]>... User unknown
Aug 5 09:45:26 www sendmail[30021]: s75EjQ4O030021: [email protected], ctladdr=OLDadmin (501/501), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=71344, relay=[127.0.0.1] [127.0.0.1], dsn=5.1.1, stat=User unknown
Aug 5 09:45:26 www sendmail[30022]: s75EjQm1030022: from=<[email protected]>, size=11500, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.0.1]
Aug 5 09:45:26 www sendmail[30021]: s75EjQ4O030021: to==?utf-8?B?S2VubmV0aCBCaWViZXI=?= <[email protected]>, ctladdr=OLDadmin (501/501), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=71344, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s75EjQm1030022 Message accepted for delivery)
Aug 5 09:45:26 www sendmail[30021]: s75EjQ4O030021: s75EjQ4P030021: DSN: User unknown
Aug 5 09:45:26 www sendmail[30022]: s75EjQm3030022: <[email protected]>... User unknown
Aug 5 09:45:26 www sendmail[30021]: s75EjQ4P030021: to=OLDadmin, delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=42368, relay=[127.0.0.1] [127.0.0.1], dsn=5.1.1, stat=User unknown
Aug 5 09:45:26 www sendmail[30022]: s75EjQm3030022: from=<>, size=12368, class=0, nrcpts=0, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.0.1]
Aug 5 09:45:26 www sendmail[30021]: s75EjQ4P030021: s75EjQ4Q030021: return to sender: User unknown
Aug 5 09:45:26 www sendmail[30022]: s75EjQm5030022: from=<>, size=14845, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.0.1]
Aug 5 09:45:26 www sendmail[30021]: s75EjQ4Q030021: to=postmaster, delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=43392, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s75EjQm5030022 Message accepted for delivery)
Aug 5 09:45:26 www sendmail[30025]: s75EjQm5030022: to=root, delay=00:00:00, xdelay=00:00:00, mailer=local, pri=45053, dsn=2.0.0, stat=Sent
Aug 5 09:45:27 www sendmail[30024]: s75EjQm1030022: to=<[email protected]>, delay=00:00:01, xdelay=00:00:01, mailer=relay, pri=131500, relay=email.local. [192.168.1.50], dsn=2.0.0, stat=Sent ( <[email protected]> Queued mail for delivery)

    To add a little more, I think I've pinpointed the actual crash event.
##THE CRASH##
Aug 5 09:39:46 www sendmail[3251]: restarting /usr/sbin/sendmail due to signal
Aug 5 09:39:46 www sm-msp-queue[3260]: restarting /usr/sbin/sendmail due to signal
Aug 5 09:39:46 www sm-msp-queue[29370]: starting daemon (8.14.4): queueing@01:00:00
Aug 5 09:39:47 www sendmail[29372]: starting daemon (8.14.4): SMTP+queueing@01:00:00
Aug 5 09:40:02 www sendmail[29465]: s75Ee2vT029465: Authentication-Warning: www.domain.com: OLDadmin set sender to root using -f
Aug 5 09:40:02 www sendmail[29464]: s75Ee2IF029464: Authentication-Warning: www.domain.com: OLDadmin set sender to root using -f
Aug 5 09:40:02 www sendmail[29465]: s75Ee2vT029465: from=root, size=1426, class=0, nrcpts=1, msgid=<[email protected]>, relay=OLDadmin@localhost
Aug 5 09:40:02 www sendmail[29464]: s75Ee2IF029464: from=root, size=1426, class=0, nrcpts=1, msgid=<[email protected]>, relay=OLDadmin@localhost
Aug 5 09:40:02 www sendmail[29466]: s75Ee23t029466: from=<[email protected]>, size=1784, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.0.1]
Aug 5 09:40:02 www sendmail[29466]: s75Ee23t029466: to=<[email protected]>, delay=00:00:00, mailer=local, pri=31784, dsn=4.4.3, stat=queued
Aug 5 09:40:02 www sendmail[29467]: s75Ee2wh029467: from=<[email protected]>, size=1784, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA, relay=localhost.localdomain [127.0.0.1]
Aug 5 09:40:02 www sendmail[29467]: s75Ee2wh029467: to=<[email protected]>, delay=00:00:00, mailer=local, pri=31784, dsn=4.4.3, stat=queued
Aug 5 09:40:02 www sendmail[29464]: s75Ee2IF029464: to=OLDadmin, ctladdr=root (0/0), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=31426, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s75Ee23t029466 Message accepted for delivery)
Aug 5 09:40:02 www sendmail[29465]: s75Ee2vT029465: to=OLDadmin, ctladdr=root (0/0), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=31426, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (s75Ee2wh029467 Message accepted for delivery)
Aug 5 09:40:06 www sm-msp-queue[29370]: restarting /usr/sbin/sendmail due to signal
Aug 5 09:40:06 www sendmail[29372]: restarting /usr/sbin/sendmail due to signal
Aug 5 09:40:06 www sm-msp-queue[29888]: starting daemon (8.14.4): queueing@01:00:00
Aug 5 09:40:06 www sendmail[29890]: starting daemon (8.14.4): SMTP+queueing@01:00:00
Aug 5 09:40:06 www sendmail[29891]: s75Ee23t029466: to=<[email protected]>, delay=00:00:04, mailer=local, pri=121784, dsn=5.1.1, stat=User unknown
Aug 5 09:40:06 www sendmail[29891]: s75Ee23t029466: s75Ee6xY029891: DSN: User unknown
Aug 5 09:40:06 www sendmail[29891]: s75Ee6xY029891: to=<[email protected]>, delay=00:00:00, xdelay=00:00:00, mailer=local, pri=33035, dsn=2.0.0, stat=Sent
Aug 5 09:40:06 www sendmail[29891]: s75Ee2wh029467: to=<[email protected]>, delay=00:00:04, mailer=local, pri=121784, dsn=5.1.1, stat=User unknown
Aug 5 09:40:06 www sendmail[29891]: s75Ee2wh029467: s75Ee6xZ029891: DSN: User unknown
Aug 5 09:40:06 www sendmail[29891]: s75Ee6xZ029891: to=<[email protected]>, delay=00:00:00, xdelay=00:00:00, mailer=local, pri=33035, dsn=2.0.0, stat=Sent

    Something to note about the maillogs: before the crash, the msgid included localhost.localdomain; after the crash, it has been domain.com. A small delivery-test sketch follows below. Thanks to all who take the time to read and look into this issue; I appreciate it, and look forward to tackling it together.
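One way to take Magento and PHP out of the picture while debugging is to hand a message for an internal address directly to the local sendmail daemon and watch what the maillog does with it. A minimal sketch, assuming sendmail is listening on localhost:25; the addresses below are placeholders in the same style as the sanitized logs above:

        # Hand a message for an internal recipient to the local sendmail
        # instance and echo the full SMTP conversation; a rejected RCPT
        # raises smtplib.SMTPRecipientsRefused.
        import smtplib

        sender = "webserver@domain.com"
        recipient = "SALES@domain.com"   # an internal address that currently bounces
        body = "Subject: internal delivery test\r\n\r\nTesting local -> Exchange relay."

        with smtplib.SMTP("localhost", 25) as smtp:
            smtp.set_debuglevel(1)       # print each SMTP command/response to stderr
            smtp.sendmail(sender, recipient, body)

If sendmail accepts the recipient here but the message later bounces with dsn=5.1.1 from mailer=local, that would point at the routing decision itself (local delivery instead of relaying to the email.local smart host), which is what the post-crash log lines above appear to show.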

    Read the article

< Previous Page | 452 453 454 455 456 457  | Next Page >