Search Results

Search found 4753 results on 191 pages for 'master slave'.

  • Git workflow for two tight-knit projects

    - by Pioul
    Two very similar projects

    I'm maintaining an online Markdown editor using Git as its RCS (and, incidentally, made available on GitHub). From this web app, I've created a Chrome app: the code is the same, aside from some Chrome technicalities. I care about open-sourcing these two projects. Still, since the Chrome app's code is the same as the web app's except for some dull details, I initially chose to (1) not publish the Chrome app on GitHub, and (2) not use Git to manage its code. Instead, I would manually review the web app's commits, then replicate the few changes in the Chrome app.

    … slightly drifting apart

    However, I've decided to add a feature to the Chrome app only. So, even though both codebases will remain broadly similar, they'll diverge enough to make me reconsider the rationale behind my initial decision not to version control or share the Chrome app's source code. Since I'm now willing to use Git to version control both apps, and I want to share both of them on GitHub, how should I go about it?

    - Should I use two different repositories, or one repo with two long-running branches? What would be the pros and cons of each approach in this context?
    - What would be the easiest/fastest way to regularly "import" commits from the web app to the Chrome app, since the web app is going to remain the master branch? Is cherry-picking the only solution? (A sketch follows below.)
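    A minimal sketch of the one-repo, two-branch approach (branch names are illustrative; the commands assume the web app stays on master):

        # Fork a long-running branch for the Chrome app off the web app:
        git checkout -b chrome-app master
        # ...commit the Chrome-specific technicalities on chrome-app...

        # Regularly import web-app work, either commit by commit:
        git checkout chrome-app
        git cherry-pick <sha-from-master>

        # ...or, since both branches share history, merge everything at once
        # and resolve only the Chrome-specific drift:
        git merge master

    With two separate repositories the same import works through a remote (git remote add web ../web-app, git fetch web, then cherry-pick or merge), so cherry-picking is not the only option: merging master in, or rebasing the Chrome-only commits onto it, does the job too.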

  • mysql cluster problem in ubuntu

    - by Firman
    I have a problem while installing and configuring MySQL Cluster running on Ubuntu 10.10. This is the configuration for cluster management:

    [NDBD DEFAULT]
    NoOfReplicas=2
    DataMemory=10MB
    IndexMemory=25MB
    MaxNoOfTables=256
    MaxNoOfOrderedIndexes=256
    MaxNoOfUniqueHashIndexes=128
    [MYSQLD DEFAULT]
    [NDB_MGMD DEFAULT]
    [TCP DEFAULT]
    [NDB_MGMD]
    Id=1 # the NDB Management Node (this one)
    HostName=192.168.10.101
    [NDBD]
    Id=2 # the first NDB Data Node
    HostName=192.168.10.11
    DataDir=/var/lib/mysql-cluster
    [NDBD]
    Id=3 # the second NDB Data Node
    HostName=192.168.10.12
    DataDir=/var/lib/mysql-cluster
    [MYSQLD]
    [MYSQLD]

    and this is the configuration for both nodes:

    [mysqld]
    ndbcluster
    ndb-connectstring=192.168.10.101 # the IP of the MANAGEMENT (THIRD) SERVER
    [mysql_cluster]
    ndb-connectstring=192.168.10.101 # the IP of the MANAGEMENT (THIRD) SERVER

    After starting all the nodes and the management server, I run ndb_mgm, type the 'show' command, and something like this appears:

    ndb_mgm> show
    Connected to Management Server at: localhost:1186
    Cluster Configuration
    ---------------------
    [ndbd(NDB)] 2 node(s)
    id=2 @192.168.10.11 (mysql-5.1.39 ndb-7.0.9, Nodegroup: 0, Master)
    id=3 @192.168.10.12 (mysql-5.1.39 ndb-7.0.9, Nodegroup: 0)
    [ndb_mgmd(MGM)] 1 node(s)
    id=1 @192.168.10.101 (mysql-5.1.39 ndb-7.0.9)
    [mysqld(API)] 1 node(s)
    id=4 (not connected, accepting connect from 192.168.10.101)

    Look at the last two lines: they are not what http://dev.mysql.com/tech-resources/articles/mysql-cluster-for-two-servers.html shows (see point 4). Has anyone ever had this problem?
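    Not an authoritative diagnosis, but "not connected" on the API slot usually means the SQL nodes never registered with the management server. A sketch of two things worth checking: give the bare [MYSQLD] slots explicit hosts (assuming mysqld is meant to run on the two data-node machines), and remember that this Cluster version caches its configuration, so the management node must be told to re-read config.ini:

        [MYSQLD]
        HostName=192.168.10.11
        [MYSQLD]
        HostName=192.168.10.12

        # Restart the management node so the edited config.ini is re-read
        # (ndb_mgmd caches it in ndb_1_config.bin.* files):
        ndb_mgmd --reload
        # then start mysqld on each SQL node so it registers with the cluster
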

  • How to manage enterprise network of Linux machines?

    - by killy9999
    I work at the university. In my institute we have six computer laboratories used for teaching. Each lab has almost 20 computers, which gives over 100 machines in total. The computers run either the Windows XP or Windows 7 Enterprise operating system. We use Symantec Ghost to manage all of them: each computer has a Ghost client installed, which lets us control it over the network. Every six months we restore a master image on one of the computers in a lab, update that image, and distribute it over the network to all the computers in the laboratory. Thanks to the Ghost client this is done automatically with just a few clicks. Recently I suggested that it would be good to have Linux installed in the laboratories. The administrators were concerned that we would not be able to manage that many computers if each one had to be updated manually. The question is: how do you manage such a huge network of Linux machines in an automated way? To make the description of our network more complete, I'll add that all students have their accounts (a few thousand users) on a central server, accessed via LDAP. To use a computer in a laboratory, each student has to log in using his own account.
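    On Linux this is usually solved with configuration-management or imaging tools (Puppet, cfengine, FAI, Clonezilla and similar), which push a desired state to every machine instead of re-imaging each one. As a minimal sketch of the idea, assuming key-based root SSH access to every lab machine and illustrative hostnames:

        #!/bin/bash
        # Run the same update on every lab machine in parallel.
        for host in lab{1..6}-pc{01..20}; do
            ssh -o BatchMode=yes root@"$host" \
                'apt-get update && DEBIAN_FRONTEND=noninteractive apt-get -y upgrade' &
        done
        wait  # block until every machine has finished

    The dedicated tools add what this sketch lacks (reporting, retries, declarative state), which starts to matter at 100+ machines; the central LDAP accounts work unchanged, since Linux supports them via PAM/NSS.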

  • Added splash screen code to my package

    - by Youssef
    Please, I need support adding splash screen code to my package (see the sketch after the code below). /* * T24_Transformer_FormView.java */ package t24_transformer_form; import org.jdesktop.application.Action; import org.jdesktop.application.ResourceMap; import org.jdesktop.application.SingleFrameApplication; import org.jdesktop.application.FrameView; import org.jdesktop.application.TaskMonitor; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import javax.swing.filechooser.FileNameExtensionFilter; import javax.swing.filechooser.FileFilter; // old T24 Transformer imports import java.io.File; import java.io.FileWriter; import java.io.StringWriter; import java.text.SimpleDateFormat; import java.util.ArrayList; import java.util.Date; import java.util.HashMap; import java.util.Iterator; //import java.util.Properties; import java.util.StringTokenizer; import javax.swing.*; import javax.xml.parsers.DocumentBuilder; import javax.xml.parsers.DocumentBuilderFactory; import javax.xml.transform.Result; import javax.xml.transform.Source; import javax.xml.transform.Transformer; import javax.xml.transform.TransformerFactory; import javax.xml.transform.dom.DOMSource; import javax.xml.transform.stream.StreamResult; import org.apache.log4j.Logger; import org.apache.log4j.PropertyConfigurator; import org.w3c.dom.Document; import org.w3c.dom.DocumentFragment; import org.w3c.dom.Element; import org.w3c.dom.Node; import org.w3c.dom.NodeList; import com.ejada.alinma.edh.xsdtransform.util.ConfigKeys; import com.ejada.alinma.edh.xsdtransform.util.XSDElement; import com.sun.org.apache.xml.internal.serialize.OutputFormat; import com.sun.org.apache.xml.internal.serialize.XMLSerializer; /* * The application's main frame. */ public class T24_Transformer_FormView extends FrameView { /** * static holders for application-level utilities */ //private static Properties appProps; private static Logger appLogger; /** * */ private StringBuffer columnsCSV = null; private ArrayList<String> singleValueTableColumns = null; private HashMap<String, String> multiValueTablesSQL = null; private HashMap<Object, HashMap<String, Object>> groupAttrs = null; private ArrayList<XSDElement> xsdElementsList = null; /** * initialization */ private void init() /*throws Exception*/ { // init the properties object //FileReader in = new FileReader(appConfigPropsPath); //appProps.load(in); // log4j.properties constant String PROP_LOG4J_CONFIG_FILE = "log4j.properties"; // init the logger if ((PROP_LOG4J_CONFIG_FILE != null) && (!PROP_LOG4J_CONFIG_FILE.equals(""))) { PropertyConfigurator.configure(PROP_LOG4J_CONFIG_FILE); if (appLogger == null) { appLogger = Logger.getLogger(T24_Transformer_FormView.class.getName()); } appLogger.info("Application initialization successful."); } columnsCSV = new StringBuffer(ConfigKeys.FIELD_TAG + "," + ConfigKeys.FIELD_NUMBER + "," + ConfigKeys.FIELD_DATA_TYPE + "," + ConfigKeys.FIELD_FMT + "," + ConfigKeys.FIELD_LEN + "," + ConfigKeys.FIELD_INPUT_LEN + "," + ConfigKeys.FIELD_GROUP_NUMBER + "," + ConfigKeys.FIELD_MV_GROUP_NUMBER + "," + ConfigKeys.FIELD_SHORT_NAME + "," + ConfigKeys.FIELD_NAME + "," + ConfigKeys.FIELD_COLUMN_NAME + "," + ConfigKeys.FIELD_GROUP_NAME + "," + ConfigKeys.FIELD_MV_GROUP_NAME + "," + ConfigKeys.FIELD_JUSTIFICATION + "," + ConfigKeys.FIELD_TYPE + "," + ConfigKeys.FIELD_SINGLE_OR_MULTI + System.getProperty("line.separator")); singleValueTableColumns = new ArrayList<String>(); singleValueTableColumns.add(ConfigKeys.COLUMN_XPK_ROW + ConfigKeys.DELIMITER_COLUMN_TYPE +
ConfigKeys.DATA_TYPE_XSD_NUMERIC); multiValueTablesSQL = new HashMap<String, String>(); groupAttrs = new HashMap<Object, HashMap<String, Object>>(); xsdElementsList = new ArrayList<XSDElement>(); } /** * initialize the <code>DocumentBuilder</code> and read the XSD file * * @param docPath * @return the <code>Document</code> object representing the read XSD file */ private Document retrieveDoc(String docPath) { Document xsdDoc = null; File file = new File(docPath); try { DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder(); xsdDoc = builder.parse(file); } catch (Exception e) { appLogger.error(e.getMessage()); } return xsdDoc; } /** * perform the iteration/modification on the document * iterate to the level which contains all the elements (Single-Value, and Groups) and start processing each * * @param xsdDoc * @return */ private Document processDoc(Document xsdDoc) { ArrayList<Object> newElementsList = new ArrayList<Object>(); HashMap<String, Object> docAttrMap = new HashMap<String, Object>(); Element sequenceElement = null; Element schemaElement = null; // get document's root element NodeList nodes = xsdDoc.getChildNodes(); for (int i = 0; i < nodes.getLength(); i++) { if (ConfigKeys.TAG_SCHEMA.equals(nodes.item(i).getNodeName())) { schemaElement = (Element) nodes.item(i); break; } } // process the document (change single-value elements, collect list of new elements to be added) for (int i1 = 0; i1 < schemaElement.getChildNodes().getLength(); i1++) { Node childLevel1 = (Node) schemaElement.getChildNodes().item(i1); // <ComplexType> element if (childLevel1.getNodeName().equals(ConfigKeys.TAG_COMPLEX_TYPE)) { // first, get the main attributes and put it in the csv file for (int i6 = 0; i6 < childLevel1.getChildNodes().getLength(); i6++) { Node child6 = childLevel1.getChildNodes().item(i6); if (ConfigKeys.TAG_ATTRIBUTE.equals(child6.getNodeName())) { if (child6.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME) != null) { String attrName = child6.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME).getNodeValue(); if (((Element) child6).getElementsByTagName(ConfigKeys.TAG_SIMPLE_TYPE).getLength() != 0) { Node simpleTypeElement = ((Element) child6).getElementsByTagName(ConfigKeys.TAG_SIMPLE_TYPE) .item(0); if (((Element) simpleTypeElement).getElementsByTagName(ConfigKeys.TAG_RESTRICTION).getLength() != 0) { Node restrictionElement = ((Element) simpleTypeElement).getElementsByTagName( ConfigKeys.TAG_RESTRICTION).item(0); if (((Element) restrictionElement).getElementsByTagName(ConfigKeys.TAG_MAX_LENGTH).getLength() != 0) { Node maxLengthElement = ((Element) restrictionElement).getElementsByTagName( ConfigKeys.TAG_MAX_LENGTH).item(0); HashMap<String, String> elementProperties = new HashMap<String, String>(); elementProperties.put(ConfigKeys.FIELD_TAG, attrName); elementProperties.put(ConfigKeys.FIELD_NUMBER, "0"); elementProperties.put(ConfigKeys.FIELD_DATA_TYPE, ConfigKeys.DATA_TYPE_XSD_STRING); elementProperties.put(ConfigKeys.FIELD_FMT, ""); elementProperties.put(ConfigKeys.FIELD_NAME, attrName); elementProperties.put(ConfigKeys.FIELD_SHORT_NAME, attrName); elementProperties.put(ConfigKeys.FIELD_COLUMN_NAME, attrName); elementProperties.put(ConfigKeys.FIELD_SINGLE_OR_MULTI, "S"); elementProperties.put(ConfigKeys.FIELD_LEN, maxLengthElement.getAttributes().getNamedItem( ConfigKeys.ATTR_VALUE).getNodeValue()); elementProperties.put(ConfigKeys.FIELD_INPUT_LEN, maxLengthElement.getAttributes() .getNamedItem(ConfigKeys.ATTR_VALUE).getNodeValue()); 
constructElementRow(elementProperties); // add the attribute as a column in the single-value table singleValueTableColumns.add(attrName + ConfigKeys.DELIMITER_COLUMN_TYPE + ConfigKeys.DATA_TYPE_XSD_STRING + ConfigKeys.DELIMITER_COLUMN_TYPE + maxLengthElement.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE).getNodeValue()); // add the attribute as an element in the elements list addToElementsList(attrName, attrName); appLogger.debug("added attribute: " + attrName); } } } } } } // now, loop on the elements and process them for (int i2 = 0; i2 < childLevel1.getChildNodes().getLength(); i2++) { Node childLevel2 = (Node) childLevel1.getChildNodes().item(i2); // <Sequence> element if (childLevel2.getNodeName().equals(ConfigKeys.TAG_SEQUENCE)) { sequenceElement = (Element) childLevel2; for (int i3 = 0; i3 < childLevel2.getChildNodes().getLength(); i3++) { Node childLevel3 = (Node) childLevel2.getChildNodes().item(i3); // <Element> element if (childLevel3.getNodeName().equals(ConfigKeys.TAG_ELEMENT)) { // check if single element or group if (isGroup(childLevel3)) { processGroup(childLevel3, true, null, null, docAttrMap, xsdDoc, newElementsList); // insert a new comment node with the contents of the group tag sequenceElement.insertBefore(xsdDoc.createComment(serialize(childLevel3)), childLevel3); // remove the group tag sequenceElement.removeChild(childLevel3); } else { processElement(childLevel3); } } } } } } } // add new elements // this step should be after finishing processing the whole document. when you add new elements to the document // while you are working on it, those new elements will be included in the processing. We don't need that! for (int i = 0; i < newElementsList.size(); i++) { sequenceElement.appendChild((Element) newElementsList.get(i)); } // write the new required attributes to the schema element Iterator<String> attrIter = docAttrMap.keySet().iterator(); while(attrIter.hasNext()) { Element attr = (Element) docAttrMap.get(attrIter.next()); Element newAttrElement = xsdDoc.createElement(ConfigKeys.TAG_ATTRIBUTE); appLogger.debug("appending attr. [" + attr.getAttribute(ConfigKeys.ATTR_NAME) + "]..."); newAttrElement.setAttribute(ConfigKeys.ATTR_NAME, attr.getAttribute(ConfigKeys.ATTR_NAME)); newAttrElement.setAttribute(ConfigKeys.ATTR_TYPE, attr.getAttribute(ConfigKeys.ATTR_TYPE)); schemaElement.appendChild(newAttrElement); } return xsdDoc; } /** * add a new <code>XSDElement</code> with the given <code>name</code> and <code>businessName</code> to * the elements list * * @param name * @param businessName */ private void addToElementsList(String name, String businessName) { xsdElementsList.add(new XSDElement(name, businessName)); } /** * add the given <code>XSDElement</code> to the elements list * * @param element */ private void addToElementsList(XSDElement element) { xsdElementsList.add(element); } /** * check if the <code>element</code> sent is single-value element or group * element. the comparison depends on the children of the element. 
if found one of type * <code>ComplexType</code> then it's a group element, and if of type * <code>SimpleType</code> then it's a single-value element * * @param element * @return <code>true</code> if the element is a group element, * <code>false</code> otherwise */ private boolean isGroup(Node element) { for (int i = 0; i < element.getChildNodes().getLength(); i++) { Node child = (Node) element.getChildNodes().item(i); if (child.getNodeName().equals(ConfigKeys.TAG_COMPLEX_TYPE)) { // found a ComplexType child (Group element) return true; } else if (child.getNodeName().equals(ConfigKeys.TAG_SIMPLE_TYPE)) { // found a SimpleType child (Single-Value element) return false; } } return false; /* String attrName = null; if (element.getAttributes() != null) { Node attribute = element.getAttributes().getNamedItem(XSDTransformer.ATTR_NAME); if (attribute != null) { attrName = attribute.getNodeValue(); } } if (attrName.startsWith("g")) { // group element return true; } else { // single element return false; } */ } /** * process a group element. recursively, process groups till no more group elements are found * * @param element * @param isFirstLevelGroup * @param attrMap * @param docAttrMap * @param xsdDoc * @param newElementsList */ private void processGroup(Node element, boolean isFirstLevelGroup, Node parentGroup, XSDElement parentGroupElement, HashMap<String, Object> docAttrMap, Document xsdDoc, ArrayList<Object> newElementsList) { String elementName = null; HashMap<String, Object> groupAttrMap = new HashMap<String, Object>(); HashMap<String, Object> parentGroupAttrMap = new HashMap<String, Object>(); XSDElement groupElement = null; if (element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME) != null) { elementName = element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME).getNodeValue(); } appLogger.debug("processing group [" + elementName + "]..."); groupElement = new XSDElement(elementName, elementName); // get the attributes if a non-first-level-group // attributes are: groups's own attributes + parent group's attributes if (!isFirstLevelGroup) { // get the current element (group) attributes for (int i1 = 0; i1 < element.getChildNodes().getLength(); i1++) { if (ConfigKeys.TAG_COMPLEX_TYPE.equals(element.getChildNodes().item(i1).getNodeName())) { Node complexTypeNode = element.getChildNodes().item(i1); for (int i2 = 0; i2 < complexTypeNode.getChildNodes().getLength(); i2++) { if (ConfigKeys.TAG_ATTRIBUTE.equals(complexTypeNode.getChildNodes().item(i2).getNodeName())) { appLogger.debug("add group attr: " + ((Element) complexTypeNode.getChildNodes().item(i2)).getAttribute(ConfigKeys.ATTR_NAME)); groupAttrMap.put(((Element) complexTypeNode.getChildNodes().item(i2)).getAttribute(ConfigKeys.ATTR_NAME), complexTypeNode.getChildNodes().item(i2)); docAttrMap.put(((Element) complexTypeNode.getChildNodes().item(i2)).getAttribute(ConfigKeys.ATTR_NAME), complexTypeNode.getChildNodes().item(i2)); } } } } // now, get the parent's attributes parentGroupAttrMap = groupAttrs.get(parentGroup); if (parentGroupAttrMap != null) { Iterator<String> iter = parentGroupAttrMap.keySet().iterator(); while (iter.hasNext()) { String attrName = iter.next(); groupAttrMap.put(attrName, parentGroupAttrMap.get(attrName)); } } // add the attributes to the group element that will be added to the elements list Iterator<String> itr = groupAttrMap.keySet().iterator(); while(itr.hasNext()) { groupElement.addAttribute(itr.next()); } // put the attributes in the attributes map groupAttrs.put(element, groupAttrMap); } for (int i = 0; 
i < element.getChildNodes().getLength(); i++) { Node childLevel1 = (Node) element.getChildNodes().item(i); if (childLevel1.getNodeName().equals(ConfigKeys.TAG_COMPLEX_TYPE)) { for (int j = 0; j < childLevel1.getChildNodes().getLength(); j++) { Node childLevel2 = (Node) childLevel1.getChildNodes().item(j); if (childLevel2.getNodeName().equals(ConfigKeys.TAG_SEQUENCE)) { for (int k = 0; k < childLevel2.getChildNodes().getLength(); k++) { Node childLevel3 = (Node) childLevel2.getChildNodes().item(k); if (childLevel3.getNodeName().equals(ConfigKeys.TAG_ELEMENT)) { // check if single element or group if (isGroup(childLevel3)) { // another group element.. // unfortunately, a recursion is // needed here!!! :-( processGroup(childLevel3, false, element, groupElement, docAttrMap, xsdDoc, newElementsList); } else { // reached a single-value element.. copy it under the // main sequence and apply the name<>shorname replacement processGroupElement(childLevel3, element, groupElement, isFirstLevelGroup, xsdDoc, newElementsList); } } } } } } } if (isFirstLevelGroup) { addToElementsList(groupElement); } else { parentGroupElement.addChild(groupElement); } appLogger.debug("finished processing group [" + elementName + "]."); } /** * process the sent <code>element</code> to extract/modify required * information: * 1. replace the <code>name</code> attribute with the <code>shortname</code>. * * @param element */ private void processElement(Node element) { String fieldShortName = null; String fieldColumnName = null; String fieldDataType = null; String fieldFormat = null; String fieldInputLength = null; String elementName = null; HashMap<String, String> elementProperties = new HashMap<String, String>(); if (element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME) != null) { elementName = element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME).getNodeValue(); } appLogger.debug("processing element [" + elementName + "]..."); for (int i = 0; i < element.getChildNodes().getLength(); i++) { Node childLevel1 = (Node) element.getChildNodes().item(i); if (childLevel1.getNodeName().equals(ConfigKeys.TAG_ANNOTATION)) { for (int j = 0; j < childLevel1.getChildNodes().getLength(); j++) { Node childLevel2 = (Node) childLevel1.getChildNodes().item(j); if (childLevel2.getNodeName().equals(ConfigKeys.TAG_APP_INFO)) { for (int k = 0; k < childLevel2.getChildNodes().getLength(); k++) { Node childLevel3 = (Node) childLevel2.getChildNodes().item(k); if (childLevel3.getNodeName().equals(ConfigKeys.TAG_HAS_PROPERTY)) { if (childLevel3.getAttributes() != null) { String attrName = null; Node attribute = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME); if (attribute != null) { attrName = attribute.getNodeValue(); elementProperties.put(attrName, childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue()); if (attrName.equals(ConfigKeys.FIELD_SHORT_NAME)) { fieldShortName = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_COLUMN_NAME)) { fieldColumnName = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_DATA_TYPE)) { fieldDataType = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_FMT)) { fieldFormat = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_INPUT_LEN)) { fieldInputLength = 
childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } } } } } } } } } // replace the name attribute with the shortname if (element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME) != null) { element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME).setNodeValue(fieldShortName); } elementProperties.put(ConfigKeys.FIELD_SINGLE_OR_MULTI, "S"); constructElementRow(elementProperties); singleValueTableColumns.add(fieldShortName + ConfigKeys.DELIMITER_COLUMN_TYPE + fieldDataType + fieldFormat + ConfigKeys.DELIMITER_COLUMN_TYPE + fieldInputLength); // add the element to elements list addToElementsList(fieldShortName, fieldColumnName); appLogger.debug("finished processing element [" + elementName + "]."); } /** * process the sent <code>element</code> to extract/modify required * information: * 1. copy the element under the main sequence * 2. replace the <code>name</code> attribute with the <code>shortname</code>. * 3. add the attributes of the parent groups (if non-first-level-group) * * @param element */ private void processGroupElement(Node element, Node parentGroup, XSDElement parentGroupElement, boolean isFirstLevelGroup, Document xsdDoc, ArrayList<Object> newElementsList) { String fieldShortName = null; String fieldColumnName = null; String fieldDataType = null; String fieldFormat = null; String fieldInputLength = null; String elementName = null; Element newElement = null; HashMap<String, String> elementProperties = new HashMap<String, String>(); ArrayList<String> tableColumns = new ArrayList<String>(); HashMap<String, Object> groupAttrMap = null; if (element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME) != null) { elementName = element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME).getNodeValue(); } appLogger.debug("processing element [" + elementName + "]..."); // 1. copy the element newElement = (Element) element.cloneNode(true); newElement.setAttribute(ConfigKeys.ATTR_MAX_OCCURS, "unbounded"); // 2. if non-first-level-group, replace the element's SimpleType tag with a ComplexType tag if (!isFirstLevelGroup) { if (((Element) newElement).getElementsByTagName(ConfigKeys.TAG_SIMPLE_TYPE).getLength() != 0) { // there should be only one tag of SimpleType Node simpleTypeNode = ((Element) newElement).getElementsByTagName(ConfigKeys.TAG_SIMPLE_TYPE).item(0); // create the new ComplexType element Element complexTypeNode = xsdDoc.createElement(ConfigKeys.TAG_COMPLEX_TYPE); complexTypeNode.setAttribute(ConfigKeys.ATTR_MIXED, "true"); // get the list of attributes for the parent group groupAttrMap = groupAttrs.get(parentGroup); Iterator<String> attrIter = groupAttrMap.keySet().iterator(); while(attrIter.hasNext()) { Element attr = (Element) groupAttrMap.get(attrIter.next()); Element newAttrElement = xsdDoc.createElement(ConfigKeys.TAG_ATTRIBUTE); appLogger.debug("adding attr. [" + attr.getAttribute(ConfigKeys.ATTR_NAME) + "]..."); newAttrElement.setAttribute(ConfigKeys.ATTR_REF, attr.getAttribute(ConfigKeys.ATTR_NAME)); newAttrElement.setAttribute(ConfigKeys.ATTR_USE, "optional"); complexTypeNode.appendChild(newAttrElement); } // replace the old SimpleType node with the new ComplexType node newElement.replaceChild(complexTypeNode, simpleTypeNode); } } // 3. 
replace the name with the shortname in the new element for (int i = 0; i < newElement.getChildNodes().getLength(); i++) { Node childLevel1 = (Node) newElement.getChildNodes().item(i); if (childLevel1.getNodeName().equals(ConfigKeys.TAG_ANNOTATION)) { for (int j = 0; j < childLevel1.getChildNodes().getLength(); j++) { Node childLevel2 = (Node) childLevel1.getChildNodes().item(j); if (childLevel2.getNodeName().equals(ConfigKeys.TAG_APP_INFO)) { for (int k = 0; k < childLevel2.getChildNodes().getLength(); k++) { Node childLevel3 = (Node) childLevel2.getChildNodes().item(k); if (childLevel3.getNodeName().equals(ConfigKeys.TAG_HAS_PROPERTY)) { if (childLevel3.getAttributes() != null) { String attrName = null; Node attribute = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME); if (attribute != null) { attrName = attribute.getNodeValue(); elementProperties.put(attrName, childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue()); if (attrName.equals(ConfigKeys.FIELD_SHORT_NAME)) { fieldShortName = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_COLUMN_NAME)) { fieldColumnName = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_DATA_TYPE)) { fieldDataType = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_FMT)) { fieldFormat = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_INPUT_LEN)) { fieldInputLength = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } } } } } } } } } if (newElement.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME) != null) { newElement.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME).setNodeValue(fieldShortName); } // 4. save the new element to be added to the sequence list newElementsList.add(newElement); elementProperties.put(ConfigKeys.FIELD_SINGLE_OR_MULTI, "M"); constructElementRow(elementProperties); // create the MULTI-VALUE table // 0. Primary Key tableColumns.add(ConfigKeys.COLUMN_XPK_ROW + ConfigKeys.DELIMITER_COLUMN_TYPE + ConfigKeys.DATA_TYPE_XSD_STRING + ConfigKeys.DELIMITER_COLUMN_TYPE + ConfigKeys.COLUMN_XPK_ROW_LENGTH); // 1. foreign key tableColumns.add(ConfigKeys.COLUMN_FK_ROW + ConfigKeys.DELIMITER_COLUMN_TYPE + ConfigKeys.DATA_TYPE_XSD_NUMERIC); // 2. field value tableColumns.add(fieldShortName + ConfigKeys.DELIMITER_COLUMN_TYPE + fieldDataType + fieldFormat + ConfigKeys.DELIMITER_COLUMN_TYPE + fieldInputLength); // 3. 
attributes if (groupAttrMap != null) { Iterator<String> attrIter = groupAttrMap.keySet().iterator(); while (attrIter.hasNext()) { Element attr = (Element) groupAttrMap.get(attrIter.next()); tableColumns.add(attr.getAttribute(ConfigKeys.ATTR_NAME) + ConfigKeys.DELIMITER_COLUMN_TYPE + ConfigKeys.DATA_TYPE_XSD_NUMERIC); } } multiValueTablesSQL.put(sub_table_prefix.getText() + fieldShortName, constructMultiValueTableSQL( sub_table_prefix.getText() + fieldShortName, tableColumns)); // add the element to it's parent group children parentGroupElement.addChild(new XSDElement(fieldShortName, fieldColumnName)); appLogger.debug("finished processing element [" + elementName + "]."); } /** * write resulted files * * @param xsdDoc * @param docPath */ private void writeResults(Document xsdDoc, String resultsDir, String newXSDFileName, String csvFileName) { String rsDir = resultsDir + File.separator + new SimpleDateFormat("yyyyMMdd-HHmm").format(new Date()); try { File resultsDirFile = new File(rsDir); if (!resultsDirFile.exists()) { resultsDirFile.mkdirs(); } // write the XSD doc appLogger.info("writing the transformed XSD..."); Source source = new DOMSource(xsdDoc); Result result = new StreamResult(rsDir + File.separator + newXSDFileName); Transformer xformer = TransformerFactory.newInstance().newTransformer(); // xformer.setOutputProperty("indent", "yes"); xformer.transform(source, result); appLogger.info("finished writing the transformed XSD."); // write the CSV columns file appLogger.info("writing the CSV file..."); FileWriter csvWriter = new FileWriter(rsDir + File.separator + csvFileName); csvWriter.write(columnsCSV.toString()); csvWriter.close(); appLogger.info("finished writing the CSV file."); // write the master single-value table appLogger.info("writing the creation script for master table (single-values)..."); FileWriter masterTableWriter = new FileWriter(rsDir + File.separator + main_edh_table_name.getText() + ".sql"); masterTableWriter.write(constructSingleValueTableSQL(main_edh_table_name.getText(), singleValueTableColumns)); masterTableWriter.close(); appLogger.info("finished writing the creation script for master table (single-values)."); // write the multi-value tables sql appLogger.info("writing the creation script for slave tables (multi-values)..."); Iterator<String> iter = multiValueTablesSQL.keySet().iterator(); while (iter.hasNext()) { String tableName = iter.next(); String sql = multiValueTablesSQL.get(tableName); FileWriter tableSQLWriter = new FileWriter(rsDir + File.separator + tableName + ".sql"); tableSQLWriter.write(sql); tableSQLWriter.close(); } appLogger.info("finished writing the creation script for slave tables (multi-values)."); // write the single-value view appLogger.info("writing the creation script for single-value selection view..."); FileWriter singleValueViewWriter = new FileWriter(rsDir + File.separator + view_name_single.getText() + ".sql"); singleValueViewWriter.write(constructViewSQL(ConfigKeys.SQL_VIEW_SINGLE)); singleValueViewWriter.close(); appLogger.info("finished writing the creation script for single-value selection view."); // debug for (int i = 0; i < xsdElementsList.size(); i++) { getMultiView(xsdElementsList.get(i)); /*// if (xsdElementsList.get(i).getAllDescendants() != null) { // for (int j = 0; j < xsdElementsList.get(i).getAllDescendants().size(); j++) { // appLogger.debug(main_edh_table_name.getText() + "." + ConfigKeys.COLUMN_XPK_ROW // + "=" + xsdElementsList.get(i).getAllDescendants().get(j).getName() + "." 
+ ConfigKeys.COLUMN_FK_ROW); // } // } */ } } catch (Exception e) { appLogger.error(e.getMessage()); } } private String getMultiView(XSDElement element)
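    The pasted class does not show where a splash screen would hook in, so here is a minimal, self-contained Swing sketch: a borderless JWindow displayed before the application launches and disposed after a delay. The image path, the delay, and the Application subclass named in the usage note are illustrative assumptions, not names taken from the project above.

        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;
        import javax.swing.ImageIcon;
        import javax.swing.JLabel;
        import javax.swing.JWindow;
        import javax.swing.Timer;

        public final class SplashWindow {
            /** Shows a centered splash image and disposes it after millis. */
            public static JWindow show(String imagePath, int millis) {
                final JWindow window = new JWindow();
                window.getContentPane().add(new JLabel(new ImageIcon(imagePath)));
                window.pack();                       // size the window to the image
                window.setLocationRelativeTo(null);  // center on screen
                window.setVisible(true);
                Timer timer = new Timer(millis, new ActionListener() {
                    public void actionPerformed(ActionEvent e) {
                        window.dispose();            // close the splash
                    }
                });
                timer.setRepeats(false);             // fire once, not periodically
                timer.start();
                return window;
            }
        }

    In a Swing Application Framework app the call would go at the top of main(), e.g. SplashWindow.show("splash.png", 3000); before Application.launch(T24_Transformer_FormApp.class, args) — the application class name here is assumed.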

  • Randomly displayed flashing lines, no response to all shortcuts, just power off. [syslog included]

    - by B. Roland
    Hello! I have an old machine that I want to use to teach employees how to use Ubuntu and to make it easier for them to switch from Windows. I installed 10.04 and updated it, but this strange stuff happened. The graphical installation failed with the same strange thing; the alternate installer worked. Sometimes when I boot up, a boot message is displayed: Keyboard failure..., often after a reboot, or after a shutdown when I haven't unplugged the machine from AC. I have already replaced the keyboard; same failure... If I power off and unplug from AC, no keyboard problems appear at boot time.

    Details

    Configuration: Dell OptiPlex GX60 - in original cover, no changes. 256 MB DDR 166 MHz, Intel® Celeron® Processor 2.40 GHz, Dell 0C3207 Base Board. I know that is not much, but I have three other NEC computers with nearly similar configs, and they work well with 9.10, 10.04 and 10.10.

    Live CDs

    I've tried the 10.04 and 10.10 live CDs, but the problem shows up there too. With 9.10 no strange things appeared, but it froze during a simple apt-get install.

    Syslog

    An error loop is logged here, but I paste the whole startup and error lines. The flashing lines are displayed sometimes immediately after login, sometimes after 10 minutes, and once nothing happened at all. A strange thing displayed immediately after login: here. On another boot, after some minutes, strange lines and a loop appeared in the log: here. The loop should be this:

    Jan 23 00:20:08 machine_name kernel: [ 46.782212] [drm:i915_gem_entervt_ioctl] *ERROR* Reenabling wedged hardware, good luck
    Jan 23 00:20:08 machine_name kernel: [ 47.100033] [drm:i915_hangcheck_elapsed] *ERROR* Hangcheck timer elapsed... GPU hung
    Jan 23 00:20:08 machine_name kernel: [ 47.100045] render error detected, EIR: 0x00000000
    Jan 23 00:20:08 machine_name kernel: [ 47.101487] [drm:i915_do_wait_request] *ERROR* i915_do_wait_request returns -5 (awaiting 16 at 9)
    Jan 23 00:20:11 machine_name kernel: [ 49.152020] [drm:i915_gem_idle] *ERROR* hardware wedged
    Jan 23 00:20:11 machine_name gdm-simple-slave[1245]: WARNING: Unable to load file '/etc/gdm/custom.conf': No such file or directory
    Jan 23 00:20:11 machine_name acpid: client 1239[0:0] has disconnected
    Jan 23 00:20:11 machine_name acpid: client connected from 1247[0:0]
    Jan 23 00:20:11 machine_name acpid: 1 client rule loaded

    UPDATE: Added syslog things: before the errors, the error loop, and the complete shutdown (after the big updates):

    Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Sucessfully called chroot.
    Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Sucessfully dropped privileges.
    Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Sucessfully limited resources.
    Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Running.
    Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Watchdog thread running.
    Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Canary thread running.
    Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Sucessfully made thread 1337 of process 1337 (n/a) owned by '1001' high priority at nice level -11.
    Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Supervising 1 threads of 1 processes of 1 users.
    Jan 28 20:40:32 machine_name rtkit-daemon[1339]: Sucessfully made thread 1345 of process 1337 (n/a) owned by '1001' RT at priority 5.
    Jan 28 20:40:32 machine_name rtkit-daemon[1339]: Supervising 2 threads of 1 processes of 1 users.
    Jan 28 20:40:32 machine_name rtkit-daemon[1339]: Sucessfully made thread 1349 of process 1337 (n/a) owned by '1001' RT at priority 5.
    Jan 28 20:40:32 machine_name rtkit-daemon[1339]: Supervising 3 threads of 1 processes of 1 users.
    Jan 28 20:40:37 machine_name pulseaudio[1337]: ratelimit.c: 2 events suppressed
    Jan 28 20:41:33 machine_name AptDaemon: INFO: Initializing daemon
    Jan 28 20:41:44 machine_name kernel: [ 167.691563] lo: Disabled Privacy Extensions
    Jan 28 20:47:33 machine_name AptDaemon: INFO: Quiting due to inactivity
    Jan 28 20:47:33 machine_name AptDaemon: INFO: Shutdown was requested
    Jan 28 20:59:50 machine_name kernel: [ 1253.840513] lo: Disabled Privacy Extensions
    Jan 28 21:17:02 machine_name CRON[1874]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
    Jan 28 21:17:38 machine_name kernel: [ 2321.553239] lo: Disabled Privacy Extensions
    Jan 28 22:07:44 machine_name kernel: [ 5327.840254] lo: Disabled Privacy Extensions
    Jan 28 22:17:02 machine_name CRON[2665]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
    Jan 28 22:32:38 machine_name sudo: pam_sm_authenticate: Called
    Jan 28 22:32:38 machine_name sudo: pam_sm_authenticate: username = [some_user]
    Jan 28 22:32:38 machine_name sudo: pam_sm_authenticate: /home/some_user is already mounted
    Jan 28 22:57:03 machine_name kernel: [ 8286.641472] lo: Disabled Privacy Extensions
    Jan 28 22:57:24 machine_name sudo: pam_sm_authenticate: Called
    Jan 28 22:57:24 machine_name sudo: pam_sm_authenticate: username = [some_user]
    Jan 28 22:57:24 machine_name sudo: pam_sm_authenticate: /home/some_user is already mounted
    Jan 28 23:07:42 machine_name kernel: [ 8925.272030] [drm:i915_hangcheck_elapsed] *ERROR* Hangcheck timer elapsed... GPU hung
    Jan 28 23:07:42 machine_name kernel: [ 8925.272048] render error detected, EIR: 0x00000000
    Jan 28 23:07:42 machine_name kernel: [ 8925.272093] [drm:i915_do_wait_request] *ERROR* i915_do_wait_request returns -5 (awaiting 171453 at 171452)
    Jan 28 23:07:45 machine_name kernel: [ 8928.868041] [drm:i915_gem_idle] *ERROR* hardware wedged
    Jan 28 23:08:10 machine_name acpid: client 925[0:0] has disconnected
    Jan 28 23:08:10 machine_name acpid: client connected from 8127[0:0]
    Jan 28 23:08:10 machine_name acpid: 1 client rule loaded
    Jan 28 23:08:11 machine_name kernel: [ 8955.046248] [drm:i915_gem_entervt_ioctl] *ERROR* Reenabling wedged hardware, good luck
    Jan 28 23:08:12 machine_name kernel: [ 8955.364016] [drm:i915_hangcheck_elapsed] *ERROR* Hangcheck timer elapsed... GPU hung
    Jan 28 23:08:12 machine_name kernel: [ 8955.364027] render error detected, EIR: 0x00000000
    Jan 28 23:08:12 machine_name kernel: [ 8955.364407] [drm:i915_do_wait_request] *ERROR* i915_do_wait_request returns -5 (awaiting 171457 at 171452)
    Jan 28 23:08:14 machine_name kernel: [ 8957.472025] [drm:i915_gem_idle] *ERROR* hardware wedged
    Jan 28 23:08:14 machine_name acpid: client 8127[0:0] has disconnected
    Jan 28 23:08:14 machine_name acpid: client connected from 8141[0:0]
    Jan 28 23:08:14 machine_name acpid: 1 client rule loaded
    Jan 28 23:08:15 machine_name kernel: [ 8958.671722] [drm:i915_gem_entervt_ioctl] *ERROR* Reenabling wedged hardware, good luck
    Jan 28 23:08:15 machine_name kernel: [ 8958.988015] [drm:i915_hangcheck_elapsed] *ERROR* Hangcheck timer elapsed... GPU hung
    Jan 28 23:08:15 machine_name kernel: [ 8958.988026] render error detected, EIR: 0x00000000
    Jan 28 23:08:15 machine_name kernel: [ 8958.989400] [drm:i915_do_wait_request] *ERROR* i915_do_wait_request returns -5 (awaiting 171459 at 171452)
    Jan 28 23:08:16 machine_name init: tty4 main process (848) killed by TERM signal
    Jan 28 23:08:16 machine_name init: tty5 main process (856) killed by TERM signal
    Jan 28 23:08:16 machine_name NetworkManager: nm_signal_handler(): Caught signal 15, shutting down normally.
    Jan 28 23:08:16 machine_name init: tty2 main process (874) killed by TERM signal
    Jan 28 23:08:16 machine_name init: tty3 main process (875) killed by TERM signal
    Jan 28 23:08:16 machine_name init: tty6 main process (877) killed by TERM signal
    Jan 28 23:08:16 machine_name init: cron main process (890) killed by TERM signal
    Jan 28 23:08:16 machine_name init: tty1 main process (1146) killed by TERM signal
    Jan 28 23:08:16 machine_name avahi-daemon[644]: Got SIGTERM, quitting.
    Jan 28 23:08:16 machine_name avahi-daemon[644]: Leaving mDNS multicast group on interface eth0.IPv4 with address 10.238.11.134.
    Jan 28 23:08:16 machine_name acpid: exiting
    Jan 28 23:08:16 machine_name init: avahi-daemon main process (644) terminated with status 255
    Jan 28 23:08:17 machine_name kernel: Kernel logging (proc) stopped.
    Jan 28 23:09:00 machine_name kernel: imklog 4.2.0, log source = /proc/kmsg started.
    Jan 28 23:09:00 machine_name rsyslogd: [origin software="rsyslogd" swVersion="4.2.0" x-pid="516" x-info="http://www.rsyslog.com"] (re)start
    Jan 28 23:09:00 machine_name rsyslogd: rsyslogd's groupid changed to 103
    Jan 28 23:09:00 machine_name rsyslogd: rsyslogd's userid changed to 101
    Jan 28 23:09:00 machine_name rsyslogd-2039: Could no open output file '/dev/xconsole' [try http://www.rsyslog.com/e/2039 ]

    When I hit the On/Off button, the system shuts down normally. Maybe it's a hardware problem, but I don't know... Can you suggest anything useful to solve my problem?
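    Not a definitive answer, but the log points at the i915 DRM driver (repeated GPU hangs on the GX60's integrated Intel graphics) rather than at the keyboard. A hedged workaround sketch, assuming the GPU hangs cause the flashing lines, is to disable kernel mode setting for i915 and see whether the symptoms stop:

        # /etc/default/grub - add i915.modeset=0 to the kernel command line:
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash i915.modeset=0"
        # apply the change and reboot:
        sudo update-grub
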

  • bluetooth not working on Ubuntu 13.10

    - by iacopo
    I upgraded Ubuntu from 13.04 to 13.10 and my Bluetooth stopped working. When I open Bluetooth I'm able to turn it ON, but visibility doesn't show anything and no device is detected.

    When I run dmesg | grep Blue:

    [ 2.046249] usb 3-1: Product: Bluetooth V2.0 Dongle
    [ 2.046252] usb 3-1: Manufacturer: Bluetooth v2.0
    [ 15.255710] Bluetooth: Core ver 2.16
    [ 15.255748] Bluetooth: HCI device and connection manager initialized
    [ 15.255759] Bluetooth: HCI socket layer initialized
    [ 15.255765] Bluetooth: L2CAP socket layer initialized
    [ 15.255776] Bluetooth: SCO socket layer initialized
    [ 20.110379] Bluetooth: BNEP (Ethernet Emulation) ver 1.3
    [ 20.110386] Bluetooth: BNEP filters: protocol multicast
    [ 20.110400] Bluetooth: BNEP socket layer initialized
    [ 20.120635] Bluetooth: RFCOMM TTY layer initialized
    [ 20.120656] Bluetooth: RFCOMM socket layer initialized
    [ 20.120660] Bluetooth: RFCOMM ver 1.11

    When I run lsusb:

    Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 002 Device 002: ID 0bc2:2300 Seagate RSS LLC Expansion Portable
    Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 006 Device 002: ID 0e6a:6001 Megawin Technology Co., Ltd GEMBIRD Flexible keyboard KB-109F-B-DE
    Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 005 Device 002: ID 13ee:0001 MosArt
    Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
    Bus 003 Device 002: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode)
    Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    When I run hciconfig -a:

    hci0: Type: BR/EDR Bus: USB
    BD Address: 00:1B:10:00:2A:EC ACL MTU: 1017:8 SCO MTU: 64:0
    DOWN
    RX bytes:457 acl:0 sco:0 events:16 errors:0
    TX bytes:68 acl:0 sco:0 commands:16 errors:0
    Features: 0xff 0xff 0x8d 0xfe 0x9b 0xf9 0x00 0x80
    Packet type: DM1 DM3 DM5 DH1 DH3 DH5 HV1 HV2 HV3
    Link policy:
    Link mode: SLAVE ACCEPT

    When I run rfkill list:

    0: phy0: Wireless LAN
    Soft blocked: yes
    Hard blocked: no
    1: hci0: Bluetooth
    Soft blocked: no
    Hard blocked: no

    When I open /etc/bluetooth/main.conf (sudo gedit /etc/bluetooth/main.conf):

    [General]
    # List of plugins that should not be loaded on bluetoothd startup
    #DisablePlugins = network,input
    # Default adaper name
    # %h - substituted for hostname
    # %d - substituted for adapter id
    Name = %h-%d
    # Default device class. Only the major and minor device class bits are
    # considered.
    Class = 0x000100
    # How long to stay in discoverable mode before going back to non-discoverable
    # The value is in seconds. Default is 180, i.e. 3 minutes.
    # 0 = disable timer, i.e. stay discoverable forever
    DiscoverableTimeout = 0
    # How long to stay in pairable mode before going back to non-discoverable
    # The value is in seconds. Default is 0.
    # 0 = disable timer, i.e. stay pairable forever
    PairableTimeout = 0
    # Use some other page timeout than the controller default one
    # which is 16384 (10 seconds).
    PageTimeout = 8192
    # Automatic connection for bonded devices driven by platform/user events.
    # If a platform plugin uses this mechanism, automatic connections will be
    # enabled during the interval defined below. Initially, this feature
    # intends to be used to establish connections to ATT channels.
    AutoConnectTimeout = 60
    # What value should be assumed for the adapter Powered property when
    # SetProperty(Powered, ...) hasn't been called yet. Defaults to true
    InitiallyPowered = true
    # Remember the previously stored Powered state when initializing adapters
    RememberPowered = false
    # Use vendor id source (assigner), vendor, product and version information for
    # DID profile support. The values are separated by ":" and assigner, VID, PID
    # and version.
    # Possible vendor id source values: bluetooth, usb (defaults to usb)
    #DeviceID = bluetooth:1234:5678:abcd
    # Do reverse service discovery for previously unknown devices that connect to
    # us. This option is really only needed for qualification since the BITE tester
    # doesn't like us doing reverse SDP for some test cases (though there could in
    # theory be other useful purposes for this too). Defaults to true.
    ReverseServiceDiscovery = true
    # Enable name resolving after inquiry. Set it to 'false' if you don't need
    # remote devices name and want shorter discovery cycle. Defaults to 'true'.
    NameResolving = true
    # Enable runtime persistency of debug link keys. Default is false which
    # makes debug link keys valid only for the duration of the connection
    # that they were created for.
    DebugKeys = false
    # Enable the GATT functionality. Default is false
    EnableGatt = false

    When I run dmesg | grep Bluetooth:

    [ 2.013041] usb 3-1: Product: Bluetooth V2.0 Dongle
    [ 2.013049] usb 3-1: Manufacturer: Bluetooth v2.0
    [ 13.798293] Bluetooth: Core ver 2.16
    [ 13.798338] Bluetooth: HCI device and connection manager initialized
    [ 13.798352] Bluetooth: HCI socket layer initialized
    [ 13.798357] Bluetooth: L2CAP socket layer initialized
    [ 13.798368] Bluetooth: SCO socket layer initialized
    [ 20.184162] Bluetooth: BNEP (Ethernet Emulation) ver 1.3
    [ 20.184173] Bluetooth: BNEP filters: protocol multicast
    [ 20.184197] Bluetooth: BNEP socket layer initialized
    [ 20.238947] Bluetooth: RFCOMM TTY layer initialized
    [ 20.238983] Bluetooth: RFCOMM socket layer initialized
    [ 20.239018] Bluetooth: RFCOMM ver 1.11

    When I run uname -a:

    Linux casa-desktop 3.11.0-13-generic #20-Ubuntu SMP Wed Oct 23 07:38:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

    When I run lsmod:

    Module Size Used by
    parport_pc 32701 0
    rfcomm 69070 4
    bnep 19564 2
    ppdev 17671 0
    ip6t_REJECT 12910 1
    xt_hl 12521 6
    ip6t_rt 13507 3
    nf_conntrack_ipv6 18938 9
    nf_defrag_ipv6 34616 1 nf_conntrack_ipv6
    ipt_REJECT 12541 1
    xt_LOG 17718 8
    xt_limit 12711 11
    xt_tcpudp 12884 32
    xt_addrtype 12635 4
    nf_conntrack_ipv4 15012 9
    nf_defrag_ipv4 12729 1 nf_conntrack_ipv4
    xt_conntrack 12760 18
    ip6table_filter 12815 1
    ip6_tables 27025 1 ip6table_filter
    nf_conntrack_netbios_ns 12665 0
    nf_conntrack_broadcast 12589 1 nf_conntrack_netbios_ns
    nf_nat_ftp 12741 0
    nf_nat 26653 1 nf_nat_ftp
    kvm_amd 59958 0
    nf_conntrack_ftp 18608 1 nf_nat_ftp
    kvm 431315 1 kvm_amd
    nf_conntrack 91736 8 nf_nat_ftp,nf_conntrack_netbios_ns,nf_nat,xt_conntrack,nf_conntrack_broadcast,nf_conntrack_ftp,nf_conntrack_ipv4,nf_conntrack_ipv6
    iptable_filter 12810 1
    crct10dif_pclmul 14289 0
    crc32_pclmul 13113 0
    ip_tables 27239 1 iptable_filter
    snd_hda_codec_realtek 55704 1
    ghash_clmulni_intel 13259 0
    aesni_intel 55624 0
    aes_x86_64 17131 1 aesni_intel
    snd_hda_codec_hdmi 41117 1
    x_tables 34059 13 ip6table_filter,xt_hl,ip_tables,xt_tcpudp,xt_limit,xt_conntrack,xt_LOG,iptable_filter,ip6t_rt,ipt_REJECT,ip6_tables,xt_addrtype,ip6t_REJECT
    lrw 13257 1 aesni_intel
    snd_hda_intel 48171 5
    gf128mul 14951 1 lrw
    glue_helper 13990 1 aesni_intel
    ablk_helper 13597 1 aesni_intel
    joydev 17377 0
    cryptd 20329 3 ghash_clmulni_intel,aesni_intel,ablk_helper
    snd_hda_codec 188738 3 snd_hda_codec_realtek,snd_hda_codec_hdmi,snd_hda_intel
    arc4 12608 2
    snd_hwdep 13602 1 snd_hda_codec
    rt2800pci 18690 0
    snd_pcm 102033 3 snd_hda_codec_hdmi,snd_hda_codec,snd_hda_intel
    radeon 1402449 3
    rt2800lib 79963 1 rt2800pci
    btusb 28267 0
    rt2x00pci 13287 1 rt2800pci
    rt2x00mmio 13603 1 rt2800pci
    snd_page_alloc 18710 2 snd_pcm,snd_hda_intel
    rt2x00lib 55238 4 rt2x00pci,rt2800lib,rt2800pci,rt2x00mmio
    snd_seq_midi 13324 0
    mac80211 596969 3 rt2x00lib,rt2x00pci,rt2800lib
    snd_seq_midi_event 14899 1 snd_seq_midi
    ttm 83995 1 radeon
    snd_rawmidi 30095 1 snd_seq_midi
    cfg80211 479757 2 mac80211,rt2x00lib
    drm_kms_helper 52651 1 radeon
    snd_seq 61560 2 snd_seq_midi_event,snd_seq_midi
    bluetooth 371880 12 bnep,btusb,rfcomm
    microcode 23518 0
    eeprom_93cx6 13344 1 rt2800pci
    snd_seq_device 14497 3 snd_seq,snd_rawmidi,snd_seq_midi
    crc_ccitt 12707 1 rt2800lib
    snd_timer 29433 2 snd_pcm,snd_seq
    snd 69141 21 snd_hda_codec_realtek,snd_hwdep,snd_timer,snd_hda_codec_hdmi,snd_pcm,snd_seq,snd_rawmidi,snd_hda_codec,snd_hda_intel,snd_seq_device,snd_seq_midi
    psmouse 97626 0
    drm 296739 5 ttm,drm_kms_helper,radeon
    k10temp 13126 0
    soundcore 12680 1 snd
    serio_raw 13413 0
    i2c_algo_bit 13413 1 radeon
    i2c_piix4 22106 0
    video 19318 0
    mac_hid 13205 0
    lp 17759 0
    parport 42299 3 lp,ppdev,parport_pc
    hid_generic 12548 0
    usbhid 53014 0
    hid 105818 2 hid_generic,usbhid
    pata_acpi 13038 0
    usb_storage 62062 1
    r8169 67341 0
    sdhci_pci 18985 0
    sdhci 42630 1 sdhci_pci
    mii 13934 1 r8169
    pata_atiixp 13242 0
    ohci_pci 13561 0
    ahci 25819 2
    libahci 31898 1 ahci

    Can someone help me?
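    One observation from the output above, not a guaranteed fix: hciconfig -a reports hci0 as DOWN, and the soft block shown by rfkill is on the Wireless LAN device, not on Bluetooth. A sketch of the first things worth trying:

        sudo rfkill unblock bluetooth    # clear any soft block on hci0
        sudo hciconfig hci0 up           # bring the adapter up
        sudo service bluetooth restart   # restart bluetoothd
        hciconfig -a                     # should now report UP RUNNING
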

  • My Thoughts On the Xbox 180

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2013/06/21/my-thoughts-on-the-xbox-180.aspx

    Everyone seems to be putting their 0.00237 cents into the wishing well over Microsoft's recent decision to reverse the DRM policy on the Xbox One. However, there have been a few issues that nobody has touched. As such, I have decided to dig 0.00237 cents out of my pocket.

    First, let me be clear about this point. I do not support the decision to reverse the DRM policy on the Xbox One. I wanted that point to be expressed first and unambiguously. I will say it again. I do not support the decision to reverse the DRM policy on the Xbox One.

    Now that I have that out of the way, let me go into my rationale. This decision removes most of the cool features that enticed me to pre-order the console. No, I didn't cancel my pre-order. There are still five months before the release of the console, and there is still a plethora of information that we, as consumers, do not have. With that, it should be noted that much of the talk in this post is speculation and rhetoric. I do not have any insider information that you do not possess.

    The persistent connection would have allowed the console to do many of the functions for which we have been begging. That demo where someone was playing Ryse, seamlessly accepted a multiplayer challenge in Killer Instinct, played the match (and a rematch), and then jumped back into Ryse? That's gone, if you bought the game on disc. The new, DRM-free system will require the disc in the system to play a game. That bullet point where one Xbox Live account could have up to 10 slave accounts so families could play together, no matter where they were located? That's gone as well. The promise of huge, expansive, dynamically changing worlds that was brought to us with the power of cloud computing? Well, "the people" didn't want there to be a forced, persistent connection. As such, developers can't rely on a connection and, as such, that feature is gone. This is akin to the removal of the hard drive on the Xbox 360. The list continues, but the enthusiast press has enumerated the list far better than I care to.

    All of this is because the Xbox team saw the HUGE success of Steam and decided to borrow a few ideas. Yes, Steam. The service that everyone hated for the first six months (for the same reasons the Xbox One is getting flack). There was an initial growing pain. However, it is now lauded as the way games distribution should be handled. Unless you are Microsoft.

    I do find it curious that many of the features were originally announced for the PS4 during its unveiling. However, much of that was left strangely absent from Sony's E3 press conference. Instead, we received a single, static slide that basically said the exact opposite of Microsoft's plans. It is not far-fetched to believe that slide came into existence during the approximately seven hours between the two media briefings.

    The thing that majorly annoys me over this whole kerfuffle is that the single thing that caused the call to arms is, really, not an issue. Microsoft never said they were going to block used sales. They said it was up to the publisher to make that decision. This would have allowed publishers to reclaim some of the costs of development in subsequent sales of the product. If you sell your game to GameStop for 7 USD, GameStop is going to sell it for 55 USD. That is 48 USD pure profit for them. Some publishers asked GameStop for a small cut. Was this a huge, money-grubbing scheme? Well, yes, but the idea was that they have to handle server infrastructure for dormant accounts, etc. Of course, GameStop flatly refused, and the Online Pass was born. Fortunately, this trend didn't last, and most publishers have stopped the practice.

    The ability to sell "licenses" has already begun to be challenged. Are you living in the EU? If so, companies must allow you to sell digital property. With this precedent in place, it's only a matter of time before other areas follow suit. If GameStop were smart, they should have immediately contacted every publisher out there to get the rights to become a clearing house for these licenses. Then, they keep their business model and could reduce their brick-and-mortar footprint.

    The digital landscape is changing. We need to not block this process. As Seth MacFarlane said best: "Some issues are so important that you should drag people kicking and screaming." I believe this was said on an episode of Real Time with Bill Maher about the issue of gay marriage. Much like the original source, this is an issue where we need to drag people to the correct, progressive position.

    Microsoft, as a company, actually has the resources to weather the transition period. They have a great pool of first- and second-party developers that can leverage this new framework to prove its validity. Over time, third-party developers will get excited to use these tools. As an old C++ guy, I resisted C# for years. Now, I think it's one of the best languages I've ever used. I have a server room and a Co-Lo full of servers, so I originally didn't see the value in Azure. Now, I wish I could move every one of my projects into the cloud. I still LOVE getting physical packaging, as my music and games collection will proudly attest. However, I have started to see the value in pure digital, and have found ways to integrate this into the ways I consume those products.

    I can, honestly, understand how some parts of the population would be very apprehensive about this new landscape. There were valid arguments about people with no internet access. There are ways to combat these problems. These methods do not require us to throw the baby out with the bathwater. However, the number of people in the computer industry that I have seen cry foul is truly appalling. We are the forward-looking people who help show how technology can improve people's lives. If we can't see the value of the brief pain involved with an exciting new ecosystem, then who will?

  • Including an embedded framework using a cross-project-reference: Header no such file or directory

    - by d11wtq
    I'm trying to create a Cocoa framework by using a cross-project reference in Xcode. I have 2 projects: one for the framework; one for the application that will use the framework. This framework is not intended to be stored within the system; it is an embedded framework that lives within the application bundle. I have successfully made the cross-project reference, marked the framework as a dependency of my target, added a Copy Files build phase that puts the framework in Contents/Frameworks/, and added the framework to the linker phase (I checked the little "Target" checkbox; I've also done it manually by dragging the framework into the linker phase). My framework's install directory is correctly set to @executable_path/../Frameworks. However, when I try to build my app, it: a) correctly builds the framework first, b) correctly copies the framework, c) errors out because it cannot find the master header file in my framework. I have verified that the header is there; I can see it in the app product that is partially built:

    ls build/Debug/CioccolataTest.webapp/Contents/Frameworks/Cioccolata.framework/Headers/Cioccolata.h
    build/Debug/CioccolataTest.webapp/Contents/Frameworks/Cioccolata.framework/Headers/Cioccolata.h

    I have been able to successfully build the app by copying my framework into /Library/Frameworks (I can then delete it again after the successful build), but this is a workaround; I'm looking to find out why Xcode doesn't find the framework's master header file without it being copied to a system directory. Is copying it to the app bundle during the build not sufficient? (A hedged guess follows after the transcript below.) Here's the full build transcript if it's any help (it's just a Hello World app right now, so not much going on here): Build Cioccolata of project Cioccolata with configuration Debug SymLink /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Versions/Current A cd /Users/chris/Projects/Mac/Cioccolata /bin/ln -sf A /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Versions/Current SymLink /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Resources Versions/Current/Resources cd /Users/chris/Projects/Mac/Cioccolata /bin/ln -sf Versions/Current/Resources /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Resources SymLink /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Headers Versions/Current/Headers cd /Users/chris/Projects/Mac/Cioccolata /bin/ln -sf Versions/Current/Headers /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Headers SymLink /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Cioccolata Versions/Current/Cioccolata cd /Users/chris/Projects/Mac/Cioccolata /bin/ln -sf Versions/Current/Cioccolata /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Cioccolata ProcessInfoPlistFile /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Versions/A/Resources/Info.plist Info.plist cd /Users/chris/Projects/Mac/Cioccolata builtin-infoPlistUtility Info.plist -expandbuildsettings -platform macosx -o /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Versions/A/Resources/Info.plist CpHeader build/Debug/Cioccolata.framework/Versions/A/Headers/CWHelloWorld.h CWHelloWorld.h cd /Users/chris/Projects/Mac/Cioccolata /Developer/Library/PrivateFrameworks/DevToolsCore.framework/Resources/pbxcp -exclude .DS_Store -exclude CVS -exclude .svn -resolve-src-symlinks /Users/chris/Projects/Mac/Cioccolata/CWHelloWorld.h
/Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Versions/A/Headers CpHeader build/Debug/Cioccolata.framework/Versions/A/Headers/Cioccolata.h Cioccolata.h cd /Users/chris/Projects/Mac/Cioccolata /Developer/Library/PrivateFrameworks/DevToolsCore.framework/Resources/pbxcp -exclude .DS_Store -exclude CVS -exclude .svn -resolve-src-symlinks /Users/chris/Projects/Mac/Cioccolata/Cioccolata.h /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Versions/A/Headers CopyStringsFile /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Versions/A/Resources/English.lproj/InfoPlist.strings English.lproj/InfoPlist.strings cd /Users/chris/Projects/Mac/Cioccolata setenv ICONV /usr/bin/iconv /Developer/Library/Xcode/Plug-ins/CoreBuildTasks.xcplugin/Contents/Resources/copystrings --validate --inputencoding utf-8 --outputencoding UTF-16 English.lproj/InfoPlist.strings --outdir /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Versions/A/Resources/English.lproj ProcessPCH /var/folders/Xy/Xy-bvnxtFpiYBQPED0dK1++++TI/-Caches-/com.apple.Xcode.501/SharedPrecompiledHeaders/Cioccolata_Prefix-dololiigmwjzkgenggebqtpvbauu/Cioccolata_Prefix.pch.gch Cioccolata_Prefix.pch normal i386 objective-c com.apple.compilers.gcc.4_2 cd /Users/chris/Projects/Mac/Cioccolata setenv LANG en_US.US-ASCII /Developer/usr/bin/gcc-4.2 -x objective-c-header -arch i386 -fmessage-length=0 -pipe -std=gnu99 -Wno-trigraphs -fpascal-strings -fasm-blocks -O0 -Wreturn-type -Wunused-variable -isysroot /Developer/SDKs/MacOSX10.5.sdk -mfix-and-continue -mmacosx-version-min=10.5 -gdwarf-2 -iquote /Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/Cioccolata-generated-files.hmap -I/Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/Cioccolata-own-target-headers.hmap -I/Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/Cioccolata-all-target-headers.hmap -iquote /Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/Cioccolata-project-headers.hmap -F/Users/chris/Projects/Mac/Cioccolata/build/Debug -I/Users/chris/Projects/Mac/Cioccolata/build/Debug/include -I/Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/DerivedSources/i386 -I/Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/DerivedSources -c /Users/chris/Projects/Mac/Cioccolata/Cioccolata_Prefix.pch -o /var/folders/Xy/Xy-bvnxtFpiYBQPED0dK1++++TI/-Caches-/com.apple.Xcode.501/SharedPrecompiledHeaders/Cioccolata_Prefix-dololiigmwjzkgenggebqtpvbauu/Cioccolata_Prefix.pch.gch CompileC build/Cioccolata.build/Debug/Cioccolata.build/Objects-normal/i386/CWHelloWorld.o /Users/chris/Projects/Mac/Cioccolata/CWHelloWorld.m normal i386 objective-c com.apple.compilers.gcc.4_2 cd /Users/chris/Projects/Mac/Cioccolata setenv LANG en_US.US-ASCII /Developer/usr/bin/gcc-4.2 -x objective-c -arch i386 -fmessage-length=0 -pipe -std=gnu99 -Wno-trigraphs -fpascal-strings -fasm-blocks -O0 -Wreturn-type -Wunused-variable -isysroot /Developer/SDKs/MacOSX10.5.sdk -mfix-and-continue -mmacosx-version-min=10.5 -gdwarf-2 -iquote /Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/Cioccolata-generated-files.hmap -I/Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/Cioccolata-own-target-headers.hmap -I/Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/Cioccolata-all-target-headers.hmap 
-iquote /Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/Cioccolata-project-headers.hmap -F/Users/chris/Projects/Mac/Cioccolata/build/Debug -I/Users/chris/Projects/Mac/Cioccolata/build/Debug/include -I/Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/DerivedSources/i386 -I/Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/DerivedSources -include /var/folders/Xy/Xy-bvnxtFpiYBQPED0dK1++++TI/-Caches-/com.apple.Xcode.501/SharedPrecompiledHeaders/Cioccolata_Prefix-dololiigmwjzkgenggebqtpvbauu/Cioccolata_Prefix.pch -c /Users/chris/Projects/Mac/Cioccolata/CWHelloWorld.m -o /Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/Objects-normal/i386/CWHelloWorld.o Ld /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Versions/A/Cioccolata normal i386 cd /Users/chris/Projects/Mac/Cioccolata setenv MACOSX_DEPLOYMENT_TARGET 10.5 /Developer/usr/bin/gcc-4.2 -arch i386 -dynamiclib -isysroot /Developer/SDKs/MacOSX10.5.sdk -L/Users/chris/Projects/Mac/Cioccolata/build/Debug -F/Users/chris/Projects/Mac/Cioccolata/build/Debug -filelist /Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/Objects-normal/i386/Cioccolata.LinkFileList -install_name @executable_path/../Frameworks/Cioccolata.framework/Versions/A/Cioccolata -mmacosx-version-min=10.5 -framework Foundation -single_module -compatibility_version 1 -current_version 1 -o /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Versions/A/Cioccolata Touch /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework cd /Users/chris/Projects/Mac/Cioccolata /usr/bin/touch -c /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework Build CioccolataTest of project CioccolataTest with configuration Debug ProcessInfoPlistFile /Users/chris/Projects/Mac/CioccolataTest/build/Debug/CioccolataTest.webapp/Contents/Info.plist Info.plist cd /Users/chris/Projects/Mac/CioccolataTest builtin-infoPlistUtility Info.plist -expandbuildsettings -platform macosx -o /Users/chris/Projects/Mac/CioccolataTest/build/Debug/CioccolataTest.webapp/Contents/Info.plist PBXCp build/Debug/CioccolataTest.webapp/Contents/Frameworks/Cioccolata.framework /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework cd /Users/chris/Projects/Mac/CioccolataTest /Developer/Library/PrivateFrameworks/DevToolsCore.framework/Resources/pbxcp -exclude .DS_Store -exclude CVS -exclude .svn -resolve-src-symlinks /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework /Users/chris/Projects/Mac/CioccolataTest/build/Debug/CioccolataTest.webapp/Contents/Frameworks CopyStringsFile /Users/chris/Projects/Mac/CioccolataTest/build/Debug/CioccolataTest.webapp/Contents/Resources/English.lproj/InfoPlist.strings English.lproj/InfoPlist.strings cd /Users/chris/Projects/Mac/CioccolataTest setenv ICONV /usr/bin/iconv /Developer/Library/Xcode/Plug-ins/CoreBuildTasks.xcplugin/Contents/Resources/copystrings --validate --inputencoding utf-8 --outputencoding UTF-16 English.lproj/InfoPlist.strings --outdir /Users/chris/Projects/Mac/CioccolataTest/build/Debug/CioccolataTest.webapp/Contents/Resources/English.lproj CompileC build/CioccolataTest.build/Debug/CioccolataTest.build/Objects-normal/i386/main.o main.m normal i386 objective-c com.apple.compilers.gcc.4_2 cd /Users/chris/Projects/Mac/CioccolataTest setenv LANG en_US.US-ASCII /Developer/usr/bin/gcc-4.2 -x objective-c -arch i386 -fmessage-length=0 
-pipe -std=gnu99 -Wno-trigraphs -fpascal-strings -fasm-blocks -O0 -Wreturn-type -Wunused-variable -isysroot /Developer/SDKs/MacOSX10.5.sdk -mfix-and-continue -mmacosx-version-min=10.5 -gdwarf-2 -iquote /Users/chris/Projects/Mac/CioccolataTest/build/CioccolataTest.build/Debug/CioccolataTest.build/CioccolataTest-generated-files.hmap -I/Users/chris/Projects/Mac/CioccolataTest/build/CioccolataTest.build/Debug/CioccolataTest.build/CioccolataTest-own-target-headers.hmap -I/Users/chris/Projects/Mac/CioccolataTest/build/CioccolataTest.build/Debug/CioccolataTest.build/CioccolataTest-all-target-headers.hmap -iquote /Users/chris/Projects/Mac/CioccolataTest/build/CioccolataTest.build/Debug/CioccolataTest.build/CioccolataTest-project-headers.hmap -F/Users/chris/Projects/Mac/CioccolataTest/build/Debug -I/Users/chris/Projects/Mac/CioccolataTest/build/Debug/include -I/Users/chris/Projects/Mac/CioccolataTest/build/CioccolataTest.build/Debug/CioccolataTest.build/DerivedSources/i386 -I/Users/chris/Projects/Mac/CioccolataTest/build/CioccolataTest.build/Debug/CioccolataTest.build/DerivedSources -include /Users/chris/Projects/Mac/CioccolataTest/prefix.pch -c /Users/chris/Projects/Mac/CioccolataTest/main.m -o /Users/chris/Projects/Mac/CioccolataTest/build/CioccolataTest.build/Debug/CioccolataTest.build/Objects-normal/i386/main.o In file included from <command-line>:0: /Users/chris/Projects/Mac/CioccolataTest/prefix.pch:13:35: error: Cioccolata/Cioccolata.h: No such file or directory /Users/chris/Projects/Mac/CioccolataTest/main.m: In function 'main': /Users/chris/Projects/Mac/CioccolataTest/main.m:13: error: 'CWHelloWorld' undeclared (first use in this function) /Users/chris/Projects/Mac/CioccolataTest/main.m:13: error: (Each undeclared identifier is reported only once /Users/chris/Projects/Mac/CioccolataTest/main.m:13: error: for each function it appears in.) /Users/chris/Projects/Mac/CioccolataTest/main.m:13: error: 'hello' undeclared (first use in this function)
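    A likely fix, sketched under the assumption that the app target simply cannot see the build-products directory at compile time: header lookup for #import <Cioccolata/Cioccolata.h> is driven by the compiler's framework search paths, not by the Copy Files phase (the copy into Contents/Frameworks only matters at runtime, which is also why @executable_path/../Frameworks is already correct here). Adding the built-products directory to the app target's FRAMEWORK_SEARCH_PATHS, either in the target's build settings or in an .xcconfig:

    FRAMEWORK_SEARCH_PATHS = $(inherited) "$(BUILT_PRODUCTS_DIR)"

    should let the compiler resolve the master header against the freshly built framework (build/Debug in this transcript) without the /Library/Frameworks workaround. FRAMEWORK_SEARCH_PATHS and BUILT_PRODUCTS_DIR are standard Xcode build settings; whether BUILT_PRODUCTS_DIR is the right directory for this particular cross-project layout is an assumption.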

    Read the article

  • C# ASP.NET AJAX CascadingDropDown SelectedValue property problem

    - by Eyla
    Greetings, I have a problem to use selected value propriety of CascadingDropDown. I have 3 asp dropdown controls with ajax CascadingDropDown for each one of them. I have no problem to bind data to the 3 CascadingDropDown but my problem is to rebind CascadingDropDown. simply what I want to do is to select a record from Gridview which has the selected values for the CascadingDropDown that I want to pass then rebind the CascadingDropDown with selected value. I'm posting my code down which include: 1-ASP.NET code. 2-Code behind to handle selected record from grid view. 3- web servisice that handle binding data to the 3 CascadingDropDown. please advice how to rebind data to CascadingDropDown with selected value. by the way I used selected value proprety as showning in my code but it is not working and there is no error. Thank you, ........................ ASP.NET code ........................ <%@ Page Title="" Language="C#" MasterPageFile="~/Master.Master" AutoEventWireup="true" CodeBehind="WebForm1.aspx.cs" Inherits="IMAM_APPLICATION.WebForm1" %> <%@ Register Assembly="AjaxControlToolkit" Namespace="AjaxControlToolkit" TagPrefix="cc1" %> <asp:Content ID="Content2" ContentPlaceHolderID="ContentPlaceHolder1" runat="server"> <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False" DataKeyNames="idcontact_info" DataSourceID="ObjectDataSource1" onselectedindexchanged="GridView1_SelectedIndexChanged"> <Columns> <asp:CommandField ShowSelectButton="True" /> <asp:BoundField DataField="idcontact_info" HeaderText="idcontact_info" InsertVisible="False" ReadOnly="True" SortExpression="idcontact_info" /> <asp:BoundField DataField="Work_Field" HeaderText="Work_Field" SortExpression="Work_Field" /> <asp:BoundField DataField="Occupation" HeaderText="Occupation" SortExpression="Occupation" /> <asp:BoundField DataField="sub_Occupation" HeaderText="sub_Occupation" SortExpression="sub_Occupation" /> </Columns> </asp:GridView> <asp:Label ID="lbl" runat="server" Text="Label"></asp:Label> <asp:ObjectDataSource ID="ObjectDataSource1" runat="server" DeleteMethod="Delete" InsertMethod="Insert" OldValuesParameterFormatString="original_{0}" SelectMethod="GetData" TypeName="IMAM_APPLICATION.DSContactTableAdapters.contact_infoTableAdapter" UpdateMethod="Update"> <DeleteParameters> <asp:Parameter Name="Original_idcontact_info" Type="Int32" /> </DeleteParameters> <UpdateParameters> <asp:Parameter Name="Work_Field" Type="String" /> <asp:Parameter Name="Occupation" Type="String" /> <asp:Parameter Name="sub_Occupation" Type="String" /> <asp:Parameter Name="Original_idcontact_info" Type="Int32" /> </UpdateParameters> <InsertParameters> <asp:Parameter Name="Work_Field" Type="String" /> <asp:Parameter Name="Occupation" Type="String" /> <asp:Parameter Name="sub_Occupation" Type="String" /> </InsertParameters> </asp:ObjectDataSource> <asp:DropDownList ID="cmbWorkField" runat="server" Style="top: 715px; left: 180px; position: absolute; height: 22px; width: 126px"> </asp:DropDownList> <asp:DropDownList runat="server" ID="cmbOccupation" Style="top: 745px; left: 180px; position: absolute; height: 22px; width: 77px"> </asp:DropDownList> <asp:DropDownList ID="cmbSubOccup" runat="server" style="position:absolute; top: 775px; left: 180px;"> </asp:DropDownList> <cc1:CascadingDropDown ID="cmbWorkField_CascadingDropDown" runat="server" TargetControlID="cmbWorkField" Category="WorkField" LoadingText="Please Wait ..." PromptText="Select Wor kField ..." 
ServiceMethod="GetWorkField" ServicePath="ServiceTags.asmx"> </cc1:CascadingDropDown> <cc1:CascadingDropDown ID="cmbOccupation_CascadingDropDown" runat="server" TargetControlID="cmbOccupation" Category="Occup" LoadingText="Please wait..." PromptText="Select Occup ..." ServiceMethod="GetOccup" ServicePath="ServiceTags.asmx" ParentControlID="cmbWorkField"> </cc1:CascadingDropDown> <cc1:CascadingDropDown ID="cmbSubOccup_CascadingDropDown" runat="server" Category="SubOccup" Enabled="True" LoadingText="Please Wait..." ParentControlID="cmbOccupation" PromptText="Select Sub Occup" ServiceMethod="GetSubOccup" ServicePath="ServiceTags.asmx" TargetControlID="cmbSubOccup"> </cc1:CascadingDropDown> </asp:Content> ...................................................... C# code behind ...................................................... protected void GridView1_SelectedIndexChanged(object sender, EventArgs e) { string strg = GridView1.SelectedDataKey["idcontact_info"].ToString(); int index = Convert.ToInt32(GridView1.SelectedDataKey["idcontact_info"].ToString()); //txtSearch.Text = GridView1.SelectedIndex.ToString(); // txtSearch.Text = GridView1.SelectedDataKey["idcontact_info"].ToString(); DSContactTableAdapters.contact_infoTableAdapter GetByIDAdapter = new DSContactTableAdapters.contact_infoTableAdapter(); DSContact.contact_infoDataTable ByID = GetByIDAdapter.GetDataByID(index); //DSSearch.contact_infoDataTable FirstName = FirstNameAdapter.GetDataByFirstNameList(prefixText); foreach (DataRow dr in ByID.Rows) { lbl.Text = dr["Work_Field"].ToString() + "....." + dr["Occupation"].ToString() + "....." + dr["sub_Occupation"].ToString(); cmbWorkField_CascadingDropDown.SelectedValue = dr["Work_Field"].ToString(); cmbOccupation_CascadingDropDown.SelectedValue = dr["Occupation"].ToString(); cmbSubOccup_CascadingDropDown.SelectedValue = dr["sub_Occupation"].ToString(); } } ....................................................... web Service ....................................................... 
[WebMethod] public CascadingDropDownNameValue[] GetWorkField(string knownCategoryValues, string category) { //dsCarsTableAdapters.CarsTableAdapter makeAdapter = new dsCarsTableAdapters.CarsTableAdapter(); //dsCars.CarsDataTable makes = makeAdapter.GetAllCars(); DSContactTableAdapters.tag_work_fieldTableAdapter GetWorkFieldAdapter = new DSContactTableAdapters.tag_work_fieldTableAdapter(); DSContact.tag_work_fieldDataTable WorkFields = GetWorkFieldAdapter.GetDataByGetWorkField(); List<CascadingDropDownNameValue> values = new List<CascadingDropDownNameValue>(); foreach (DataRow dr in WorkFields) { string Work_Field = (string)dr["work_Field_name"]; int idtag_work_field = (int)dr["idtag_work_field"]; values.Add(new CascadingDropDownNameValue(Work_Field, idtag_work_field.ToString())); } return values.ToArray(); } [WebMethod] public CascadingDropDownNameValue[] GetOccup(string knownCategoryValues, string category) { StringDictionary kv = CascadingDropDown.ParseKnownCategoryValuesString(knownCategoryValues); int idtag_work_field; if (!kv.ContainsKey("WorkField") || !Int32.TryParse(kv["WorkField"], out idtag_work_field)) { return null; } //dsCarModelsTableAdapters.CarModelsTableAdapter modelAdapter = new dsCarModelsTableAdapters.CarModelsTableAdapter(); //dsCarModels.CarModelsDataTable models = modelAdapter.GetModelsByCarId(makeId); DSContactTableAdapters.tag_OccupTableAdapter GetOccupAdapter = new DSContactTableAdapters.tag_OccupTableAdapter(); DSContact.tag_OccupDataTable Occups = GetOccupAdapter.GetByOccup_ID(idtag_work_field); // List<CascadingDropDownNameValue> values = new List<CascadingDropDownNameValue>(); foreach (DataRow dr in Occups) { values.Add(new CascadingDropDownNameValue((string)dr["Occup_Name"], dr["idtag_Occup"].ToString())); } return values.ToArray(); } [WebMethod] public CascadingDropDownNameValue[] GetSubOccup(string knownCategoryValues, string category) { StringDictionary kv = CascadingDropDown.ParseKnownCategoryValuesString(knownCategoryValues); int idtag_Occup; if (!kv.ContainsKey("Occup") || !Int32.TryParse(kv["Occup"], out idtag_Occup)) { return null; } //dsModelColorsTableAdapters.ModelColorsTableAdapter adapter = new dsModelColorsTableAdapters.ModelColorsTableAdapter(); //dsModelColors.ModelColorsDataTable colors = adapter.GetColorsByModelId(colorId); DSContactTableAdapters.tag_Sub_OccupTableAdapter GetSubOccupAdapter = new DSContactTableAdapters.tag_Sub_OccupTableAdapter(); DSContact.tag_Sub_OccupDataTable SubOccups = GetSubOccupAdapter.GetDataBy_Sub_Occup_ID(idtag_Occup); List<CascadingDropDownNameValue> values = new List<CascadingDropDownNameValue>(); foreach (DataRow dr in SubOccups) { values.Add(new CascadingDropDownNameValue((string)dr["Sub_Occup_Name"], dr["idtag_Sub_Occup"].ToString())); } return values.ToArray(); }
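    A hedged guess at the cause, based only on the code above: the CascadingDropDown extender's SelectedValue must be set to the value half of the name/value pairs the web service returns, which here are the numeric IDs (idtag_work_field, idtag_Occup, idtag_Sub_Occup), while the grid row supplies the display names. Setting SelectedValue to dr["Work_Field"].ToString() therefore never matches any entry, and the extender silently ignores it, which matches the "no error" symptom. A minimal sketch, where GetWorkFieldIdByName and friends are hypothetical lookups you would add (for example, extra table-adapter queries mapping a name to its ID):

    // Sketch: feed the extenders the ID values the services emit, not the display names.
    cmbWorkField_CascadingDropDown.SelectedValue = GetWorkFieldIdByName(dr["Work_Field"].ToString());
    cmbOccupation_CascadingDropDown.SelectedValue = GetOccupIdByName(dr["Occupation"].ToString());
    cmbSubOccup_CascadingDropDown.SelectedValue = GetSubOccupIdByName(dr["sub_Occupation"].ToString());

    The extender applies SelectedValue after it repopulates each list asynchronously, so setting it in SelectedIndexChanged is otherwise fine.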

    Read the article

  • C sockets, chat server and client, problem echoing back.

    - by wretrOvian
    Hi This is my chat server : #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <sys/types.h> #include <sys/socket.h> #include <netdb.h> #include <string.h> #define LISTEN_Q 20 #define MSG_SIZE 1024 struct userlist { int sockfd; struct sockaddr addr; struct userlist *next; }; int main(int argc, char *argv[]) { // declare. int listFD, newFD, fdmax, i, j, bytesrecvd; char msg[MSG_SIZE], ipv4[INET_ADDRSTRLEN]; struct addrinfo hints, *srvrAI; struct sockaddr_storage newAddr; struct userlist *users, *uptr, *utemp; socklen_t newAddrLen; fd_set master_set, read_set; // clear sets FD_ZERO(&master_set); FD_ZERO(&read_set); // create a user list users = (struct userlist *)malloc(sizeof(struct userlist)); users->sockfd = -1; //users->addr = NULL; users->next = NULL; // clear hints memset(&hints, 0, sizeof hints); // prep hints hints.ai_family = AF_INET; hints.ai_socktype = SOCK_STREAM; hints.ai_flags = AI_PASSIVE; // get srver info if(getaddrinfo("localhost", argv[1], &hints, &srvrAI) != 0) { perror("* ERROR | getaddrinfo()\n"); exit(1); } // get a socket if((listFD = socket(srvrAI->ai_family, srvrAI->ai_socktype, srvrAI->ai_protocol)) == -1) { perror("* ERROR | socket()\n"); exit(1); } // bind socket bind(listFD, srvrAI->ai_addr, srvrAI->ai_addrlen); // listen on socket if(listen(listFD, LISTEN_Q) == -1) { perror("* ERROR | listen()\n"); exit(1); } // add listfd to master_set FD_SET(listFD, &master_set); // initialize fdmax fdmax = listFD; while(1) { // equate read_set = master_set; // run select if(select(fdmax+1, &read_set, NULL, NULL, NULL) == -1) { perror("* ERROR | select()\n"); exit(1); } // query all sockets for(i = 0; i <= fdmax; i++) { if(FD_ISSET(i, &read_set)) { // found active sockfd if(i == listFD) { // new connection // accept newAddrLen = sizeof newAddr; if((newFD = accept(listFD, (struct sockaddr *)&newAddr, &newAddrLen)) == -1) { perror("* ERROR | select()\n"); exit(1); } // resolve ip if(inet_ntop(AF_INET, &(((struct sockaddr_in *)&newAddr)->sin_addr), ipv4, INET_ADDRSTRLEN) == -1) { perror("* ERROR | inet_ntop()"); exit(1); } fprintf(stdout, "* Client Connected | %s\n", ipv4); // add to master list FD_SET(newFD, &master_set); // create new userlist component utemp = (struct userlist*)malloc(sizeof(struct userlist)); utemp->next = NULL; utemp->sockfd = newFD; utemp->addr = *((struct sockaddr *)&newAddr); // iterate to last node for(uptr = users; uptr->next != NULL; uptr = uptr->next) { } // add uptr->next = utemp; // update fdmax if(newFD > fdmax) fdmax = newFD; } else { // existing sockfd transmitting data // read if((bytesrecvd = recv(i, msg, MSG_SIZE, 0)) == -1) { perror("* ERROR | recv()\n"); exit(1); } msg[bytesrecvd] = '\0'; // find out who sent? for(uptr = users; uptr->next != NULL; uptr = uptr->next) { if(i == uptr->sockfd) break; } // resolve ip if(inet_ntop(AF_INET, &(((struct sockaddr_in *)&(uptr->addr))->sin_addr), ipv4, INET_ADDRSTRLEN) == -1) { perror("* ERROR | inet_ntop()"); exit(1); } // print fprintf(stdout, "%s\n", msg); // send to all for(j = 0; j <= fdmax; j++) { if(FD_ISSET(j, &master_set)) { if(send(j, msg, strlen(msg), 0) == -1) perror("* ERROR | send()"); } } } // handle read from client } // end select result handle } // end looping fds } // end while return 0; } This is my client: #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <sys/types.h> #include <sys/socket.h> #include <netdb.h> #include <string.h> #define MSG_SIZE 1024 int main(int argc, char *argv[]) { // declare. 
int newFD, bytesrecvd, fdmax; char msg[MSG_SIZE]; fd_set master_set, read_set; struct addrinfo hints, *srvrAI; // clear sets FD_ZERO(&master_set); FD_ZERO(&read_set); // clear hints memset(&hints, 0, sizeof hints); // prep hints hints.ai_family = AF_INET; hints.ai_socktype = SOCK_STREAM; hints.ai_flags = AI_PASSIVE; // get srver info if(getaddrinfo(argv[1], argv[2], &hints, &srvrAI) != 0) { perror("* ERROR | getaddrinfo()\n"); exit(1); } // get a socket if((newFD = socket(srvrAI->ai_family, srvrAI->ai_socktype, srvrAI->ai_protocol)) == -1) { perror("* ERROR | socket()\n"); exit(1); } // connect to server if(connect(newFD, srvrAI->ai_addr, srvrAI->ai_addrlen) == -1) { perror("* ERROR | connect()\n"); exit(1); } // add to master, and add keyboard FD_SET(newFD, &master_set); FD_SET(STDIN_FILENO, &master_set); // initialize fdmax if(newFD > STDIN_FILENO) fdmax = newFD; else fdmax = STDIN_FILENO; while(1) { // equate read_set = master_set; if(select(fdmax+1, &read_set, NULL, NULL, NULL) == -1) { perror("* ERROR | select()"); exit(1); } // check server if(FD_ISSET(newFD, &read_set)) { // read data if((bytesrecvd = recv(newFD, msg, MSG_SIZE, 0)) < 0 ) { perror("* ERROR | recv()"); exit(1); } msg[bytesrecvd] = '\0'; // print fprintf(stdout, "%s\n", msg); } // check keyboard if(FD_ISSET(STDIN_FILENO, &read_set)) { // read data from stdin if((bytesrecvd = read(STDIN_FILENO, msg, MSG_SIZE)) < 0) { perror("* ERROR | read()"); exit(1); } msg[bytesrecvd] = '\0'; // send if((send(newFD, msg, bytesrecvd, 0)) == -1) { perror("* ERROR | send()"); exit(1); } } } return 0; } The problem is with the part where the server recv()s data from an FD, then tries echoing back to all [send() ]; it just dies, w/o errors, and my client is left looping :(
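    A plausible explanation for the silent death, offered as a sketch rather than a verified diagnosis: send() on a socket whose peer has gone away raises SIGPIPE, and the default disposition of SIGPIPE terminates the process with no error output. The server also never handles recv() returning 0 (an orderly client close), so a disconnected client keeps showing up as readable in select() forever, and the echo loop sends to the listening socket as well as back to the sender. Note too that inet_ntop() needs <arpa/inet.h>, which is missing from both include lists. A minimal patch sketch against the server above:

    #include <arpa/inet.h>   /* for inet_ntop(); missing from the includes above */
    #include <signal.h>

    /* early in main(): a dead peer must not kill the whole server */
    signal(SIGPIPE, SIG_IGN);

    /* in the data branch: recv() == 0 means the client closed the connection */
    if ((bytesrecvd = recv(i, msg, MSG_SIZE, 0)) <= 0) {
        if (bytesrecvd < 0)
            perror("* ERROR | recv()");
        close(i);
        FD_CLR(i, &master_set);  /* stop watching the dead descriptor */
        continue;                /* skip the echo code for this fd */
    }
    msg[bytesrecvd] = '\0';

    /* echo loop: skip the listening socket and the sender itself */
    for (j = 0; j <= fdmax; j++) {
        if (FD_ISSET(j, &master_set) && j != listFD && j != i) {
            if (send(j, msg, strlen(msg), 0) == -1)
                perror("* ERROR | send()");
        }
    }

    The userlist node for the closed socket should also be unlinked; that is omitted here for brevity.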

    Read the article

  • SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008

    - by pinaldave
    Note: Please read the complete post before taking any action. This blog post discusses SHRINKFILE and TRUNCATE Log File. The email I received from a reader contains the following questionable code:

    “Hi Pinal, If you remember, my manager and I met you at TechEd in Bangalore. We just upgraded to SQL Server 2008. One of our jobs failed as it was using the following code. The error was:

    Msg 155, Level 15, State 1, Line 1 ‘TRUNCATE_ONLY’ is not a recognized BACKUP option.

    The code was:

    DBCC SHRINKFILE(TestDBLog, 1)
    BACKUP LOG TestDB WITH TRUNCATE_ONLY
    DBCC SHRINKFILE(TestDBLog, 1)
    GO

    I have modified that code to the following, and it works fine. But do you have any other suggestions at the moment?

    USE [master]
    GO
    ALTER DATABASE [TestDb] SET RECOVERY SIMPLE WITH NO_WAIT
    DBCC SHRINKFILE(TestDbLog, 1)
    ALTER DATABASE [TestDb] SET RECOVERY FULL WITH NO_WAIT
    GO

    Configuration of our server and system is as follows: [Removed not relevant data]“

    An email like this popping up early in the morning is an alarming one. Being dead busy, I had only a minute to reply, so I quickly wrote down the following note. (As I said, it was a single-minute email, so it is not completely accurate.) Here is that quick email, shared with all of you.

    “Hi Mr. DBA [removed the name], Thanks for your email. I suggest you stop this practice. There are many issues involved here, but I would list the two major ones:

    1) By setting the database to simple recovery, shrinking the file, and once again setting it to full recovery, you are in fact losing your valuable log data and will not be able to restore to a point in time. Not only that, you will also not be able to use subsequent log backups.

    2) Shrinking a file or database adds fragmentation.

    There are a lot of things you can do. First, start taking proper log backups using the following command instead of truncating the log and losing it frequently:

    BACKUP LOG [TestDb] TO DISK = N'C:\Backup\TestDb.bak'
    GO

    Remove the code that SHRINKs the file. If you are taking proper log backups, your log file usually (again, usually; special cases excluded) does not grow very big. There is much more to add here, but you can call me on my [phone number]. Before you call me, for accuracy I suggest you read Paul Randal’s two posts here and here and Brent Ozar’s post here. Kind Regards, Pinal Dave”

    I guess this post is very much clear to you. Please leave your comments here. As mentioned, this is a very big subject; I have just touched the tip of the iceberg and tried to point you to authentic knowledge.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)
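    To make the log-backup advice concrete, a small sketch (the path and naming scheme are assumptions; in practice you would schedule something like this via a SQL Server Agent job so each backup gets a unique file name):

    -- Timestamped transaction log backup, e.g. TestDb_20100612_143000.trn
    DECLARE @file NVARCHAR(256);
    SET @file = N'C:\Backup\TestDb_'
        + CONVERT(NVARCHAR(8), GETDATE(), 112) + N'_'
        + REPLACE(CONVERT(NVARCHAR(8), GETDATE(), 108), ':', N'') + N'.trn';
    BACKUP LOG [TestDb] TO DISK = @file;

    -- If the log still will not shrink after regular backups, ask the server why:
    SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = N'TestDb';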

    Read the article

  • This Week in Geek History: Gmail Goes Public, Deep Blue Wins at Chess, and the Birth of Thomas Edison

    - by Jason Fitzpatrick
    Every week we bring you a snapshot of the week in Geek History. This week we’re taking a peek at the public release of Gmail, the first time a computer won against a chess champion, and the birth of prolific inventor Thomas Edison.

    Gmail Goes Public

    It’s hard to believe that Gmail has only been around for seven years, and that for the first three years of its life it was invite-only. In 2007 Gmail dropped the invite-only requirement (although it would hold onto the “beta” tag for another two years) and opened its doors for anyone to grab a username @gmail. For what seemed like an entire epoch in internet history, Gmail had the slickest web-based email around, with constant innovations and features rolling out from Gmail Labs. Only in the last year or so have major overhauls at competitors like Hotmail and Yahoo! Mail brought other services up to speed. Can’t stand reading a Week in Geek History entry without a random fact? Here you go: gmail.com was originally owned by the Garfield franchise and ran a service that delivered Garfield comics to your email inbox. No, we’re not kidding.

    Deep Blue Proves Itself a Chess Master

    Deep Blue was a supercomputer constructed by IBM with the sole purpose of winning chess matches. In 2011, with the all-seeing eye of Google and the amazing computational abilities of engines like Wolfram Alpha, we simply take powerful computers immersed in our daily lives for granted. The 1996 match against reigning world chess champion Garry Kasparov, in which Deep Blue held its own but ultimately lost 4-2, shook a lot of people up. What did it mean if something considered as elegant and quintessentially human an endeavor as chess was so easy for a machine? A series of upgrades helped Deep Blue outright win a match against Kasparov in 1997. After the win, Deep Blue was retired and disassembled. Parts of Deep Blue are housed in the National Museum of American History and the Computer History Museum.

    Birth of Thomas Edison

    Thomas Alva Edison was one of the most prolific inventors in history and held an astounding 1,093 US patents. He is responsible for outright inventing or greatly refining major innovations in the history of world culture, including the phonograph, the movie camera, the carbon microphone used in nearly every telephone well into the 1980s, batteries for electric cars (a notion we’d take over a century to take seriously), voting machines, and of course his enormous contribution to electric distribution systems. Despite the role of scientist and inventor being largely unglamorous, Thomas Edison and his tumultuous relationship with fellow inventor Nikola Tesla have been fodder for everything from books, to comics, to movies, and video games.

    Other Notable Moments from This Week in Geek History

    Although we only shine the spotlight on three interesting facts a week in our Geek History column, that doesn’t mean we don’t have space to highlight a few more in passing. This week in Geek History:

    1971 – Apollo 14 returns to Earth after the third crewed lunar landing. 1974 – Birth of Robot Chicken creator Seth Green. 1986 – Death of Dune creator Frank Herbert. Goodnight Dune. 1997 – The Simpsons becomes the longest-running animated show on television.

    Have an interesting bit of geek trivia to share? Shoot us an email to [email protected] with “history” in the subject line and we’ll be sure to add it to our list of trivia.

    Read the article

  • Azure Grid Computing - Worker Roles as HPC Compute Nodes

    - by JoshReuben
    Overview

    ·        With HPC 2008 R2 SP1 you can add Azure worker roles as compute nodes in a local Windows HPC Server cluster.
    ·        The Windows Azure subscription is, like any other Azure service, charged for the time that the role instances are available, as well as for the compute and storage services that are used on the nodes.
    ·        Win-win? Azure charges the compute-hour cost (according to VM size) amortized over a month, so you save on purchasing compute node hardware. Microsoft wins because you need to purchase HPC to have a local head node for managing this compute cluster grid distributed in the cloud.
    ·        Blob storage is used to hold the input and output files of each job. I can see how Parametric Sweep HPC jobs can be supported (where the same job is run multiple times on each node against different input units), but not MPI.NET (where different HPC Job instances function as coordinated agents and conduct master-slave inter-process communication), unless Azure is somehow tunneling MPI communication through inter-WorkerRole Azure Queues.
    ·        This is not the end of the story for Azure Grid Computing. If MS requires you to purchase a local HPC license (and administer it), what's to stop a 3rd party from doing this and exposing the HPC WCF Broker Service to you for managing compute nodes? If MS doesn’t provide the head node as a service, someone else will!

    Process

    ·        Requires creation of a worker node template that specifies a connection to an existing subscription for Windows Azure, plus an availability policy for the worker nodes.
    ·        After worker nodes are added to the cluster, you can start them, which provisions the Windows Azure role instances, and then bring them online to run HPC cluster jobs.
    ·        A Windows Azure worker role instance runs an HPC-compatible Azure guest operating system, which runs on the VMs that host your service. The guest operating system is updated monthly. You can choose to upgrade the guest OS for your service automatically each time an update is released; all role instances defined by your service will run on the guest operating system version that you specify. See Windows Azure Guest OS Releases and SDK Compatibility Matrix (http://go.microsoft.com/fwlink/?LinkId=190549).
    ·        Use the hpcpack command to upload file packages and install files to run on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).

    Requirements

    ·        This assumes you have an Azure subscription and the HPC head node installed and configured.
    ·        Install HPC Pack 2008 R2 SP1. See Microsoft HPC Pack 2008 R2 Service Pack 1 Release Notes (http://go.microsoft.com/fwlink/?LinkID=202812).
    ·        Configure the head node to connect to the Internet; connectivity is provided by the connection of the head node to the enterprise network. You may need to configure a proxy client on the head node. Any cluster network topology (1-5) is supported.
    ·        Configure the firewall: allow outbound TCP traffic on the following ports: 80, 443, 5901, 5902, 7998, 7999.
    ·        Note: HPC Server uses Admin Mode (Elevated Privileges) in Windows Azure to give the service administrator of the subscription the necessary privileges to initialize HPC cluster services on the worker nodes.
    ·        Obtain a Windows Azure subscription certificate: the Windows Azure subscription must be configured with a public subscription (API) certificate, a valid X.509 certificate with a key size of at least 2048 bits.
Generate a self-sign certificate & upload a .cer file to the Windows Azure Portal Account page > Manage my API Certificates link. see Using the Windows Azure Service Management API (http://go.microsoft.com/fwlink/?LinkId=205526). ·        import the certificate with an associated private key on the HPC cluster head node - into the trusted root store of the local computer account. Obtain Windows Azure Connection Information for HPC Server ·        required for each worker node template ·        copy from azure portal - Get from: navigation pane > Hosted Services > Storage Accounts & CDN ·        Subscription ID - a 32-char hex string in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. In Properties pane. ·        Subscription certificate thumbprint - a 40-char hex string (you need to remove spaces). In Management Certificates > Properties pane. ·        Service name - the value of <ServiceName> configured in the public URL of the service (http://<ServiceName>.cloudapp.net). In Hosted Services > Properties pane. ·        Blob Storage account name - the value of <StorageAccountName> configured in the public URL of the account (http://<StorageAccountName>.blob.core.windows.net). In Storage Accounts > Properties pane. Import the Azure Subscription Certificate on the HPC Head Node ·        enable the services for Windows HPC Server  to authenticate properly with the Windows Azure subscription. ·        use the Certificates MMC snap-in to import the certificate to the Trusted Root Certification Authorities store of the local computer account. The certificate must be in PFX format (.pfx or .p12 file) with a private key that is protected by a password. ·        see Certificates (http://go.microsoft.com/fwlink/?LinkId=163918). ·        To open the certificates snapin: Run > mmc. File > Add/Remove Snap-in > certificates > Computer account > Local Computer ·        To import the certificate via wizard - Certificates > Trusted Root Certification Authorities > Certificates > All Tasks > Import ·        After the certificate is imported, it appears in the details pane in the Certificates snap-in. You can open the certificate to check its status. Configure a Proxy Client on the HPC Head Node ·        the following Windows HPC Server services must be able to communicate over the Internet (through the firewall) with the services for Windows Azure: HPCManagement, HPCScheduler, HPCBrokerWorker. ·        Create a Windows Azure Worker Node Template ·        Edit HPC node templates in HPC Node Template Editor. ·        Specify: 1) Windows Azure subscription connection info (unique service name) for adding a set of worker nodes to the cluster + 2)worker node availability policy – rules for deploying / removing worker role instances in Windows Azure o   HPC Cluster Manager > Configuration > Navigation Pane > Node Templates > Actions pane > New à Create Node Template Wizard or Edit à Node Template Editor o   Choose Node Template Type page - Windows Azure worker node template o   Specify Template Name page – template name & description o   Provide Connection Information page – Azure Subscription ID (text) & Subscription certificate (browse) o   Provide Service Information page - Azure service name + blob storage account name (optionally click Retrieve Connection Information to get list of available from azure – possible LRT). 
o   Configure Azure Availability Policy page - how Windows Azure worker nodes start / stop (online / offline the worker role instance -  add / remove) – manual / automatic o   for automatic - In the Configure Windows Azure Worker Availability Policy dialog -select days and hours for worker nodes to start / stop. ·        To validate the Windows Azure connection information, on the template's Connection Information tab > Validate connection information. ·        You can upload a file package to the storage account that is specified in the template - eg upload application or service files that will run on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). Add Azure Worker Nodes to the HPC Cluster ·        Use the Add Node Wizard – specify: 1) the worker node template, 2) The number of worker nodes   (within the quota of role instances in the azure subscription), and 3)           The VM size of the worker nodes : ExtraSmall, Small, Medium, Large, or ExtraLarge.  ·        to add worker nodes of different sizes, must run the Add Node Wizard separately for each size. ·        All worker nodes that are added to the cluster by using a specific worker node template define a set of worker nodes that will be deployed and managed together in Windows Azure when you start the nodes. This includes worker nodes that you add later by using the worker node template and, if you choose, worker nodes of different sizes. You cannot start, stop, or delete individual worker nodes. ·        To add Windows Azure worker nodes o   In HPC Cluster Manager: Node Management > Actions pane > Add Node à Add Node Wizard o   Select Deployment Method page - Add Azure Worker nodes o   Specify New Nodes page - select a worker node template, specify the number and size of the worker nodes ·        After you add worker nodes to the cluster, they are in the Not-Deployed state, and they have a health state of Unapproved. Before you can use the worker nodes to run jobs, you must start them and then bring them online. ·        Worker nodes are numbered consecutively in a naming series that begins with the root name AzureCN – this is non-configurable. Deploying Windows Azure Worker Nodes ·        To deploy the role instances in Windows Azure - start the worker nodes added to the HPC cluster and bring the nodes online so that they are available to run cluster jobs. This can be configured in the HPC Azure Worker Node Template – Azure Availability Policy -  to be automatic or manual. ·        The Start, Stop, and Delete actions take place on the set of worker nodes that are configured by a specific worker node template. You cannot perform one of these actions on a single worker node in a set. You also cannot perform a single action on two sets of worker nodes (specified by two different worker node templates). ·        ·          Starting a set of worker nodes deploys a set of worker role instances in Windows Azure, which can take some time to complete, depending on the number of worker nodes and the performance of Windows Azure. ·        To start worker nodes manually and bring them online o   In HPC Node Management > Navigation Pane > Nodes > List / Heat Map view - select one or more worker nodes. o   Actions pane > Start – in the Start Azure Worker Nodes dialog, select a node template. o   the state of the worker nodes changes from Not Deployed to track the provisioning progress – worker node Details Pane > Provisioning Log tab. 
o   If there were errors during the provisioning of one or more worker nodes, the state of those nodes is set to Unknown and the node health is set to Unapproved. To determine the reason for the failure, review the provisioning logs for the nodes. o   After a worker node starts successfully, the node state changes to Offline. To bring the nodes online, select the nodes that are in the Offline state > Bring Online. ·        Troubleshooting o   check node template. o   use telnet to test connectivity: telnet <ServiceName>.cloudapp.net 7999 o   check node status - Deployment status information appears in the service account information in the Windows Azure Portal - HPC queries this -  see  node status information for any failed nodes in HPC Node Management. ·        When role instances are deployed, file packages that were previously uploaded to the storage account using the hpcpack command are automatically installed. You can also upload file packages to storage after the worker nodes are started, and then manually install them on the worker nodes. see hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514). ·        to remove a set of role instances in Windows Azure - stop the nodes by using HPC Cluster Manager (apply the Stop action). This deletes the role instances from the service and changes the state of the worker nodes in the HPC cluster to Not Deployed. ·        Each time that you start a set of worker nodes, two proxy role instances (size Small) are configured in Windows Azure to facilitate communication between HPC Cluster Manager and the worker nodes. The proxy role instances are not listed in HPC Cluster Manager after the worker nodes are added. However, the instances appear in the Windows Azure Portal. The proxy role instances incur charges in Windows Azure along with the worker node instances, and they count toward the quota of role instances in the subscription.
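    For the firewall requirement listed above, a convenience sketch (this assumes the built-in Windows Firewall on the head node; the rule name is arbitrary, and outbound traffic may already be allowed by your default policy, in which case no rule is needed):

    netsh advfirewall firewall add rule name="HPC Azure worker traffic" dir=out action=allow protocol=TCP remoteport=80,443,5901,5902,7998,7999

    Combined with the telnet check above (telnet <ServiceName>.cloudapp.net 7999), this covers the two connectivity failures most likely to leave worker nodes stuck in the Unknown/Unapproved state.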

    Read the article

  • SQL SERVER – Attach mdf file without ldf file in Database

    - by pinaldave
    Background Story: One of my friends recently called and asked if I had spare time to look at his database and give him performance tuning advice. Because I had some free time to help him out, I said yes. I asked him to send me the details of his database structure and some sample data. Since his database is at a very early stage and still small, he said he would rather send me the complete database. My response was, “Sure! In that case, take a backup of the database and send it to me. I will restore it on my computer and play with it.” He did send me his database; however, his method made me write this quick note. Instead of taking a full backup of the database and sending it to me, he sent me only the .mdf (primary database file). In fact, I had asked for a complete backup (I wanted to review file groups, files, as well as a few other details). Upon calling my friend, I found that he was not available. So he left me with only a .mdf file. As I had some extra time, I decided to check out his database structure and get back to him regarding the full backup whenever I could get in touch with him again.

    Technical Talk: If the database was shut down gracefully and there was no abrupt shutdown (power outages, pulled plugs, machine crashes, or any other reason), it is possible (though there is no guarantee) to attach the .mdf file alone to the server. Please note that there can be many more reasons for a database not getting attached or restored. In my case, the database had a clean shutdown and there were no complex issues. I was able to recreate a transaction log file and attach the received .mdf file. There are multiple ways of doing this; I am listing all of them here. Before using any of them, please consult the domain expert in your company or industry. Also, never attempt this on a live/production server without the presence of a disaster recovery expert.

    USE [master]
    GO
    -- Method 1: I use this method
    EXEC sp_attach_single_file_db @dbname='TestDb',
    @physname=N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\TestDb.mdf'
    GO
    -- Method 2:
    CREATE DATABASE TestDb ON
    (FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\TestDb.mdf')
    FOR ATTACH_REBUILD_LOG
    GO

    With Method 2, if one or more log files are missing, they are recreated. There is one more method, demonstrated below, which I have not used myself before. According to Books Online, it will work only if a single log file is missing. If more than one log file is involved, all of them must undergo the same procedure.

    -- Method 3:
    CREATE DATABASE TestDb ON
    ( FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\TestDb.mdf')
    FOR ATTACH
    GO

    Please read Books Online in depth and consult DR experts before working on a production server. In my case, the above syntax just worked fine, as the database was clean when it was detached. Feel free to write about your opinions and experiences, for it will help the IT community learn more from your suggestions and skills.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)
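    One cautious follow-up worth adding, as a sketch (not part of the original workflow): after any attach that rebuilt the log, verify the database and confirm the new log file before trusting the copy.

    -- Verify integrity of the freshly attached database
    DBCC CHECKDB ('TestDb') WITH NO_INFOMSGS, ALL_ERRORMSGS;
    -- Confirm the rebuilt log file exists and check its size
    SELECT name, type_desc, physical_name, size * 8 / 1024 AS size_mb
    FROM TestDb.sys.database_files;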

    Read the article

  • Windows Presentation Foundation 4.5 Cookbook Review

    - by Ricardo Peres
    As promised, here’s my review of Windows Presentation Foundation 4.5 Cookbook, which Packt Publishing kindly made available to me. It is an introductory book, targeted at WPF newcomers or users with little experience, following the typical recipes or cookbook style. Like all Packt Publishing books on development, each recipe comes with sample code that is self-sufficient for understanding the concepts it tries to illustrate. It starts on chapter 1 by introducing the most important concepts: the XAML language itself, what can be declared in XAML and how to do it, what dependency and attached properties are, as well as markup extensions and events, which should give readers a much-needed introduction to how WPF works and how to do basic stuff. It moves on to resources in chapter 2, which also makes sense, since they are such an important concept in WPF. Next, in chapter 3, come the panels used for laying out controls on the screen; all of the out-of-the-box panels are described with typical use cases. Controls come next in chapter 4; the difference between elements and controls is introduced, as well as content controls, headered controls and items controls, and all standard controls are presented. The book shows how to change the way they look by using templates. The next chapter, 5, talks about top-level windows and the WPF application object: how to access startup arguments, how to set the main window, using standard dialogs, and there’s even a sample on how to have an irregularly-shaped window. Chapter 6 covers one of the most important concepts in WPF: data binding. All common scenarios are introduced: the binding modes, directions, triggers, etc. It talks about the INotifyPropertyChanged interface and how to use it for notifying data binding subscribers of changes in data sources. Data templates and selectors are also covered, as are value converters and data triggers. Examples include master-detail views, sorting, grouping and filtering collections, and binding trees and grids. Last, it covers validation rules and error templates. Chapter 7 talks about the current trend in WPF development, the Model View View-Model (MVVM) pattern. This is a well-known pattern for connecting the user interface to actions, and it is explained competently. A typical implementation is presented, which also introduces the command pattern used throughout WPF. A complete application using MVVM is presented from start to finish, including typical features such as undo. Style and layout are covered in chapter 8: why and how to use styles, applying them automatically, using the many types of triggers to change styles automatically, and using Expression Blend behaviors and templates. The next chapter, 9, is about graphics and animation programming. It explains how to create shapes, transform common UI elements, apply special effects and perform simple animations. The following chapter, 10, is about creating custom controls, either by deriving from UserControl or from an existing control or framework element class, applying custom templates to change the way the control looks. One useful example is a custom layout panel that arranges its children along a circumference. The final chapter, 11, is about multi-threaded programming and how one can integrate it with WPF. It includes how to invoke methods and properties on WPF classes from threads other than the main UI thread, using background tasks and timers, and even using the new C# 5.0 asynchronous operations.
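    For readers who haven’t yet met the interface the data-binding chapter leans on, a minimal INotifyPropertyChanged sketch (my own illustration, not code from the book):

    using System.ComponentModel;

    public class Customer : INotifyPropertyChanged
    {
        private string _name;

        public event PropertyChangedEventHandler PropertyChanged;

        public string Name
        {
            get { return _name; }
            set
            {
                if (_name == value) return;
                _name = value;
                // Tell bound WPF elements that Name changed so they re-read it.
                var handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs("Name"));
            }
        }
    }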
It’s an interesting book, like I said, mostly for newcomers. It provides a competent introduction to WPF, with examples that cover the most common scenarios and also give directions to more complex ones. I recommend it to everyone wishing to learn WPF.

    Read the article

  • Oracle SQL Developer Data Modeler: What Tables Aren’t In At Least One SubView?

    - by thatjeffsmith
    Organizing your data model makes the information easier to consume. One of the organizational tools provided by Oracle SQL Developer Data Modeler is the ‘SubView.’ In a nutshell, a SubView is a subset of your model.

    The Challenge: I’ve just created a model which represents my entire ____________ application. We’ll call it ‘residential lending.’ Instead of having all 100+ tables in a single model diagram, I want to break out the tables by module, e.g. appraisals, credit reports, work histories, customers, etc. I’ve spent several hours breaking out the tables to one or more SubViews, but I think I may have missed a few. Is there an easy way to see what tables aren’t in at least ONE SubView?

    The Answer: Yes, mostly. The ‘mostly’ comes about from the way I’m going to accomplish this task. It involves querying the SQL Developer Data Modeler Reporting Schema. So if you don’t have the Reporting Schema set up, you’ll need to do so. Got it? Good, let’s proceed. Before you start querying your Reporting Schema, you might need a data model for the actual reporting schema… meta-metadata! You could reverse engineer the data modeler reporting schema to a new data model, or you could just reference the PDFs in the \datamodeler\reports\Reporting Schema diagrams directory. Here’s a hint: it’s THIS one.

    The Query: Well, it’s actually going to be at least 2 queries. We need to get a list of distinct designs stored in your repository. For giggles, I’m going to get a listing including each version of the model, so I can query based on design and version, or in this case, the timestamp of when it was added to the repository. We’ll get that from the DMRS_DESIGNS table:

    SELECT DISTINCT design_name, design_ovid, date_published
    FROM DMRS_designs

    Then I’m going to feed the design_ovid down to a subquery for my child report:

    select name, count(distinct diagram_id)
    from DMRS_DIAGRAM_ELEMENTS
    where design_ovid = :dESIGN_OVID
    and type = 'Table'
    group by name
    having count(distinct diagram_id) < 2
    order by count(distinct diagram_id) desc

    Each diagram element has an entry in this table, so I need to filter on type=’Table.’ Each design has AT LEAST one diagram, the master diagram. So a relational table having only one listing in this table means it’s not in any SubViews. If you have overloaded object names, which is VERY possible, you’ll want to run the report off of ‘OBJECT_ID’, but then you’ll need to correlate that to the NAME, as I doubt you’re so intimate with your designs that you recognize the GUIDs. So I’m going to cheat and just stick with names, but I think you get the gist.

    My Model: Of my almost 90 tables, how many have I not added to at least one SubView? Now let’s run my report! Voila! My ‘BEER2’ table isn’t in any SubView! It says ‘1’ because the main model diagram counts as a view. So if the count came back as ‘2’, that would mean the table was in the main model diagram and in 1 SubView diagram. And I know what you’re thinking: what kind of residential lending program would have a table called ‘BEER2’? Let’s just say that my business model has some kinks to work out!
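    If you would rather not paste each design_ovid into the child report by hand, here is a hedged variation that folds the parent query in. It uses only the columns named above, and it deliberately ignores versioning, so add a filter on date_published if you keep several versions of a design in the repository:

    SELECT d.design_name, e.name AS table_name
    FROM dmrs_designs d
    JOIN dmrs_diagram_elements e ON e.design_ovid = d.design_ovid
    WHERE e.type = 'Table'
    GROUP BY d.design_name, e.name
    HAVING COUNT(DISTINCT e.diagram_id) < 2
    ORDER BY d.design_name, e.name;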

    Read the article

  • SQL SERVER – Error: Fix – Msg 208 – Invalid object name ‘dbo.backupset’ – Invalid object name ‘dbo.backupfile’

    - by pinaldave
    Just a day ago I got a very interesting email. Here it is (modified a bit to make it relevant to this blog post):

    “Pinal, We are facing a very strange issue. One of our queries related to backup files and backup sets suddenly stopped working in SSMS. It works fine in the application and in the stored procedure, but when we run it in SSMS it gives the following error:

    Msg 208, Level 16, State 1, Line 1 Invalid object name ‘dbo.backupfile’.

    Here are the queries we are trying to execute:

    SELECT name, database_name, backup_size, TYPE, compatibility_level, backup_set_id
    FROM dbo.backupset;
    SELECT logical_name, backup_size, file_type
    FROM dbo.backupfile;

    These queries give us details about the backup sets and backup files from when the backups were taken.”

    When I receive this kind of email, I usually have no immediate answer. The claim that it works in the stored procedure and in the application but not in SSMS gives me no real data. I first asked him to check two things: Was he connected to the correct server? His answer was yes. Did he have enough permissions? His answer was that he was logged in as an admin. This meant there was something more to it, and I asked him to send me a screenshot of his SSMS. He promptly sent it, and as soon as I received the screenshot I knew what was going on. Before I say anything, take a look at the screenshot yourself and see if you can figure out why the queries are not working in SSMS. Just to make your life a bit easier, I have already given a hint in the image.

    The answer is very simple: the database context is the master database. To execute the above two queries, the database context has to be msdb, because the tables backupset and backupfile belong to the msdb database only. Here are two workarounds or solutions to the above problem:

    1) Change context to MSDB. The above two queries will not error out and will give the accurate desired result when run as follows:

    USE msdb
    GO
    SELECT name, database_name, backup_size, TYPE, compatibility_level, backup_set_id
    FROM dbo.backupset;
    SELECT logical_name, backup_size, file_type
    FROM dbo.backupfile;

    2) Prefix the query with msdb. There are cases where the above script is used in a stored procedure or as part of a bigger query, and it is not possible to change the context of the whole query to a specific database. Use three-part naming and prefix the tables with msdb:

    SELECT name, database_name, backup_size, TYPE, compatibility_level, backup_set_id
    FROM msdb.dbo.backupset;
    SELECT logical_name, backup_size, file_type
    FROM msdb.dbo.backupfile;

    A very simple solution, but one that sometimes keeps people wondering for an answer.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
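    As a small extension of workaround 2, a sketch that is handy precisely because the three-part names make it runnable from any database context (it uses the columns shown above plus backup_finish_date, another documented backupset column):

    -- Most recent full backup per database, runnable from master or anywhere else
    SELECT bs.database_name, MAX(bs.backup_finish_date) AS last_full_backup
    FROM msdb.dbo.backupset bs
    WHERE bs.type = 'D'   -- 'D' = full database backup
    GROUP BY bs.database_name
    ORDER BY last_full_backup;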

    Read the article

  • ASP.NET Server-side comments

    - by nmarun
    I believe a good number of you know about Server-side commenting. This blog is just like a revival to refresh your memories. When you write comments in your .aspx/.ascx files, people usually write them as: 1: <!-- This is a comment. --> To show that it actually makes a difference for using the server-side commenting technique, I’ve started a web application project and my default.aspx page looks like this: 1: <%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="ServerSideComment._Default" %> 2: <asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent"> 3: </asp:Content> 4: <asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent"> 5: <h2> 6: <!-- This is a comment --> 7: Welcome to ASP.NET! 8: </h2> 9: <p> 10: To learn more about ASP.NET visit <a href="http://www.asp.net" title="ASP.NET Website">www.asp.net</a>. 11: </p> 12: <p> 13: You can also find <a href="http://go.microsoft.com/fwlink/?LinkID=152368&amp;clcid=0x409" 14: title="MSDN ASP.NET Docs">documentation on ASP.NET at MSDN</a>. 15: </p> 16: </asp:Content> See the comment in line 6 and when I run the app, I can do a view source on the browser which shows up as: 1: <h2> 2: <!-- This is a comment --> 3: Welcome to ASP.NET! 4: </h2> Using Fiddler shows the page size as: Let’s change the comment style and use server-side commenting technique. 1: <h2> 2: <%-- This is a comment --%> 3: Welcome to ASP.NET! 4: </h2> Upon rendering, the view source looks like: 1: <h2> 2: 3: Welcome to ASP.NET! 4: </h2> Fiddler now shows the page size as: The difference is that client-side comments are ignored by the browser, but they are still sent down the pipe. With server-side comments, the compiler ignores everything inside this block. Visual Studio’s Text Editor toolbar also puts comments as server-side ones. If you want to give it a shot, go to your design page and press Ctrl+K, Ctrl+C on some selected text and you’ll see it commented in the server-side commenting style.
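    One more difference worth calling out, as a small sketch of my own (not from the numbered listings above): a server-side comment also prevents ASP.NET from creating the control inside it, whereas an HTML comment leaves the control fully alive on the server; it still runs its lifecycle, and only its rendered markup ends up hidden inside the comment. SomeExpensiveCall() below is a hypothetical page method used to make the point:

    <%-- <asp:Label ID="lblNeverCreated" runat="server" Text='<%# SomeExpensiveCall() %>' /> --%>
    <!-- <asp:Label ID="lblStillCreated" runat="server" Text="I still execute server-side" /> -->

    The first Label is never instantiated, so nothing inside it can run; the second is created and processed as usual, which is why server-side comments are the safer way to disable a control temporarily.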

    Read the article

  • How to read oom-killer syslog messages?

    - by Grant
    I have a Ubuntu 12.04 server which sometimes dies completely - no SSH, no ping, nothing until it is physically rebooted. After the reboot, I see in syslog that the oom-killer killed, well, pretty much everything. There's a lot of detailed memory usage information in them. How do I read these logs to see what caused the OOM issue? The server has far more memory than it needs, so it shouldn't be running out of memory. Oct 25 07:28:04 nldedip4k031 kernel: [87946.529511] oom_kill_process: 9 callbacks suppressed Oct 25 07:28:04 nldedip4k031 kernel: [87946.529514] irqbalance invoked oom-killer: gfp_mask=0x80d0, order=0, oom_adj=0, oom_score_adj=0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529516] irqbalance cpuset=/ mems_allowed=0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529518] Pid: 948, comm: irqbalance Not tainted 3.2.0-55-generic-pae #85-Ubuntu Oct 25 07:28:04 nldedip4k031 kernel: [87946.529519] Call Trace: Oct 25 07:28:04 nldedip4k031 kernel: [87946.529525] [] dump_header.isra.6+0x85/0xc0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529528] [] oom_kill_process+0x5c/0x80 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529530] [] out_of_memory+0xc5/0x1c0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529532] [] __alloc_pages_nodemask+0x72c/0x740 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529535] [] __get_free_pages+0x1c/0x30 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529537] [] get_zeroed_page+0x12/0x20 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529541] [] fill_read_buffer.isra.8+0xaa/0xd0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529543] [] sysfs_read_file+0x7d/0x90 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529546] [] vfs_read+0x8c/0x160 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529548] [] ? fill_read_buffer.isra.8+0xd0/0xd0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529550] [] sys_read+0x3d/0x70 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529554] [] sysenter_do_call+0x12/0x28 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529555] Mem-Info: Oct 25 07:28:04 nldedip4k031 kernel: [87946.529556] DMA per-cpu: Oct 25 07:28:04 nldedip4k031 kernel: [87946.529557] CPU 0: hi: 0, btch: 1 usd: 0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529558] CPU 1: hi: 0, btch: 1 usd: 0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529560] CPU 2: hi: 0, btch: 1 usd: 0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529561] CPU 3: hi: 0, btch: 1 usd: 0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529562] CPU 4: hi: 0, btch: 1 usd: 0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529563] CPU 5: hi: 0, btch: 1 usd: 0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529564] CPU 6: hi: 0, btch: 1 usd: 0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529565] CPU 7: hi: 0, btch: 1 usd: 0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529566] Normal per-cpu: Oct 25 07:28:04 nldedip4k031 kernel: [87946.529567] CPU 0: hi: 186, btch: 31 usd: 179 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529568] CPU 1: hi: 186, btch: 31 usd: 182 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529569] CPU 2: hi: 186, btch: 31 usd: 132 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529570] CPU 3: hi: 186, btch: 31 usd: 175 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529571] CPU 4: hi: 186, btch: 31 usd: 91 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529572] CPU 5: hi: 186, btch: 31 usd: 173 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529573] CPU 6: hi: 186, btch: 31 usd: 159 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529574] CPU 7: hi: 186, btch: 31 usd: 164 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529575] HighMem per-cpu: Oct 25 07:28:04 nldedip4k031 kernel: 
[87946.529576] CPU 0: hi: 186, btch: 31 usd: 165 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529577] CPU 1: hi: 186, btch: 31 usd: 183 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529578] CPU 2: hi: 186, btch: 31 usd: 185 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529579] CPU 3: hi: 186, btch: 31 usd: 138 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529580] CPU 4: hi: 186, btch: 31 usd: 155 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529581] CPU 5: hi: 186, btch: 31 usd: 104 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529582] CPU 6: hi: 186, btch: 31 usd: 133 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529583] CPU 7: hi: 186, btch: 31 usd: 170 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529586] active_anon:5523 inactive_anon:354 isolated_anon:0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529586] active_file:2815 inactive_file:6849119 isolated_file:0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529587] unevictable:0 dirty:449 writeback:10 unstable:0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529587] free:1304125 slab_reclaimable:104672 slab_unreclaimable:3419 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529588] mapped:2661 shmem:138 pagetables:313 bounce:0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529591] DMA free:4252kB min:780kB low:972kB high:1168kB active_anon:0kB inactive_anon:0kB active_file:4kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15756kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:11564kB slab_unreclaimable:4kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:1 all_unreclaimable? yes Oct 25 07:28:04 nldedip4k031 kernel: [87946.529594] lowmem_reserve[]: 0 869 32460 32460 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529599] Normal free:44052kB min:44216kB low:55268kB high:66324kB active_anon:0kB inactive_anon:0kB active_file:616kB inactive_file:568kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:890008kB mlocked:0kB dirty:0kB writeback:0kB mapped:4kB shmem:0kB slab_reclaimable:407124kB slab_unreclaimable:13672kB kernel_stack:992kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:2083 all_unreclaimable? yes Oct 25 07:28:04 nldedip4k031 kernel: [87946.529602] lowmem_reserve[]: 0 0 252733 252733 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529606] HighMem free:5168196kB min:512kB low:402312kB high:804112kB active_anon:22092kB inactive_anon:1416kB active_file:10640kB inactive_file:27395920kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:32349872kB mlocked:0kB dirty:1796kB writeback:40kB mapped:10640kB shmem:552kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:1252kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? 
no Oct 25 07:28:04 nldedip4k031 kernel: [87946.529609] lowmem_reserve[]: 0 0 0 0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529611] DMA: 6*4kB 6*8kB 6*16kB 5*32kB 5*64kB 4*128kB 2*256kB 1*512kB 0*1024kB 1*2048kB 0*4096kB = 4232kB Oct 25 07:28:04 nldedip4k031 kernel: [87946.529616] Normal: 297*4kB 180*8kB 119*16kB 73*32kB 67*64kB 47*128kB 35*256kB 13*512kB 5*1024kB 1*2048kB 1*4096kB = 44052kB Oct 25 07:28:04 nldedip4k031 kernel: [87946.529622] HighMem: 1*4kB 6*8kB 27*16kB 11*32kB 2*64kB 1*128kB 0*256kB 0*512kB 4*1024kB 1*2048kB 1260*4096kB = 5168196kB Oct 25 07:28:04 nldedip4k031 kernel: [87946.529627] 6852076 total pagecache pages Oct 25 07:28:04 nldedip4k031 kernel: [87946.529628] 0 pages in swap cache Oct 25 07:28:04 nldedip4k031 kernel: [87946.529629] Swap cache stats: add 0, delete 0, find 0/0 Oct 25 07:28:04 nldedip4k031 kernel: [87946.529630] Free swap = 3998716kB Oct 25 07:28:04 nldedip4k031 kernel: [87946.529631] Total swap = 3998716kB Oct 25 07:28:04 nldedip4k031 kernel: [87946.571914] 8437743 pages RAM Oct 25 07:28:04 nldedip4k031 kernel: [87946.571916] 8209409 pages HighMem Oct 25 07:28:04 nldedip4k031 kernel: [87946.571917] 159556 pages reserved Oct 25 07:28:04 nldedip4k031 kernel: [87946.571917] 6862034 pages shared Oct 25 07:28:04 nldedip4k031 kernel: [87946.571918] 123540 pages non-shared Oct 25 07:28:04 nldedip4k031 kernel: [87946.571919] [ pid ] uid tgid total_vm rss cpu oom_adj oom_score_adj name Oct 25 07:28:04 nldedip4k031 kernel: [87946.571927] [ 421] 0 421 709 152 3 0 0 upstart-udev-br Oct 25 07:28:04 nldedip4k031 kernel: [87946.571929] [ 429] 0 429 773 326 5 -17 -1000 udevd Oct 25 07:28:04 nldedip4k031 kernel: [87946.571931] [ 567] 0 567 772 224 4 -17 -1000 udevd Oct 25 07:28:04 nldedip4k031 kernel: [87946.571932] [ 568] 0 568 772 231 7 -17 -1000 udevd Oct 25 07:28:04 nldedip4k031 kernel: [87946.571934] [ 764] 0 764 712 103 1 0 0 upstart-socket- Oct 25 07:28:04 nldedip4k031 kernel: [87946.571936] [ 772] 103 772 815 164 5 0 0 dbus-daemon Oct 25 07:28:04 nldedip4k031 kernel: [87946.571938] [ 785] 0 785 1671 600 1 -17 -1000 sshd Oct 25 07:28:04 nldedip4k031 kernel: [87946.571940] [ 809] 101 809 7766 380 1 0 0 rsyslogd Oct 25 07:28:04 nldedip4k031 kernel: [87946.571942] [ 869] 0 869 1158 213 3 0 0 getty Oct 25 07:28:04 nldedip4k031 kernel: [87946.571943] [ 873] 0 873 1158 214 6 0 0 getty Oct 25 07:28:04 nldedip4k031 kernel: [87946.571945] [ 911] 0 911 1158 215 3 0 0 getty Oct 25 07:28:04 nldedip4k031 kernel: [87946.571947] [ 912] 0 912 1158 214 2 0 0 getty Oct 25 07:28:04 nldedip4k031 kernel: [87946.571949] [ 914] 0 914 1158 213 1 0 0 getty Oct 25 07:28:04 nldedip4k031 kernel: [87946.571950] [ 916] 0 916 618 86 1 0 0 atd Oct 25 07:28:04 nldedip4k031 kernel: [87946.571952] [ 917] 0 917 655 226 3 0 0 cron Oct 25 07:28:04 nldedip4k031 kernel: [87946.571954] [ 948] 0 948 902 159 3 0 0 irqbalance Oct 25 07:28:04 nldedip4k031 kernel: [87946.571956] [ 993] 0 993 1145 363 3 0 0 master Oct 25 07:28:04 nldedip4k031 kernel: [87946.571957] [ 1002] 104 1002 1162 333 1 0 0 qmgr Oct 25 07:28:04 nldedip4k031 kernel: [87946.571959] [ 1016] 0 1016 730 149 2 0 0 mdadm Oct 25 07:28:04 nldedip4k031 kernel: [87946.571961] [ 1057] 0 1057 6066 2160 3 0 0 /usr/sbin/apach Oct 25 07:28:04 nldedip4k031 kernel: [87946.571963] [ 1086] 0 1086 1158 213 3 0 0 getty Oct 25 07:28:04 nldedip4k031 kernel: [87946.571965] [ 1088] 33 1088 6191 1517 0 0 0 /usr/sbin/apach Oct 25 07:28:04 nldedip4k031 kernel: [87946.571967] [ 1089] 33 1089 6191 1451 1 0 0 /usr/sbin/apach Oct 25 07:28:04 nldedip4k031 kernel: 
[87946.571969] [ 1090] 33 1090 6175 1451 3 0 0 /usr/sbin/apach Oct 25 07:28:04 nldedip4k031 kernel: [87946.571971] [ 1091] 33 1091 6191 1451 1 0 0 /usr/sbin/apach Oct 25 07:28:04 nldedip4k031 kernel: [87946.571972] [ 1092] 33 1092 6191 1451 0 0 0 /usr/sbin/apach Oct 25 07:28:04 nldedip4k031 kernel: [87946.571974] [ 1109] 33 1109 6191 1517 0 0 0 /usr/sbin/apach Oct 25 07:28:04 nldedip4k031 kernel: [87946.571976] [ 1151] 33 1151 6191 1451 1 0 0 /usr/sbin/apach Oct 25 07:28:04 nldedip4k031 kernel: [87946.571978] [ 1201] 104 1201 1803 652 1 0 0 tlsmgr Oct 25 07:28:04 nldedip4k031 kernel: [87946.571980] [ 2475] 0 2475 2435 812 0 0 0 sshd Oct 25 07:28:04 nldedip4k031 kernel: [87946.571982] [ 2494] 0 2494 1745 839 1 0 0 bash Oct 25 07:28:04 nldedip4k031 kernel: [87946.571984] [ 2573] 0 2573 3394 1689 0 0 0 sshd Oct 25 07:28:04 nldedip4k031 kernel: [87946.571986] [ 2589] 0 2589 5014 457 3 0 0 rsync Oct 25 07:28:04 nldedip4k031 kernel: [87946.571988] [ 2590] 0 2590 7970 522 1 0 0 rsync Oct 25 07:28:04 nldedip4k031 kernel: [87946.571990] [ 2652] 104 2652 1150 326 5 0 0 pickup Oct 25 07:28:04 nldedip4k031 kernel: [87946.571992] Out of memory: Kill process 421 (upstart-udev-br) score 1 or sacrifice child Oct 25 07:28:04 nldedip4k031 kernel: [87946.572407] Killed process 421 (upstart-udev-br) total-vm:2836kB, anon-rss:156kB, file-rss:452kB Oct 25 07:28:04 nldedip4k031 kernel: [87946.573107] init: upstart-udev-bridge main process (421) killed by KILL signal Oct 25 07:28:04 nldedip4k031 kernel: [87946.573126] init: upstart-udev-bridge main process ended, respawning Oct 25 07:28:34 nldedip4k031 kernel: [87976.461570] irqbalance invoked oom-killer: gfp_mask=0x80d0, order=0, oom_adj=0, oom_score_adj=0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461573] irqbalance cpuset=/ mems_allowed=0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461576] Pid: 948, comm: irqbalance Not tainted 3.2.0-55-generic-pae #85-Ubuntu Oct 25 07:28:34 nldedip4k031 kernel: [87976.461578] Call Trace: Oct 25 07:28:34 nldedip4k031 kernel: [87976.461585] [] dump_header.isra.6+0x85/0xc0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461588] [] oom_kill_process+0x5c/0x80 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461591] [] out_of_memory+0xc5/0x1c0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461595] [] __alloc_pages_nodemask+0x72c/0x740 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461599] [] __get_free_pages+0x1c/0x30 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461602] [] get_zeroed_page+0x12/0x20 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461606] [] fill_read_buffer.isra.8+0xaa/0xd0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461609] [] sysfs_read_file+0x7d/0x90 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461613] [] vfs_read+0x8c/0x160 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461616] [] ? 
fill_read_buffer.isra.8+0xd0/0xd0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461619] [] sys_read+0x3d/0x70 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461624] [] sysenter_do_call+0x12/0x28 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461626] Mem-Info: Oct 25 07:28:34 nldedip4k031 kernel: [87976.461628] DMA per-cpu: Oct 25 07:28:34 nldedip4k031 kernel: [87976.461629] CPU 0: hi: 0, btch: 1 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461631] CPU 1: hi: 0, btch: 1 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461633] CPU 2: hi: 0, btch: 1 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461634] CPU 3: hi: 0, btch: 1 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461636] CPU 4: hi: 0, btch: 1 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461638] CPU 5: hi: 0, btch: 1 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461639] CPU 6: hi: 0, btch: 1 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461641] CPU 7: hi: 0, btch: 1 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461642] Normal per-cpu: Oct 25 07:28:34 nldedip4k031 kernel: [87976.461644] CPU 0: hi: 186, btch: 31 usd: 61 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461646] CPU 1: hi: 186, btch: 31 usd: 49 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461647] CPU 2: hi: 186, btch: 31 usd: 8 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461649] CPU 3: hi: 186, btch: 31 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461651] CPU 4: hi: 186, btch: 31 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461652] CPU 5: hi: 186, btch: 31 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461654] CPU 6: hi: 186, btch: 31 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461656] CPU 7: hi: 186, btch: 31 usd: 30 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461657] HighMem per-cpu: Oct 25 07:28:34 nldedip4k031 kernel: [87976.461658] CPU 0: hi: 186, btch: 31 usd: 4 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461660] CPU 1: hi: 186, btch: 31 usd: 204 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461662] CPU 2: hi: 186, btch: 31 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461663] CPU 3: hi: 186, btch: 31 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461665] CPU 4: hi: 186, btch: 31 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461667] CPU 5: hi: 186, btch: 31 usd: 31 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461668] CPU 6: hi: 186, btch: 31 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461670] CPU 7: hi: 186, btch: 31 usd: 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461674] active_anon:5441 inactive_anon:412 isolated_anon:0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461674] active_file:2668 inactive_file:6922842 isolated_file:0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461675] unevictable:0 dirty:836 writeback:0 unstable:0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461676] free:1231664 slab_reclaimable:105781 slab_unreclaimable:3399 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461677] mapped:2649 shmem:138 pagetables:313 bounce:0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461682] DMA free:4248kB min:780kB low:972kB high:1168kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:4kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15756kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:11560kB slab_unreclaimable:4kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:5687 all_unreclaimable? 
yes Oct 25 07:28:34 nldedip4k031 kernel: [87976.461686] lowmem_reserve[]: 0 869 32460 32460 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461693] Normal free:44184kB min:44216kB low:55268kB high:66324kB active_anon:0kB inactive_anon:0kB active_file:20kB inactive_file:1096kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:890008kB mlocked:0kB dirty:4kB writeback:0kB mapped:4kB shmem:0kB slab_reclaimable:411564kB slab_unreclaimable:13592kB kernel_stack:992kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:1816 all_unreclaimable? yes Oct 25 07:28:34 nldedip4k031 kernel: [87976.461697] lowmem_reserve[]: 0 0 252733 252733 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461703] HighMem free:4878224kB min:512kB low:402312kB high:804112kB active_anon:21764kB inactive_anon:1648kB active_file:10652kB inactive_file:27690268kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:32349872kB mlocked:0kB dirty:3340kB writeback:0kB mapped:10592kB shmem:552kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:1252kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no Oct 25 07:28:34 nldedip4k031 kernel: [87976.461708] lowmem_reserve[]: 0 0 0 0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461711] DMA: 8*4kB 7*8kB 6*16kB 5*32kB 5*64kB 4*128kB 2*256kB 1*512kB 0*1024kB 1*2048kB 0*4096kB = 4248kB Oct 25 07:28:34 nldedip4k031 kernel: [87976.461719] Normal: 272*4kB 178*8kB 76*16kB 52*32kB 42*64kB 36*128kB 23*256kB 20*512kB 7*1024kB 2*2048kB 1*4096kB = 44176kB Oct 25 07:28:34 nldedip4k031 kernel: [87976.461727] HighMem: 1*4kB 45*8kB 31*16kB 24*32kB 5*64kB 3*128kB 1*256kB 2*512kB 4*1024kB 2*2048kB 1188*4096kB = 4877852kB Oct 25 07:28:34 nldedip4k031 kernel: [87976.461736] 6925679 total pagecache pages Oct 25 07:28:34 nldedip4k031 kernel: [87976.461737] 0 pages in swap cache Oct 25 07:28:34 nldedip4k031 kernel: [87976.461739] Swap cache stats: add 0, delete 0, find 0/0 Oct 25 07:28:34 nldedip4k031 kernel: [87976.461740] Free swap = 3998716kB Oct 25 07:28:34 nldedip4k031 kernel: [87976.461741] Total swap = 3998716kB Oct 25 07:28:34 nldedip4k031 kernel: [87976.524951] 8437743 pages RAM Oct 25 07:28:34 nldedip4k031 kernel: [87976.524953] 8209409 pages HighMem Oct 25 07:28:34 nldedip4k031 kernel: [87976.524954] 159556 pages reserved Oct 25 07:28:34 nldedip4k031 kernel: [87976.524955] 6936141 pages shared Oct 25 07:28:34 nldedip4k031 kernel: [87976.524956] 124602 pages non-shared Oct 25 07:28:34 nldedip4k031 kernel: [87976.524957] [ pid ] uid tgid total_vm rss cpu oom_adj oom_score_adj name Oct 25 07:28:34 nldedip4k031 kernel: [87976.524966] [ 429] 0 429 773 326 5 -17 -1000 udevd Oct 25 07:28:34 nldedip4k031 kernel: [87976.524968] [ 567] 0 567 772 224 4 -17 -1000 udevd Oct 25 07:28:34 nldedip4k031 kernel: [87976.524971] [ 568] 0 568 772 231 7 -17 -1000 udevd Oct 25 07:28:34 nldedip4k031 kernel: [87976.524973] [ 764] 0 764 712 103 3 0 0 upstart-socket- Oct 25 07:28:34 nldedip4k031 kernel: [87976.524976] [ 772] 103 772 815 164 2 0 0 dbus-daemon Oct 25 07:28:34 nldedip4k031 kernel: [87976.524979] [ 785] 0 785 1671 600 1 -17 -1000 sshd Oct 25 07:28:34 nldedip4k031 kernel: [87976.524981] [ 809] 101 809 7766 380 1 0 0 rsyslogd Oct 25 07:28:34 nldedip4k031 kernel: [87976.524983] [ 869] 0 869 1158 213 3 0 0 getty Oct 25 07:28:34 nldedip4k031 kernel: [87976.524986] [ 873] 0 873 1158 214 6 0 0 getty Oct 25 07:28:34 nldedip4k031 kernel: [87976.524988] [ 911] 0 911 1158 215 3 0 0 getty Oct 25 07:28:34 nldedip4k031 kernel: [87976.524990] [ 912] 0 
912 1158 214 2 0 0 getty Oct 25 07:28:34 nldedip4k031 kernel: [87976.524992] [ 914] 0 914 1158 213 1 0 0 getty Oct 25 07:28:34 nldedip4k031 kernel: [87976.524995] [ 916] 0 916 618 86 1 0 0 atd Oct 25 07:28:34 nldedip4k031 kernel: [87976.524997] [ 917] 0 917 655 226 3 0 0 cron Oct 25 07:28:34 nldedip4k031 kernel: [87976.524999] [ 948] 0 948 902 159 5 0 0 irqbalance Oct 25 07:28:34 nldedip4k031 kernel: [87976.525002] [ 993] 0 993 1145 363 3 0 0 master Oct 25 07:28:34 nldedip4k031 kernel: [87976.525004] [ 1002] 104 1002 1162 333 1 0 0 qmgr Oct 25 07:28:34 nldedip4k031 kernel: [87976.525007] [ 1016] 0 1016 730 149 2 0 0 mdadm Oct 25 07:28:34 nldedip4k031 kernel: [87976.525009] [ 1057] 0 1057 6066 2160 3 0 0 /usr/sbin/apach Oct 25 07:28:34 nldedip4k031 kernel: [87976.525012] [ 1086] 0 1086 1158 213 3 0 0 getty Oct 25 07:28:34 nldedip4k031 kernel: [87976.525014] [ 1088] 33 1088 6191 1517 0 0 0 /usr/sbin/apach Oct 25 07:28:34 nldedip4k031 kernel: [87976.525017] [ 1089] 33 1089 6191 1451 1 0 0 /usr/sbin/apach Oct 25 07:28:34 nldedip4k031 kernel: [87976.525019] [ 1090] 33 1090 6175 1451 1 0 0 /usr/sbin/apach Oct 25 07:28:34 nldedip4k031 kernel: [87976.525021] [ 1091] 33 1091 6191 1451 1 0 0 /usr/sbin/apach Oct 25 07:28:34 nldedip4k031 kernel: [87976.525024] [ 1092] 33 1092 6191 1451 0 0 0 /usr/sbin/apach Oct 25 07:28:34 nldedip4k031 kernel: [87976.525026] [ 1109] 33 1109 6191 1517 0 0 0 /usr/sbin/apach Oct 25 07:28:34 nldedip4k031 kernel: [87976.525029] [ 1151] 33 1151 6191 1451 1 0 0 /usr/sbin/apach Oct 25 07:28:34 nldedip4k031 kernel: [87976.525031] [ 1201] 104 1201 1803 652 1 0 0 tlsmgr Oct 25 07:28:34 nldedip4k031 kernel: [87976.525033] [ 2475] 0 2475 2435 812 0 0 0 sshd Oct 25 07:28:34 nldedip4k031 kernel: [87976.525036] [ 2494] 0 2494 1745 839 1 0 0 bash Oct 25 07:28:34 nldedip4k031 kernel: [87976.525038] [ 2573] 0 2573 3394 1689 3 0 0 sshd Oct 25 07:28:34 nldedip4k031 kernel: [87976.525040] [ 2589] 0 2589 5014 457 3 0 0 rsync Oct 25 07:28:34 nldedip4k031 kernel: [87976.525043] [ 2590] 0 2590 7970 522 1 0 0 rsync Oct 25 07:28:34 nldedip4k031 kernel: [87976.525045] [ 2652] 104 2652 1150 326 5 0 0 pickup Oct 25 07:28:34 nldedip4k031 kernel: [87976.525048] [ 2847] 0 2847 709 89 0 0 0 upstart-udev-br Oct 25 07:28:34 nldedip4k031 kernel: [87976.525050] Out of memory: Kill process 764 (upstart-socket-) score 1 or sacrifice child Oct 25 07:28:34 nldedip4k031 kernel: [87976.525484] Killed process 764 (upstart-socket-) total-vm:2848kB, anon-rss:204kB, file-rss:208kB Oct 25 07:28:34 nldedip4k031 kernel: [87976.526161] init: upstart-socket-bridge main process (764) killed by KILL signal Oct 25 07:28:34 nldedip4k031 kernel: [87976.526180] init: upstart-socket-bridge main process ended, respawning Oct 25 07:28:44 nldedip4k031 kernel: [87986.439671] irqbalance invoked oom-killer: gfp_mask=0x80d0, order=0, oom_adj=0, oom_score_adj=0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439674] irqbalance cpuset=/ mems_allowed=0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439676] Pid: 948, comm: irqbalance Not tainted 3.2.0-55-generic-pae #85-Ubuntu Oct 25 07:28:44 nldedip4k031 kernel: [87986.439678] Call Trace: Oct 25 07:28:44 nldedip4k031 kernel: [87986.439684] [] dump_header.isra.6+0x85/0xc0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439686] [] oom_kill_process+0x5c/0x80 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439688] [] out_of_memory+0xc5/0x1c0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439691] [] __alloc_pages_nodemask+0x72c/0x740 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439694] [] 
__get_free_pages+0x1c/0x30 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439696] [] get_zeroed_page+0x12/0x20 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439699] [] fill_read_buffer.isra.8+0xaa/0xd0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439702] [] sysfs_read_file+0x7d/0x90 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439704] [] vfs_read+0x8c/0x160 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439707] [] ? fill_read_buffer.isra.8+0xd0/0xd0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439709] [] sys_read+0x3d/0x70 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439712] [] sysenter_do_call+0x12/0x28 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439714] Mem-Info: Oct 25 07:28:44 nldedip4k031 kernel: [87986.439714] DMA per-cpu: Oct 25 07:28:44 nldedip4k031 kernel: [87986.439716] CPU 0: hi: 0, btch: 1 usd: 0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439717] CPU 1: hi: 0, btch: 1 usd: 0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439718] CPU 2: hi: 0, btch: 1 usd: 0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439719] CPU 3: hi: 0, btch: 1 usd: 0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439720] CPU 4: hi: 0, btch: 1 usd: 0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439721] CPU 5: hi: 0, btch: 1 usd: 0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439722] CPU 6: hi: 0, btch: 1 usd: 0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439723] CPU 7: hi: 0, btch: 1 usd: 0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439724] Normal per-cpu: Oct 25 07:28:44 nldedip4k031 kernel: [87986.439725] CPU 0: hi: 186, btch: 31 usd: 0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439726] CPU 1: hi: 186, btch: 31 usd: 0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439727] CPU 2: hi: 186, btch: 31 usd: 0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439728] CPU 3: hi: 186, btch: 31 usd: 0 Oct 25 07:28:44 nldedip4k031 kernel: [87986.439729] CPU 4: hi: 186, btch: 31 usd: 0 Oct 25 07:33:48 nldedip4k031 kernel: imklog 5.8.6, log source = /proc/kmsg started. Oct 25 07:33:48 nldedip4k031 rsyslogd: [origin software="rsyslogd" swVersion="5.8.6" x-pid="2880" x-info="http://www.rsyslog.com"] start Oct 25 07:33:48 nldedip4k031 rsyslogd: rsyslogd's groupid changed to 103 Oct 25 07:33:48 nldedip4k031 rsyslogd: rsyslogd's userid changed to 101 Oct 25 07:33:48 nldedip4k031 rsyslogd-2039: Could not open output pipe '/dev/xconsole' [try http://www.rsyslog.com/e/2039 ]

    Read the article

  • Localization with ASP.NET MVC ModelMetadata

    - by kazimanzurrashid
    When using DisplayFor/EditorFor, ASP.NET MVC has built-in support for showing localized validation messages, but no support for showing the associated label in localized text, unless you are using .NET 4.0 with MVC Futures. Let's say you are creating a create form for Product, where you support both English and German, like the following. English German I have recently added a few helpers for localization in MvcExtensions; let's see how we can use them to localize the form. As I have mentioned in the past, I am not a big fan of decorating classes with attributes, which is the recommended way in ASP.NET MVC. Instead, we will use the fluent configuration (similar to FluentNHibernate or EF CodeFirst) of MvcExtensions to configure our view models. For example, for the above we will be using:

public class ProductEditModelConfiguration : ModelMetadataConfiguration<ProductEditModel>
{
    public ProductEditModelConfiguration()
    {
        Configure(model => model.Id).Hide();
        Configure(model => model.Name).DisplayName(() => LocalizedTexts.Name)
            .Required(() => LocalizedTexts.NameCannotBeBlank)
            .MaximumLength(64, () => LocalizedTexts.NameCannotBeMoreThanSixtyFourCharacters);
        Configure(model => model.Category).DisplayName(() => LocalizedTexts.Category)
            .Required(() => LocalizedTexts.CategoryMustBeSelected)
            .AsDropDownList("categories", () => LocalizedTexts.SelectCategory);
        Configure(model => model.Supplier).DisplayName(() => LocalizedTexts.Supplier)
            .Required(() => LocalizedTexts.SupplierMustBeSelected)
            .AsListBox("suppliers");
        Configure(model => model.Price).DisplayName(() => LocalizedTexts.Price)
            .FormatAsCurrency()
            .Required(() => LocalizedTexts.PriceCannotBeBlank)
            .Range(10.00m, 1000.00m, () => LocalizedTexts.PriceMustBeBetweenTenToThousand);
    }
}

    As you can see, we are using Func<string> to set the localized text; this is just an overload alongside the regular string-based method. There are a few more methods in the ModelMetadata which accept this Func<string> where localization can be applied, like Description, Watermark, ShortDisplayName etc. The LocalizedTexts class is just a regular resource; we have both English and German: Now let's see the view markup:

<%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<Demo.Web.ProductEditModel>" %>
<asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
    <%= LocalizedTexts.Create %>
</asp:Content>
<asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
    <h2><%= LocalizedTexts.Create %></h2>
    <%= Html.ValidationSummary(false, LocalizedTexts.CreateValidationSummary)%>
    <% Html.EnableClientValidation(); %>
    <% using (Html.BeginForm()) {%>
    <fieldset>
        <%= Html.EditorForModel() %>
        <p>
            <input type="submit" value="<%= LocalizedTexts.Create %>" />
        </p>
    </fieldset>
    <% } %>
    <div>
        <%= Html.ActionLink(LocalizedTexts.BackToList, "Index")%>
    </div>
</asp:Content>

    As we can see, we are using the same LocalizedTexts for the other parts of the view that are not included in the ModelMetadata, like the page title, button text etc. We are also using EditorForModel instead of EditorFor for individual fields; both are supported. One of the added benefits of the fluent-syntax-based configuration is that we get full compile-time checking for our resources, since we are not depending on string-based resource names as in stock ASP.NET MVC. You will find the complete localized CRUD example in the MvcExtensions sample folder. That’s it for today.

    Read the article

  • Markus Zirn, "Big Data with CEP and SOA" @ SOA, Cloud & Service Technology Symposium 2012

    - by JuergenKress
    ORACLE PROMOTIONAL DISCOUNT: FOR AN EXCLUSIVE ORACLE DISCOUNT, ENTER PROMO CODE: DJMXZ370. Early-Bird Registration is Now Open with Special Pricing! Register before July 1, 2012 to qualify for discounts. Visit the Registration page for details. The International SOA, Cloud + Service Technology Symposium is a yearly event that features the top experts and authors from around the world, providing a series of keynotes, talks, demonstrations, and panels, as well as training and certification workshops - all dedicated to empowering IT professionals to realize modern service technologies and practices in the real world. Click here for a two-page printable conference overview (PDF). Big Data with CEP and SOA - September 25, 2012 - 14:15 Speakers: Markus Zirn, Oracle, and Baz Kuthi, Avocent The "Big Data" trend is driving new kinds of IT projects that process machine-generated data. Such projects store and mine data using Hadoop/MapReduce, but they also analyze streaming data via event-driven patterns, which can be called "Fast Data", complementary to "Big Data". This session highlights how "Big Data" and "Fast Data" design patterns can be combined with SOA design principles into modern, event-driven architectures. We will describe specific architectures that combine CEP, distributed caching, event-driven networks, SOA composites, the Application Development Framework, as well as Hadoop. Architecture patterns include pre-processing and filtering event streams as close as possible to the event source, in-memory master data for event pattern matching, event-driven user interfaces, as well as distributed event processing. The focus is on how "Fast Data" requirements are elegantly integrated into a traditional SOA architecture. Markus Zirn is Vice President of Product Management covering Oracle SOA Suite, SOA Governance, Application Integration Architecture, BPM, BPM Solutions, Complex Event Processing and UPK, an end-user learning solution. He is the author of “The BPEL Cookbook” (rated best book on Service-Oriented Architecture in 2007) as well as “Fusion Middleware Patterns”. Previously, he was a management consultant with Booz Allen & Hamilton’s High Tech practice in Duesseldorf as well as San Francisco, and Vice President of Product Marketing at QUIQ. Mr. Zirn holds a Masters of Electrical Engineering from the University of Karlsruhe and is an alumnus of the Tripartite program, a joint European degree from the University of Karlsruhe, Germany, the University of Southampton, UK, and ESIEE, France. KEYNOTES & SPEAKERS More than 80 international subject matter experts will be speaking at the Symposium. Below are the keynotes and speakers confirmed so far; over 50% of the agenda has not yet been finalized, and many more speakers are to come. View the partial program calendars on the Conference Agenda page. CONFERENCE THEMES & TRACKS Cloud Computing Architecture & Patterns; New SOA & Service-Orientation Practices & Models; Emerging Service Technology Innovation; Service Modeling & Analysis Techniques; Service Infrastructure & Virtualization; Cloud-based Enterprise Architecture; Business Planning for Cloud Computing Projects; Real World Case Studies; Semantic Web Technologies (with & without the Cloud); Governance Frameworks for SOA and/or Cloud Computing Projects; Service Engineering & Service Programming Techniques; Interactive Services & the Human Factor; New REST & Web Services Tools & Techniques. Oracle Specialized SOA & BPM Partners Oracle Specialized partners have proven their skills through certifications and customer references.
To find a local Specialized partner please visit http://solutions.oracle.com. SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; to register please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • ODI 11g – Insight to the SDK

    - by David Allan
    This post is a useful index into the ODI SDK that cross-references the type names from the user interface with the SDK class, and also the finder used to get a handle on the object or objects. The volume of content in the SDK might seem a little ominous; there is a lot there, but there is a general pattern to the SDK that I will describe here. I will also illustrate some basic CRUD operations so you can see how the SDK usage pattern works. The examples are written in Groovy; you can simply run them from the Groovy console in ODI 11.1.1.6.

    Entry to the Platform (object - finder - SDK class):
    odiInstance - odiInstance (groovy variable for console) - OdiInstance

    Topology Objects:
    Technology - IOdiTechnologyFinder - OdiTechnology
    Context - IOdiContextFinder - OdiContext
    Logical Schema - IOdiLogicalSchemaFinder - OdiLogicalSchema
    Data Server - IOdiDataServerFinder - OdiDataServer
    Physical Schema - IOdiPhysicalSchemaFinder - OdiPhysicalSchema
    Logical Schema to Physical Mapping - IOdiContextualSchemaMappingFinder - OdiContextualSchemaMapping
    Logical Agent - IOdiLogicalAgentFinder - OdiLogicalAgent
    Physical Agent - IOdiPhysicalAgentFinder - OdiPhysicalAgent
    Logical Agent to Physical Mapping - IOdiContextualAgentMappingFinder - OdiContextualAgentMapping
    Master Repository - IOdiMasterRepositoryInfoFinder - OdiMasterRepositoryInfo
    Work Repository - IOdiWorkRepositoryInfoFinder - OdiWorkRepositoryInfo

    Project Objects:
    Project - IOdiProjectFinder - OdiProject
    Folder - IOdiFolderFinder - OdiFolder
    Interface - IOdiInterfaceFinder - OdiInterface
    Package - IOdiPackageFinder - OdiPackage
    Procedure - IOdiUserProcedureFinder - OdiUserProcedure
    User Function - IOdiUserFunctionFinder - OdiUserFunction
    Variable - IOdiVariableFinder - OdiVariable
    Sequence - IOdiSequenceFinder - OdiSequence
    KM - IOdiKMFinder - OdiKM

    Load Plans and Scenarios:
    Load Plan - IOdiLoadPlanFinder - OdiLoadPlan
    Load Plan and Scenario Folder - IOdiScenarioFolderFinder - OdiScenarioFolder

    Model Objects:
    Model - IOdiModelFinder - OdiModel
    Sub Model - IOdiSubModel - OdiSubModel
    DataStore - IOdiDataStoreFinder - OdiDataStore
    Column - IOdiColumnFinder - OdiColumn
    Key - IOdiKeyFinder - OdiKey
    Condition - IOdiConditionFinder - OdiCondition

    Operator Objects:
    Session Folder - IOdiSessionFolderFinder - OdiSessionFolder
    Session - IOdiSessionFinder - OdiSession
    Schedule - OdiSchedule

    How to Create an Object? Here is a simple example that creates a project; it uses IOdiEntityManager.persist to persist the object.

import oracle.odi.domain.project.OdiProject;
import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition;

txnDef = new DefaultTransactionDefinition();
tm = odiInstance.getTransactionManager()
txnStatus = tm.getTransaction(txnDef)
project = new OdiProject("Project For Demo", "PROJECT_DEMO")
odiInstance.getTransactionalEntityManager().persist(project)
tm.commit(txnStatus)

    How to Update an Object? This update example uses the methods on the OdiProject object to change the name of the project created above; it is then persisted.

import oracle.odi.domain.project.OdiProject;
import oracle.odi.domain.project.finder.IOdiProjectFinder;
import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition;

txnDef = new DefaultTransactionDefinition();
tm = odiInstance.getTransactionManager()
txnStatus = tm.getTransaction(txnDef)
prjFinder = (IOdiProjectFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiProject.class);
project = prjFinder.findByCode("PROJECT_DEMO");
project.setName("A Demo Project");
odiInstance.getTransactionalEntityManager().persist(project)
tm.commit(txnStatus)

    How to Delete an Object? Here is a simple example that deletes all of the sessions; it uses IOdiEntityManager.remove to delete each object.

import oracle.odi.domain.runtime.session.finder.IOdiSessionFinder;
import oracle.odi.domain.runtime.session.OdiSession;
import oracle.odi.core.persistence.transaction.support.DefaultTransactionDefinition;

txnDef = new DefaultTransactionDefinition();
tm = odiInstance.getTransactionManager()
txnStatus = tm.getTransaction(txnDef)
sessFinder = (IOdiSessionFinder)odiInstance.getTransactionalEntityManager().getFinder(OdiSession.class);
sessc = sessFinder.findAll();
sessItr = sessc.iterator()
while (sessItr.hasNext()) {
  sess = (OdiSession) sessItr.next()
  odiInstance.getTransactionalEntityManager().remove(sess)
}
tm.commit(txnStatus)

    This isn't an all-encompassing summary of the SDK, but it covers a lot of the content to give you a good handle on the objects and how they work. A minimal read example follows below to round out the CRUD pattern. For details of how specific complex objects are created via the SDK, it's best to look at postings such as the interface builder posting here. Have fun, happy coding!
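    The read sketch below is my own minimal example, assuming the same Groovy console context (the odiInstance variable) as the snippets above; it reuses the finder calls already shown (getFinder, findByCode, findAll), and assumes the standard getName()/getCode() accessors that pair with the setters used earlier:

import oracle.odi.domain.project.OdiProject;
import oracle.odi.domain.project.finder.IOdiProjectFinder;

// fetch the finder for the entity type, then read one object by its code
prjFinder = (IOdiProjectFinder) odiInstance.getTransactionalEntityManager().getFinder(OdiProject.class)
project = prjFinder.findByCode("PROJECT_DEMO")
println project.getName()

// finders also support bulk retrieval; list the code of every project
prjFinder.findAll().each { p -> println p.getCode() }

    Since a read does not modify the repository there is nothing to commit, although you can wrap the lookups in a transaction exactly as in the create, update and delete examples.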

    Read the article

  • How to use crontab, .netrc, and git push?

    - by Jon
    Hi all, I am in the process of automating the backups from various servers to a central point, then pushing those config changes into a git repo so I can track any changes over time. The rest of the scripts are working well; I can copy / rsync the files across the network to a central point. The last script puts the config files into / updates them in the repository. The script is as follows:

#!/bin/bash
clear
SERVERNAME="betty"
SCRIPTDIR="/home/jon"
GITROOT="/tmp/git"
TEMPROOT="/tmp/backups"
BACKUPROOTDIR="/mnt/backups"

echo " - running as user: $UID"
echo "backingup git config on $SERVERNAME"
echo ""

# check to see if root backup folder exists, otherwise create it.
if [ -d $GITROOT ]; then
  rm -rf $GITROOT
fi
mkdir $GITROOT
cd $GITROOT

echo " - testing if home is where I think it should be!"
echo $HOME
echo " - testing if it can see netrc"
tail $HOME/.netrc

git clone http://192.168.10.97:8000/repositories/HOH-config-backups.git
cd HOH-config-backups

echo " - copy Configuration Folders across"
cp -r $BACKUPROOTDIR/Configuration/* $GITROOT/HOH-config-backups/
cp -r $BACKUPROOTDIR/scripts $GITROOT/HOH-config-backups/

git add .
git commit -a -m "committing any new configuration changes!"
git push origin master

echo ""
echo "Git repo updated"
echo ""
echo " - backing up this script"

FIREWIGSCRIPTLOC="$BACKUPROOTDIR/scripts/$SERVERNAME"
if [ ! -d $FIREWIGSCRIPTLOC ]; then
  mkdir $FIREWIGSCRIPTLOC
fi
cp /home/jon/gitConfig.sh $FIREWIGSCRIPTLOC

    The git repo is on a different machine in the network using Apache and HTTP-backend.exe (the smart HTTP protocol). If I run this script as me, "jon", it works. If I run it from crontab, it fails. git uses the /home/jon/.netrc file for authentication:

machine 192.168.10.97
login gitconfig
password 1234579

    The log from crontab is: TERM environment variable not set. - running as user: 1000 backingup git config on betty - testing if home is where I think it should be!
/home/jon - testing if it can see netrc machine 192.168.10.97 login gitconfig password 1234579 got 08de5bc2b27b4940d9412256e76d5e3c3d9dbcdd walk 08de5bc2b27b4940d9412256e76d5e3c3d9dbcdd got be880f2d306778a538d592e7a02eb19f416612f7 got bd387e8def9f77aafa798bf53e80d949aba443e8 got 1bc1a59e12775841d4c59d77c63b8a73823138c2 walk bd387e8def9f77aafa798bf53e80d949aba443e8 Getting alternates list for http://192.168.10.97:8000/repositories/HOH-config-backups.git got 030512237bca72faf211e0e8ec2906164eac34f6 got 9bc2f575240bc1f61ff7d69777ce1a165d06b184 got b8400f7f01429104a9d4786a6bb1a16d293e37c1 got 2403b5bf611010e0b401f776f0e23b09ce744838 got 1a27944c48269ef3608a8f2466e43402d06faac0 got b686f45b7d57af4fa8ca0d528bb85216d6247e19 Getting pack list for http://192.168.10.97:8000/repositories/HOH-config-backups.git Getting index for pack ae881957c0f0e8c22eb6cc889a22ef78eb4ce6ff Getting pack ae881957c0f0e8c22eb6cc889a22ef78eb4ce6ff which contains ff84d6d48e9326066438d167a10251218d612b3d walk b686f45b7d57af4fa8ca0d528bb85216d6247e19 got 364e30daec17814073e668f490bb84af891fe1f7 got 23f6497e7f9b80e0d90adad73bd0407a0e5ac6ce got 9e77c47574b5e23ea669afe0c23ab235e4917ee1 got 6654e0d328a216b3783e98c47206cb2d01b3353d got 28821ffd437d2689ffb82c6e4b9c3f5372c95c4b got 8c384a24f645389e4d4b08013c79e9e73a658342 got d203be0123736ee025ce20c081f1489098648dfc got 1852603bf7709e71417d8ccec02390279d533642 got fb753a26b20b04694419fce8ecdaa8dbec105cf1 got 736028997cd84dd1c135f57e9d246674b9cd0b9d got 7af836249e20096d0476a548d5be702a071cdd4b got 240dc39d9db50df63073fc7927b2d002dfa0f54c got 93abd36e3935a01011eb753b635a1a0e984bf31e got c6269e28fecf4d8d0d98b9358aecb3acff02df44 got b0aa29432f73e64032682a351d436c24b14078ab walk 240dc39d9db50df63073fc7927b2d002dfa0f54c got 58fb66d9f35f8a5e32ff4683309c5f0c2a3a03c5 got 0da2def4de0565483cdbe6b87418ee2beb122e58 got 0f6a86c6f87ed52ad2ed01e5c6edd661d364930c got 437a93d27b5bb89c739a0564a34a616e832c3ebe got fe0385abe5c0acd8462268dac330bae00e934f1b got 24259f8f5c5c9ee974a75fe3d1e07c02e3e20fe9 got d29f624bf1a5eceedaa86c10fee35f62747c7d04 got 0154e4c987132585ea7a92b77d02dba285512d6b got eda8bf526567c25ee70addb2ad3c3c6aa57eac77 got 9f3d9d7262d66f9fa4f6a13b7c86199953f4bc4e got 8e20881e19667aa22245d0598646991067455a4d got abb1123145689b35eb19519952c71253ee45fa98 got dfeff593c79b4156ce2ce1adf043d0e80356488c got e20c5b48b1d360e0bcf34189e3f3d2bbf23e92cc got b13eb81cc274780322ecf786372320343926bec9 walk 8de83868b3fac748b0a55eba16c8f668ec852abb got b5961421bbc42afe7a07cc1c8b615aba26ba74d7 got 2650ba819019df4193b482733e29ca79b29f3f2c got b3111e1be8103e91803a97a817ed81f28025aca1 got b060be934d709684f5eb5dad3c03932a3589e864 got cf70d2043f081d7a4438e9d5a290a9f986c84060 got 80bf0f1cc836feab86d6935bb7968d8555a8d531 got da318d167920e34bc6573e4fc236249ccbbee316 got d82ac853d387b760149599e6e1ab96403f6ec672 got 0005f691d1f46550fdb4e56025f52e30a5b18cc2 Initialized empty Git repository in /tmp/git/HOH-config-backups/.git/ - copy Configuration Folders across Created commit 424df2f: committing any new configuration changes! 
3 files changed, 55 insertions(+), 1 deletions(-) create mode 100755 scripts/betty/gitConfig.sh error: Cannot access URL http://192.168.10.97:8000/repositories/HOH-config-backups.git/, return code 22 error: failed to push some refs to 'http://192.168.10.97:8000/repositories/HOH-config-backups.git' Git repo updated - backing up this script cp: cannot create regular file `/mnt/backups/scripts/betty/gitConfig.sh': Permission denied

    My crontab is:

# m h dom mon dow command
04 * * * * /home/jon/gitConfig.sh > /tmp/gitconfig.log 2>&1

    I open it by doing $crontab -e, i.e. not as root. I am a bit confused as to why it is not running as my user (or what user id 1000 is). Not sure what I need to do to get the push with git to work within crontab.

    edit: found out about the userid:

jon@betty:~$ id
uid=1000(jon) gid=1000(jon) groups=4(adm),20(dialout),24(cdrom),46(plugdev),109(sambashare),114(lpadmin),115(admin),1000(jon)

    Here is my $HOME/.gitconfig file:

[user]
name = Jon Hawkins
email = [email protected]

    Thanks

    Read the article

  • What's new in ASP.Net 4.5 and VS 2012 - part 2

    - by nikolaosk
    This is the second post in a series of posts titled "What's new in ASP.Net 4.5 and VS 2012". You can have a look at the first post in this series, here. Please find all my posts regarding VS 2012, here. In this post I will be looking into the various new features available in ASP.Net 4.5 and VS 2012. I will be looking into the enhancements in the HTML editor, CSS editor and JavaScript editor. In order to follow along with this post you must have Visual Studio 2012 and .Net Framework 4.5 installed on your machine. Download and install VS 2012 using this link. My machine runs on Windows 8 and Visual Studio 2012 works just fine. It will work fine on Windows 7 as well, so do not worry if you do not have the latest Microsoft operating system. 1) Launch VS 2012 and create a new Web Forms application by going to File - > New Web Site - > ASP.Net Web Forms Site. 2) Choose an appropriate name for your web site. 3) I would like to point out the new enhancements in the CSS editor in VS 2012. In the Solution Explorer, go to the Content folder and open Site.css. Then, when I try to change the background-color property of the html element, I get a brand new, handy color-picker. Have a look at the picture below. Please note that the color-picker shows all the colors that have been used in this website. You can expand the color-picker by clicking on the arrows. Opacity is also supported. Have a look at the picture below. 4) There are also mobile styles in Site.css. These are based on media queries. Please have a look at another post of mine on CSS3 media queries. Have a look at the picture below. In this case, when the maximum width of the screen is less than 850px, there will be a new layout dictated by these new rules. CSS snippets are also supported. Have a look at the picture below. I am writing a new CSS rule for an image element. I write the property transform and hit tab, and then I have cross-browser CSS handling all of the major vendors. Then I simply add the value rotate and it is applied to all the cross-browser options (a rough sketch of the expanded rule appears at the end of this post). Have a look at the picture below. I am sure you realise how productive you can become with all these CSS snippets. 5) Now let's have a look at the new HTML editor enhancements in VS 2012. You can drag and drop a GridView web server control from the Toolbox into the Site.master file. You will see a smart tag (previously only available in the Design View) that you can expand to add fields and format the web server control. Have a look at the picture below. 6) We also have code snippets available. I type <video and then press tab twice. By doing that, I have the rest of the HTML 5 markup completed. Have a look at the picture below. 7) There is new support for the input tag, including all the HTML 5 types and all the new accessibility features. Have a look at the picture below. 8) Another interesting feature is the new IntelliSense capabilities. When I change the DocType to 4.01 and then type the <audio>, <video> HTML 5 tags, IntelliSense does not recognise them and adds squiggly lines. Have a look at the picture below. All these features support ASP.Net Web Forms, ASP.Net MVC applications and Web Pages. 9) Finally, I would like to show you the enhanced support that we have for JavaScript in VS 2012. We have full IntelliSense support and code snippet support. I create a sample JavaScript file. I type if and press tab. I type while and press tab. I type for and press tab. In all three cases code snippet support kicks in and completes the code block.
    Have a look at the picture below. We also have full IntelliSense support. Have a look at the picture below. I am creating a simple function and then type some XML-like comments for the input parameters. Have a look at the picture below. Then, when I call this function, IntelliSense picks up the XML comments and shows the variables' data types. Have a look at the picture below. Hope it helps!!!
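    Here is roughly what that cross-browser transform snippet from step 4 expands to. Since the screenshots are not reproduced here, this is an illustrative sketch of my own, and the exact prefix list VS 2012 emits may differ:

img {
    -moz-transform: rotate(15deg);     /* Firefox */
    -ms-transform: rotate(15deg);      /* Internet Explorer */
    -o-transform: rotate(15deg);       /* Opera */
    -webkit-transform: rotate(15deg);  /* Chrome and Safari */
    transform: rotate(15deg);          /* the standard property */
}

    Typing a single property and value and having all the vendor-prefixed variants filled in is exactly where the productivity gain of these snippets comes from.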

    Read the article

  • Excel Template Teaser

    - by Tim Dexter
    In lieu of some official documentation, I'm in the process of putting together some posts on the new 10.1.3.4.1 Excel templates. No more HTML masquerading as Excel; far more flexibility than Excel Analyzer, and no need to write complex XSL templates to create the same output. Multi-sheet outputs with macros and embeddable XSL commands are here. Their capabilities are pretty extensive, and I have not worked on them for a few years since I helped put them together for EBS FSG users, so I'm back on the learning curve. Let me say up front: there is no template builder; it's a completely manual process to build them, but the results can be fantastic and provide yet another 'superstar' opportunity for you. The templates can take hierarchical XML data and walk the structure much like an RTF template. They use named cells/ranges and a hidden sheet to provide the rendering engine with the hooks to drop the data in. As a taster, here's the data and output I worked with on my first effort:

<EMPLOYEES>
  <LIST_G_DEPT>
    <G_DEPT>
      <DEPARTMENT_ID>10</DEPARTMENT_ID>
      <DEPARTMENT_NAME>Administration</DEPARTMENT_NAME>
      <LIST_G_EMP>
        <G_EMP>
          <EMPLOYEE_ID>200</EMPLOYEE_ID>
          <EMP_NAME>Jennifer Whalen</EMP_NAME>
          <EMAIL>JWHALEN</EMAIL>
          <PHONE_NUMBER>515.123.4444</PHONE_NUMBER>
          <HIRE_DATE>1987-09-17T00:00:00.000-06:00</HIRE_DATE>
          <SALARY>4400</SALARY>
        </G_EMP>
      </LIST_G_EMP>
      <TOTAL_EMPS>1</TOTAL_EMPS>
      <TOTAL_SALARY>4400</TOTAL_SALARY>
      <AVG_SALARY>4400</AVG_SALARY>
      <MAX_SALARY>4400</MAX_SALARY>
      <MIN_SALARY>4400</MIN_SALARY>
    </G_DEPT>
    ...
  </LIST_G_DEPT>
</EMPLOYEES>

    This is structured XML coming from a data template; check out the data template progression post. I can then generate the following binary XLS file. There are a few cool things to notice in this output:

    - DEPARTMENT-EMPLOYEE master-detail output. Not easy to do in the Excel Analyzer.
    - Date formatting - this is using an Excel function. Remember, BIP generates XML dates in the canonical format. I have formatted the other data in the template using native Excel functionality.
    - Salary total - although it is present in the data, I have calculated this in the template.
    - Conditional formatting - this is handled by Excel based on the incoming data.
    - Bursting department data across sheets and using the department name for the sheet name. This alone is worth the wait!

    There's more, but this is surely enough to whet your appetite. These new templates are already tucked away in EBS R12 under controlled release by the GL team and have now come to the BIEE and standalone releases in the 10.1.3.4.1+ rollup patch. For the rest of you, it's going to be a bit of a waiting game for the relevant teams to uptake the latest BIP release. Look out for more soon, with some explanation of how they work and how to put them together!

    Read the article
