Search Results

Search found 4701 results on 189 pages for 'soul master'.


  • Please explain these Mongo statistics

    - by sivann
    My setup: I have 2 hosts with 2 shards each. Host1 holds the primaries of both replica sets and host2 holds the secondaries:

    host1: shard1 (repset1), shard2 (repset2)
    host2: shard1 (repset1), shard2 (repset2)

    There's also a 3rd host that acts as arbiter. I have 50 threads writing randomly to both shards (using a hash) via mongos, with the REPLICA_SAFE WriteConcern set on each insert. The questions:

    1. mongostat displays about 90% locked for both shards on host1 and about 1% locked on host2. Since I use REPLICA_SAFE, which supposedly writes to both servers, shouldn't the lock percentages be the same?
    2. mongostat reports qr=30 for both shards on host1, and qw=0 always. Since I perform only writes, how is this possible? Moreover, on host2 all queues are reported as 0.
    3. Faults are about the same on all shards/hosts (around 80).
    4. netIn/netOut on the secondaries (host2) is always about 200 bytes/sec. Too low.
    5. mongos has 53 connections, host1's shards have 71 and 71, and host2's shards have 9 and 8. How is this?

    Please answer whatever you can. Thanks!
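
    For reference, here is a minimal sketch (2.x Java driver; host, database, and collection names are placeholders) of setting this write concern per insert. Note that REPLICAS_SAFE only makes the driver wait until a secondary has acknowledged the write; the secondary still applies the operation by replaying the oplog rather than under the same lock as the primary, which is one plausible reason the lock percentages on the two hosts differ.

        import com.mongodb.BasicDBObject;
        import com.mongodb.DBCollection;
        import com.mongodb.Mongo;
        import com.mongodb.WriteConcern;

        public class ReplicaSafeInsert {
            public static void main(String[] args) throws Exception {
                // Connect through mongos (placeholder host/port).
                Mongo mongo = new Mongo("mongos-host", 27017);
                DBCollection coll = mongo.getDB("test").getCollection("docs");

                // Wait for at least one secondary to acknowledge each insert.
                // This is an acknowledgement guarantee, not a synchronous write
                // on the secondary, so lock behaviour can differ between hosts.
                coll.insert(new BasicDBObject("key", "value"), WriteConcern.REPLICAS_SAFE);

                mongo.close();
            }
        }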

    Read the article

  • Grub Installation Failed: Fatal Error ... now what do I do?

    - by eklavya
    I know there are some threads that touch on this, but I feel I have done something uniquely stupid, hence the post and plea for help. I am a beginner at Linux. I have a PC with an HDD (hard disk drive) and an SSD (solid state drive); it was running Linux Mint.

    /dev/sda1 - HDD partition 1 - 2 TB (mounted as /home)
    /dev/sda2 - HDD partition 2 - 1 TB (separate backup drive; I was backing up files to this)
    /dev/sdb1 - SSD partition 1 - 100 GB (OS)
    /dev/sdb2 - SSD partition 2 - 20 GB (swap)

    The operating system was Linux Mint, installed on /dev/sdb1, i.e. the solid state drive. I had partitioned sda into 2 TB and 1 TB and presented the 2 TB partition as /home to the OS. Anyway, last night I decided to make a return to Ubuntu via the path of elementary OS. Everything went fine with the install until it stated that GRUB installation failed and that this was a fatal error (no kidding, I said). Now I am stuck. I have definitely done something wrong and don't know what it is. My biggest pain is the files on /dev/sda2; I want to save these before I try something drastic like wiping /dev/sda completely. So I have the following questions:

    1. Can I use a live CD/USB to save these files? I can see /dev/sda2 but was unable to access the files from the live CD.
    2. Last but not least, how do I fix the main issue here: why could the OS not install GRUB?
    2b. Why is my SSD /dev/sdb and not /dev/sda? Does that have something to do with the fact that my master boot record sits on the HDD (/dev/sda) and not on /dev/sdb?

    Read the article

  • "Synchronizing" files between local and remote server using Git

    - by ConcreteVitamin
    My intended goal: I maintain some files on my local computer, and I also share them with others by putting them on my website. In the past I did this by manually uploading all the files over FTP every time I made modifications. Now I am wondering if I can use Git to achieve this (by "pushing" the local files to my website's server). My server is hosted by Dreamhost.

    First attempt: I followed this tutorial. I pushed my local files to my GitHub repo and ssh'd into my Dreamhost server to clone --bare from the GitHub repo. But I found that git did not transfer my files, so I abandoned the tutorial.

    Second attempt: I ssh'd into my Dreamhost server and cloned directly from GitHub. My files were all transferred to the server. Then, on my local computer, I ran git remote add dreamhost ssh://[email protected]/~/my-project. Then I added some files, committed, and ran git push dreamhost master, and a bunch of errors appeared: http://geotakucovi.com/gitError.jpg

    As a newbie Git user, I must have missed something. Please help!

    Read the article

  • How to keep multiple servers in sync, file-wise?

    - by GForceSys
    I'm currently managing a cluster of PHP-FPM servers, all of which tend to get out of sync with each other. The application I'm running on top of the app servers (Magento) allows admins to modify various files on the system, but now that the site is in a clustered setup, modifying a file only modifies it on a single instance (one of the app servers) of the various machines in the cluster. Is there an open-source application for Linux that would allow me to keep all of these servers in sync? I have no problem with creating a small VM instance that can listen for changes from the machines to sync. In theory, the perfect application would have small clients running on each machine to be synced, which would talk to a master server that decides how/what to sync from each machine. I have already examined the possibility of running a centralized file server, but unfortunately my app servers are spread out between EC2 and physical machines, which makes this unfeasible. As there are multiple app servers (some of which are created dynamically depending on the load of the site), simply setting up an rsync cron job is not efficient: the cron job would have to be modified on each machine to send files to every other machine in the cluster, and that would just be a whole bunch of unnecessary data transfers and SSH connections.
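
    To make the "small client listening for changes" idea concrete, here is a minimal Java sketch using java.nio.file.WatchService (the watched directory is a placeholder, a single directory level is watched for brevity, and reporting to the master server is left as a stub); a real client would send the changed paths to the master server, which would decide what to sync where.

        import java.nio.file.FileSystems;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.nio.file.StandardWatchEventKinds;
        import java.nio.file.WatchEvent;
        import java.nio.file.WatchKey;
        import java.nio.file.WatchService;

        public class ChangeWatcher {
            public static void main(String[] args) throws Exception {
                // Directory to watch; a real client would watch the Magento root.
                Path dir = Paths.get("/var/www/magento");

                WatchService watcher = FileSystems.getDefault().newWatchService();
                dir.register(watcher,
                        StandardWatchEventKinds.ENTRY_CREATE,
                        StandardWatchEventKinds.ENTRY_MODIFY,
                        StandardWatchEventKinds.ENTRY_DELETE);

                while (true) {
                    WatchKey key = watcher.take(); // blocks until something changes
                    for (WatchEvent<?> event : key.pollEvents()) {
                        if (event.kind() == StandardWatchEventKinds.OVERFLOW) {
                            continue; // events were dropped; a real client would rescan
                        }
                        // Stub: a real client would report this path to the master
                        // server, which decides how/what to sync to the other machines.
                        System.out.println(event.kind() + " " + dir.resolve((Path) event.context()));
                    }
                    if (!key.reset()) {
                        break; // directory is no longer accessible
                    }
                }
            }
        }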

    Read the article

  • 5-year-old server upgrade

    - by rizzo0917
    I am looking to upgrade a server for a web app. Currently the application is running very sluggishly. We've made some adjustments to MySQL (that's another issue in itself) and arranged for the heaviest queries to run against a copy of the database on another server that we keep as a backup, but this will not last much longer, so we are looking to upgrade.

    The current server has four Intel XEON 2.00GHz CPUs and 1 GB of RAM. The database is 442.5 MiB, with about 1,743,808 records. There are two parts to the program: side a inserts and updates most of the data; side b reads the data and does some minor updates. Currently our biggest day for side a is 800 users (of 40,000 users all year) inputting data into the system. Our side b load is currently unknown, but we have a total of 1,000 clients. The system will most likely cap out at 5,000 side-b clients and about 300,000 side-a users a year. The current database is 5 years old, so we can expect it to grow pretty rapidly, possibly doubling each year (we can most likely archive older records if it comes to that).

    With that being said, should we get a server for each side of the app: side a being the master and side b being the slave, with any updates made on side b routed to side a? So the question is, should I get two of these or one?

    2 x Intel Nehalem Xeon E5520 2.26GHz (8 cores)
    12GB DDR3 memory
    500GB SATA II HDD
    100Mbps port speed

    And naturally I would need a redundant backup, so it could potentially be four of them.
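
    As a rough sketch of the proposed master/slave split (connection URLs, credentials, and the table are placeholder assumptions, and the MySQL Connector/J driver is assumed on the classpath), the application would send all writes to side a and serve reads from side b:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class ReadWriteSplit {
            public static void main(String[] args) throws Exception {
                // Placeholder URLs: side a (master) takes writes, side b (slave) serves reads.
                Connection master = DriverManager.getConnection(
                        "jdbc:mysql://side-a-host/app", "user", "password");
                Connection slave = DriverManager.getConnection(
                        "jdbc:mysql://side-b-host/app", "user", "password");

                // Writes (inserts/updates) always go to the master...
                try (Statement write = master.createStatement()) {
                    write.executeUpdate("UPDATE records SET value = 1 WHERE id = 42");
                }

                // ...while the read-heavy side b traffic queries the slave, which
                // replication keeps up to date from the master.
                try (Statement read = slave.createStatement();
                     ResultSet rs = read.executeQuery("SELECT value FROM records WHERE id = 42")) {
                    while (rs.next()) {
                        System.out.println(rs.getInt("value"));
                    }
                }
            }
        }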

    Read the article

  • What apps can you only get on Mac and not Windows?

    - by ytk
    What apps do you absolutely have to use a Mac to run, with no decent Windows PC equivalent? This is not a religious war; please be specific and practical. It doesn't have to be a direct 1-to-1 comparison, but overall usefulness for the task. I'll start off with a few:

    Keynote -- the animations are quite cool and not available in PowerPoint.
    iTunes photo sync -- on Windows it makes a copy of all the photos you want to sync, effectively doubling the space taken up by your photos. On a Mac it's easier, as long as you use iPhoto.
    Keychain -- a centralized password manager tied to the OS. The benefit of this is that you don't have to set a master password (like Firefox) that you need to enter when starting the browser, and it doesn't reveal your passwords (unlike Chrome, which makes no effort to hide the passwords you have stored in Options).
    Time Machine -- zero-configuration backup in the background, with an easy interface for restoring a file, or even just a contact in the address book.
    Text-to-speech -- works in any program, and sounds better than the Windows computer voice.
    Quick Look -- press the space bar to preview a file. Windows 95 had Quick View, but it was removed.

    Read the article

  • How to chain GRUB2 for Ubuntu 10.04 from Truecrypt & its bootloader (multi boot alongside Windows XP partition)?

    - by Rob
    I want TrueCrypt to ask for the Windows XP password as usual, but with the standard [ESC] option; on selecting that, i.e. via the Escape key, I want it to find the GRUB for the (unencrypted) Ubuntu install.

    I installed Windows XP on the 120GB hard drive of a Toshiba NB100 netbook, then partitioned to make room for Ubuntu 10.04 and installed it after the Windows XP install. When I encrypt Windows XP, TrueCrypt will overwrite the GRUB entry in the master boot record (MBR), I believe (?), and I won't be able to choose between XP and Ubuntu anymore. So I need to restore it. I've searched fairly extensively for answers on the Ubuntu forums and elsewhere, but have not yet found a complete answer that covers all eventualities, scenarios, and error messages; otherwise they talk of legacy GRUB and not GRUB2. Ubuntu 10.04 uses GRUB2.

    My setup (partitions):

    Windows XP, NTFS (to be encrypted with TrueCrypt), 40GB
    /boot (ext4, 1GB)
    Ubuntu swap, 4GB
    Ubuntu / (root) - main filesystem, 20GB
    NTFS share, 55GB

    I know that the TrueCrypt boot loader replaces GRUB at boot, because I've already tried it on another laptop. I want the boot loader screen to look something like the usual:

    TrueCrypt
    Enter password: (or [ESC] to skip)

    where the password is for Windows XP, and pressing [ESC] makes it find the Ubuntu GRUB to boot from. Thanks in advance for your help. The key parts of the problem are how to instruct TrueCrypt what to do when the Escape key is pressed, and how GRUB/Ubuntu can be made visible to the TrueCrypt bootloader so it can be found. This is also known as chaining.

    Read the article

  • Automatic switching of network card when VM is moved

    - by spock
    I have two hosts in a pool, and I used to be able to move VMs around and they would start without any problem. But after I played around with some network settings (I don't remember which), I started getting the message "This VM needs storage that cannot be seen from that server". As you can tell, I am a beginner with XenServer. Here is the very simple environment: two host servers, each with its own local hard disk and network card; one is the pool master.

    Problem: I power off a VM and move it from one server to the other, or clone one VM to the other server. It used to be able to start up right away. Now I need to delete the network that does not belong to the server, and then it will start; otherwise the above error message pops up. The two networks (one for each network card in each host) appear in the Networking tab of the VM, as well as in each host's Networking tab. I googled, but all I found was advice to empty the DVD drive, which is not the problem here. Thanks in advance!

    Read the article

  • How to securely control access to a backend key server?

    - by andy
    I need to securely encrypt data in my database so that if the database is dumped, hackers are unable to decrypt the data. I'm planning on creating a simple key server on a different machine, and allowing the DB server access to it (restricted by IP address on the key server to permit the DB server). The key server would contain the key required to encrypt/decrypt data. However, if a hacker were able to get a shell on the DB server, they could request the key from the key server and therefore decrypt the data in the database. How could I prevent this (assuming all firewalls are in place, DB is not connected directly to the internet, etc)? i.e. is there some method I could use that could secure a request from the DB server to the key server so that even if a hacker had a shell on the DB server they'd be unable to make those same requests? Signed requests from the DB server could make issuing these requests less trivial - I suppose that'd help increase the amount of time it'd take to compromise the key server, something a hacker probably wouldn't have much of. As far as I can see, if someone can get a shell on the DB server everything's lost anyway. This could be mitigated by using one key per data item in the DB so at least there's not a single "master" key, but multiple keys that the hacker would need to access. What would be a secure method of ensuring requests from the DB server to the key server were authentic and could be trusted?
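
    To make the "signed requests" idea concrete, here is a minimal Java sketch of HMAC-signing a request with a shared secret using javax.crypto (the key material and request format are illustrative assumptions, and note the caveat above: an attacker with a shell on the DB server could read this secret too). The key server would recompute the HMAC over the same payload and reject the request if the signatures don't match.

        import javax.crypto.Mac;
        import javax.crypto.spec.SecretKeySpec;
        import java.nio.charset.StandardCharsets;
        import java.util.Base64;

        public class SignedRequest {
            // Shared secret known to the DB server and the key server (illustrative only).
            private static final byte[] SECRET = "change-me".getBytes(StandardCharsets.UTF_8);

            public static String sign(String request) throws Exception {
                Mac mac = Mac.getInstance("HmacSHA256");
                mac.init(new SecretKeySpec(SECRET, "HmacSHA256"));
                return Base64.getEncoder().encodeToString(
                        mac.doFinal(request.getBytes(StandardCharsets.UTF_8)));
            }

            public static void main(String[] args) throws Exception {
                // Including a timestamp in the payload limits replay of captured requests.
                String payload = "GET /key?item=1234&ts=1700000000";
                System.out.println(payload + " sig=" + sign(payload));
            }
        }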

    Read the article

  • Git workflow for two tight-knit projects

    - by Pioul
    Two very similar projects: I'm maintaining an online Markdown editor, using Git as its RCS (and, incidentally, made available on GitHub). From this web app I've created a Chrome app: the code is the same, aside from some Chrome technicalities. I care about open-sourcing both projects. Still, the Chrome app's code being the same as the web app's except for some dull details, I initially chose to (1) not publish the Chrome app on GitHub, and (2) not use Git to manage its code. Instead, I would manually review the web app's commits, then replicate the few changes in the Chrome app.

    … slightly drifting apart: However, I've decided to add a feature to the Chrome app only. So, even though both codebases will remain broadly similar, they'll diverge enough to make me reconsider the rationale behind my initial decision not to version control or share the Chrome app's source code. Since I'm now willing to use Git to version control both apps, and I want to share both on GitHub, how should I go about it? Should I use two different repositories, or one repo with two long-running branches? What would be the pros and cons of each approach in this context? What would be the easiest/fastest way to regularly "import" commits from the web app to the Chrome app, since the web app is going to remain the master branch? Is cherry-picking the only solution?
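
    On the cherry-picking point, here is a minimal JGit sketch (the repository path, remote name, and revision string are placeholder assumptions) of importing a single web-app commit into the Chrome app's working copy; the same operation is usually done with git cherry-pick on the command line.

        import org.eclipse.jgit.api.Git;
        import org.eclipse.jgit.lib.ObjectId;

        import java.io.File;

        public class ImportCommit {
            public static void main(String[] args) throws Exception {
                // Open the Chrome app's repository (placeholder path).
                try (Git git = Git.open(new File("/path/to/chrome-app"))) {
                    // Resolve a commit on the previously fetched web-app branch
                    // (placeholder revision string; resolve() returns null if missing)...
                    ObjectId commit = git.getRepository().resolve("webapp/master");
                    // ...and cherry-pick it onto the current (Chrome app) branch.
                    git.cherryPick().include(commit).call();
                }
            }
        }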

    Read the article

  • MySQL Cluster problem in Ubuntu

    - by Firman
    I have a problem installing and configuring MySQL Cluster, running on Ubuntu 10.10. This is the configuration for the cluster management node:

    [NDBD DEFAULT]
    NoOfReplicas=2
    DataMemory=10MB
    IndexMemory=25MB
    MaxNoOfTables=256
    MaxNoOfOrderedIndexes=256
    MaxNoOfUniqueHashIndexes=128
    [MYSQLD DEFAULT]
    [NDB_MGMD DEFAULT]
    [TCP DEFAULT]
    [NDB_MGMD]
    Id=1 # the NDB Management Node (this one)
    HostName=192.168.10.101
    [NDBD]
    Id=2 # the first NDB Data Node
    HostName=192.168.10.11
    DataDir=/var/lib/mysql-cluster
    [NDBD]
    Id=3 # the second NDB Data Node
    HostName=192.168.10.12
    DataDir=/var/lib/mysql-cluster
    [MYSQLD]
    [MYSQLD]

    and this is the configuration for both data nodes:

    [mysqld]
    ndbcluster
    ndb-connectstring=192.168.10.101 # the IP of the MANAGEMENT (THIRD) SERVER
    [mysql_cluster]
    ndb-connectstring=192.168.10.101 # the IP of the MANAGEMENT (THIRD) SERVER

    After starting all the nodes and the management node, I ran ndb_mgm and typed the 'show' command, and this appeared:

    ndb_mgm> show
    Connected to Management Server at: localhost:1186
    Cluster Configuration
    ---------------------
    [ndbd(NDB)] 2 node(s)
    id=2 @192.168.10.11 (mysql-5.1.39 ndb-7.0.9, Nodegroup: 0, Master)
    id=3 @192.168.10.12 (mysql-5.1.39 ndb-7.0.9, Nodegroup: 0)
    [ndb_mgmd(MGM)] 1 node(s)
    id=1 @192.168.10.101 (mysql-5.1.39 ndb-7.0.9)
    [mysqld(API)] 1 node(s)
    id=4 (not connected, accepting connect from 192.168.10.101)

    Look at the last two lines: they are not what http://dev.mysql.com/tech-resources/articles/mysql-cluster-for-two-servers.html shows they should look like (see point 4). Has anyone ever had this problem?

    Read the article

  • How to manage enterprise network of Linux machines?

    - by killy9999
    I work at the university. In my institute we have six computer laboratories used for teaching. Each lab has almost 20 computers, which gives over 100 machines total. The computers run either the Windows XP or Windows 7 Enterprise operating system. We use Symantec Ghost to manage all the computers: each computer has a Ghost client installed, which allows it to be controlled over the network. Every six months we restore a master image onto one of the computers in a lab, update that image, and distribute it over the network to all computers in the laboratory. Thanks to the Ghost client this is done automatically with just a few clicks.

    Recently I suggested that it would be good to have Linux installed in the laboratories. The administrators were concerned that we would not be able to manage that many computers if each had to be updated manually. The question is: how do you manage such a large network of Linux machines in an automated way?

    To make the description of our network more complete, I'll add that all students have their accounts (a few thousand users) on a central server, accessed via LDAP. To use a computer in a laboratory, each student has to log in with his own account.

    Read the article

  • Adding splash screen code to my package

    - by Youssef
    Please i need support to added splash screen code to my package /* * T24_Transformer_FormView.java */ package t24_transformer_form; import org.jdesktop.application.Action; import org.jdesktop.application.ResourceMap; import org.jdesktop.application.SingleFrameApplication; import org.jdesktop.application.FrameView; import org.jdesktop.application.TaskMonitor; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import javax.swing.filechooser.FileNameExtensionFilter; import javax.swing.filechooser.FileFilter; // old T24 Transformer imports import java.io.File; import java.io.FileWriter; import java.io.StringWriter; import java.text.SimpleDateFormat; import java.util.ArrayList; import java.util.Date; import java.util.HashMap; import java.util.Iterator; //import java.util.Properties; import java.util.StringTokenizer; import javax.swing.; import javax.xml.parsers.DocumentBuilder; import javax.xml.parsers.DocumentBuilderFactory; import javax.xml.transform.Result; import javax.xml.transform.Source; import javax.xml.transform.Transformer; import javax.xml.transform.TransformerFactory; import javax.xml.transform.dom.DOMSource; import javax.xml.transform.stream.StreamResult; import org.apache.log4j.Logger; import org.apache.log4j.PropertyConfigurator; import org.w3c.dom.Document; import org.w3c.dom.DocumentFragment; import org.w3c.dom.Element; import org.w3c.dom.Node; import org.w3c.dom.NodeList; import com.ejada.alinma.edh.xsdtransform.util.ConfigKeys; import com.ejada.alinma.edh.xsdtransform.util.XSDElement; import com.sun.org.apache.xml.internal.serialize.OutputFormat; import com.sun.org.apache.xml.internal.serialize.XMLSerializer; /* * The application's main frame. */ public class T24_Transformer_FormView extends FrameView { /**} * static holders for application-level utilities * { */ //private static Properties appProps; private static Logger appLogger; /** * */ private StringBuffer columnsCSV = null; private ArrayList<String> singleValueTableColumns = null; private HashMap<String, String> multiValueTablesSQL = null; private HashMap<Object, HashMap<String, Object>> groupAttrs = null; private ArrayList<XSDElement> xsdElementsList = null; /** * initialization */ private void init() /*throws Exception*/ { // init the properties object //FileReader in = new FileReader(appConfigPropsPath); //appProps.load(in); // log4j.properties constant String PROP_LOG4J_CONFIG_FILE = "log4j.properties"; // init the logger if ((PROP_LOG4J_CONFIG_FILE != null) && (!PROP_LOG4J_CONFIG_FILE.equals(""))) { PropertyConfigurator.configure(PROP_LOG4J_CONFIG_FILE); if (appLogger == null) { appLogger = Logger.getLogger(T24_Transformer_FormView.class.getName()); } appLogger.info("Application initialization successful."); } columnsCSV = new StringBuffer(ConfigKeys.FIELD_TAG + "," + ConfigKeys.FIELD_NUMBER + "," + ConfigKeys.FIELD_DATA_TYPE + "," + ConfigKeys.FIELD_FMT + "," + ConfigKeys.FIELD_LEN + "," + ConfigKeys.FIELD_INPUT_LEN + "," + ConfigKeys.FIELD_GROUP_NUMBER + "," + ConfigKeys.FIELD_MV_GROUP_NUMBER + "," + ConfigKeys.FIELD_SHORT_NAME + "," + ConfigKeys.FIELD_NAME + "," + ConfigKeys.FIELD_COLUMN_NAME + "," + ConfigKeys.FIELD_GROUP_NAME + "," + ConfigKeys.FIELD_MV_GROUP_NAME + "," + ConfigKeys.FIELD_JUSTIFICATION + "," + ConfigKeys.FIELD_TYPE + "," + ConfigKeys.FIELD_SINGLE_OR_MULTI + System.getProperty("line.separator")); singleValueTableColumns = new ArrayList<String>(); singleValueTableColumns.add(ConfigKeys.COLUMN_XPK_ROW + ConfigKeys.DELIMITER_COLUMN_TYPE + 
ConfigKeys.DATA_TYPE_XSD_NUMERIC); multiValueTablesSQL = new HashMap<String, String>(); groupAttrs = new HashMap<Object, HashMap<String, Object>>(); xsdElementsList = new ArrayList<XSDElement>(); } /** * initialize the <code>DocumentBuilder</code> and read the XSD file * * @param docPath * @return the <code>Document</code> object representing the read XSD file */ private Document retrieveDoc(String docPath) { Document xsdDoc = null; File file = new File(docPath); try { DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder(); xsdDoc = builder.parse(file); } catch (Exception e) { appLogger.error(e.getMessage()); } return xsdDoc; } /** * perform the iteration/modification on the document * iterate to the level which contains all the elements (Single-Value, and Groups) and start processing each * * @param xsdDoc * @return */ private Document processDoc(Document xsdDoc) { ArrayList<Object> newElementsList = new ArrayList<Object>(); HashMap<String, Object> docAttrMap = new HashMap<String, Object>(); Element sequenceElement = null; Element schemaElement = null; // get document's root element NodeList nodes = xsdDoc.getChildNodes(); for (int i = 0; i < nodes.getLength(); i++) { if (ConfigKeys.TAG_SCHEMA.equals(nodes.item(i).getNodeName())) { schemaElement = (Element) nodes.item(i); break; } } // process the document (change single-value elements, collect list of new elements to be added) for (int i1 = 0; i1 < schemaElement.getChildNodes().getLength(); i1++) { Node childLevel1 = (Node) schemaElement.getChildNodes().item(i1); // <ComplexType> element if (childLevel1.getNodeName().equals(ConfigKeys.TAG_COMPLEX_TYPE)) { // first, get the main attributes and put it in the csv file for (int i6 = 0; i6 < childLevel1.getChildNodes().getLength(); i6++) { Node child6 = childLevel1.getChildNodes().item(i6); if (ConfigKeys.TAG_ATTRIBUTE.equals(child6.getNodeName())) { if (child6.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME) != null) { String attrName = child6.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME).getNodeValue(); if (((Element) child6).getElementsByTagName(ConfigKeys.TAG_SIMPLE_TYPE).getLength() != 0) { Node simpleTypeElement = ((Element) child6).getElementsByTagName(ConfigKeys.TAG_SIMPLE_TYPE) .item(0); if (((Element) simpleTypeElement).getElementsByTagName(ConfigKeys.TAG_RESTRICTION).getLength() != 0) { Node restrictionElement = ((Element) simpleTypeElement).getElementsByTagName( ConfigKeys.TAG_RESTRICTION).item(0); if (((Element) restrictionElement).getElementsByTagName(ConfigKeys.TAG_MAX_LENGTH).getLength() != 0) { Node maxLengthElement = ((Element) restrictionElement).getElementsByTagName( ConfigKeys.TAG_MAX_LENGTH).item(0); HashMap<String, String> elementProperties = new HashMap<String, String>(); elementProperties.put(ConfigKeys.FIELD_TAG, attrName); elementProperties.put(ConfigKeys.FIELD_NUMBER, "0"); elementProperties.put(ConfigKeys.FIELD_DATA_TYPE, ConfigKeys.DATA_TYPE_XSD_STRING); elementProperties.put(ConfigKeys.FIELD_FMT, ""); elementProperties.put(ConfigKeys.FIELD_NAME, attrName); elementProperties.put(ConfigKeys.FIELD_SHORT_NAME, attrName); elementProperties.put(ConfigKeys.FIELD_COLUMN_NAME, attrName); elementProperties.put(ConfigKeys.FIELD_SINGLE_OR_MULTI, "S"); elementProperties.put(ConfigKeys.FIELD_LEN, maxLengthElement.getAttributes().getNamedItem( ConfigKeys.ATTR_VALUE).getNodeValue()); elementProperties.put(ConfigKeys.FIELD_INPUT_LEN, maxLengthElement.getAttributes() .getNamedItem(ConfigKeys.ATTR_VALUE).getNodeValue()); 
constructElementRow(elementProperties); // add the attribute as a column in the single-value table singleValueTableColumns.add(attrName + ConfigKeys.DELIMITER_COLUMN_TYPE + ConfigKeys.DATA_TYPE_XSD_STRING + ConfigKeys.DELIMITER_COLUMN_TYPE + maxLengthElement.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE).getNodeValue()); // add the attribute as an element in the elements list addToElementsList(attrName, attrName); appLogger.debug("added attribute: " + attrName); } } } } } } // now, loop on the elements and process them for (int i2 = 0; i2 < childLevel1.getChildNodes().getLength(); i2++) { Node childLevel2 = (Node) childLevel1.getChildNodes().item(i2); // <Sequence> element if (childLevel2.getNodeName().equals(ConfigKeys.TAG_SEQUENCE)) { sequenceElement = (Element) childLevel2; for (int i3 = 0; i3 < childLevel2.getChildNodes().getLength(); i3++) { Node childLevel3 = (Node) childLevel2.getChildNodes().item(i3); // <Element> element if (childLevel3.getNodeName().equals(ConfigKeys.TAG_ELEMENT)) { // check if single element or group if (isGroup(childLevel3)) { processGroup(childLevel3, true, null, null, docAttrMap, xsdDoc, newElementsList); // insert a new comment node with the contents of the group tag sequenceElement.insertBefore(xsdDoc.createComment(serialize(childLevel3)), childLevel3); // remove the group tag sequenceElement.removeChild(childLevel3); } else { processElement(childLevel3); } } } } } } } // add new elements // this step should be after finishing processing the whole document. when you add new elements to the document // while you are working on it, those new elements will be included in the processing. We don't need that! for (int i = 0; i < newElementsList.size(); i++) { sequenceElement.appendChild((Element) newElementsList.get(i)); } // write the new required attributes to the schema element Iterator<String> attrIter = docAttrMap.keySet().iterator(); while(attrIter.hasNext()) { Element attr = (Element) docAttrMap.get(attrIter.next()); Element newAttrElement = xsdDoc.createElement(ConfigKeys.TAG_ATTRIBUTE); appLogger.debug("appending attr. [" + attr.getAttribute(ConfigKeys.ATTR_NAME) + "]..."); newAttrElement.setAttribute(ConfigKeys.ATTR_NAME, attr.getAttribute(ConfigKeys.ATTR_NAME)); newAttrElement.setAttribute(ConfigKeys.ATTR_TYPE, attr.getAttribute(ConfigKeys.ATTR_TYPE)); schemaElement.appendChild(newAttrElement); } return xsdDoc; } /** * add a new <code>XSDElement</code> with the given <code>name</code> and <code>businessName</code> to * the elements list * * @param name * @param businessName */ private void addToElementsList(String name, String businessName) { xsdElementsList.add(new XSDElement(name, businessName)); } /** * add the given <code>XSDElement</code> to the elements list * * @param element */ private void addToElementsList(XSDElement element) { xsdElementsList.add(element); } /** * check if the <code>element</code> sent is single-value element or group * element. the comparison depends on the children of the element. 
if found one of type * <code>ComplexType</code> then it's a group element, and if of type * <code>SimpleType</code> then it's a single-value element * * @param element * @return <code>true</code> if the element is a group element, * <code>false</code> otherwise */ private boolean isGroup(Node element) { for (int i = 0; i < element.getChildNodes().getLength(); i++) { Node child = (Node) element.getChildNodes().item(i); if (child.getNodeName().equals(ConfigKeys.TAG_COMPLEX_TYPE)) { // found a ComplexType child (Group element) return true; } else if (child.getNodeName().equals(ConfigKeys.TAG_SIMPLE_TYPE)) { // found a SimpleType child (Single-Value element) return false; } } return false; /* String attrName = null; if (element.getAttributes() != null) { Node attribute = element.getAttributes().getNamedItem(XSDTransformer.ATTR_NAME); if (attribute != null) { attrName = attribute.getNodeValue(); } } if (attrName.startsWith("g")) { // group element return true; } else { // single element return false; } */ } /** * process a group element. recursively, process groups till no more group elements are found * * @param element * @param isFirstLevelGroup * @param attrMap * @param docAttrMap * @param xsdDoc * @param newElementsList */ private void processGroup(Node element, boolean isFirstLevelGroup, Node parentGroup, XSDElement parentGroupElement, HashMap<String, Object> docAttrMap, Document xsdDoc, ArrayList<Object> newElementsList) { String elementName = null; HashMap<String, Object> groupAttrMap = new HashMap<String, Object>(); HashMap<String, Object> parentGroupAttrMap = new HashMap<String, Object>(); XSDElement groupElement = null; if (element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME) != null) { elementName = element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME).getNodeValue(); } appLogger.debug("processing group [" + elementName + "]..."); groupElement = new XSDElement(elementName, elementName); // get the attributes if a non-first-level-group // attributes are: groups's own attributes + parent group's attributes if (!isFirstLevelGroup) { // get the current element (group) attributes for (int i1 = 0; i1 < element.getChildNodes().getLength(); i1++) { if (ConfigKeys.TAG_COMPLEX_TYPE.equals(element.getChildNodes().item(i1).getNodeName())) { Node complexTypeNode = element.getChildNodes().item(i1); for (int i2 = 0; i2 < complexTypeNode.getChildNodes().getLength(); i2++) { if (ConfigKeys.TAG_ATTRIBUTE.equals(complexTypeNode.getChildNodes().item(i2).getNodeName())) { appLogger.debug("add group attr: " + ((Element) complexTypeNode.getChildNodes().item(i2)).getAttribute(ConfigKeys.ATTR_NAME)); groupAttrMap.put(((Element) complexTypeNode.getChildNodes().item(i2)).getAttribute(ConfigKeys.ATTR_NAME), complexTypeNode.getChildNodes().item(i2)); docAttrMap.put(((Element) complexTypeNode.getChildNodes().item(i2)).getAttribute(ConfigKeys.ATTR_NAME), complexTypeNode.getChildNodes().item(i2)); } } } } // now, get the parent's attributes parentGroupAttrMap = groupAttrs.get(parentGroup); if (parentGroupAttrMap != null) { Iterator<String> iter = parentGroupAttrMap.keySet().iterator(); while (iter.hasNext()) { String attrName = iter.next(); groupAttrMap.put(attrName, parentGroupAttrMap.get(attrName)); } } // add the attributes to the group element that will be added to the elements list Iterator<String> itr = groupAttrMap.keySet().iterator(); while(itr.hasNext()) { groupElement.addAttribute(itr.next()); } // put the attributes in the attributes map groupAttrs.put(element, groupAttrMap); } for (int i = 0; 
i < element.getChildNodes().getLength(); i++) { Node childLevel1 = (Node) element.getChildNodes().item(i); if (childLevel1.getNodeName().equals(ConfigKeys.TAG_COMPLEX_TYPE)) { for (int j = 0; j < childLevel1.getChildNodes().getLength(); j++) { Node childLevel2 = (Node) childLevel1.getChildNodes().item(j); if (childLevel2.getNodeName().equals(ConfigKeys.TAG_SEQUENCE)) { for (int k = 0; k < childLevel2.getChildNodes().getLength(); k++) { Node childLevel3 = (Node) childLevel2.getChildNodes().item(k); if (childLevel3.getNodeName().equals(ConfigKeys.TAG_ELEMENT)) { // check if single element or group if (isGroup(childLevel3)) { // another group element.. // unfortunately, a recursion is // needed here!!! :-( processGroup(childLevel3, false, element, groupElement, docAttrMap, xsdDoc, newElementsList); } else { // reached a single-value element.. copy it under the // main sequence and apply the name<>shorname replacement processGroupElement(childLevel3, element, groupElement, isFirstLevelGroup, xsdDoc, newElementsList); } } } } } } } if (isFirstLevelGroup) { addToElementsList(groupElement); } else { parentGroupElement.addChild(groupElement); } appLogger.debug("finished processing group [" + elementName + "]."); } /** * process the sent <code>element</code> to extract/modify required * information: * 1. replace the <code>name</code> attribute with the <code>shortname</code>. * * @param element */ private void processElement(Node element) { String fieldShortName = null; String fieldColumnName = null; String fieldDataType = null; String fieldFormat = null; String fieldInputLength = null; String elementName = null; HashMap<String, String> elementProperties = new HashMap<String, String>(); if (element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME) != null) { elementName = element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME).getNodeValue(); } appLogger.debug("processing element [" + elementName + "]..."); for (int i = 0; i < element.getChildNodes().getLength(); i++) { Node childLevel1 = (Node) element.getChildNodes().item(i); if (childLevel1.getNodeName().equals(ConfigKeys.TAG_ANNOTATION)) { for (int j = 0; j < childLevel1.getChildNodes().getLength(); j++) { Node childLevel2 = (Node) childLevel1.getChildNodes().item(j); if (childLevel2.getNodeName().equals(ConfigKeys.TAG_APP_INFO)) { for (int k = 0; k < childLevel2.getChildNodes().getLength(); k++) { Node childLevel3 = (Node) childLevel2.getChildNodes().item(k); if (childLevel3.getNodeName().equals(ConfigKeys.TAG_HAS_PROPERTY)) { if (childLevel3.getAttributes() != null) { String attrName = null; Node attribute = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME); if (attribute != null) { attrName = attribute.getNodeValue(); elementProperties.put(attrName, childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue()); if (attrName.equals(ConfigKeys.FIELD_SHORT_NAME)) { fieldShortName = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_COLUMN_NAME)) { fieldColumnName = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_DATA_TYPE)) { fieldDataType = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_FMT)) { fieldFormat = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_INPUT_LEN)) { fieldInputLength = 
childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } } } } } } } } } // replace the name attribute with the shortname if (element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME) != null) { element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME).setNodeValue(fieldShortName); } elementProperties.put(ConfigKeys.FIELD_SINGLE_OR_MULTI, "S"); constructElementRow(elementProperties); singleValueTableColumns.add(fieldShortName + ConfigKeys.DELIMITER_COLUMN_TYPE + fieldDataType + fieldFormat + ConfigKeys.DELIMITER_COLUMN_TYPE + fieldInputLength); // add the element to elements list addToElementsList(fieldShortName, fieldColumnName); appLogger.debug("finished processing element [" + elementName + "]."); } /** * process the sent <code>element</code> to extract/modify required * information: * 1. copy the element under the main sequence * 2. replace the <code>name</code> attribute with the <code>shortname</code>. * 3. add the attributes of the parent groups (if non-first-level-group) * * @param element */ private void processGroupElement(Node element, Node parentGroup, XSDElement parentGroupElement, boolean isFirstLevelGroup, Document xsdDoc, ArrayList<Object> newElementsList) { String fieldShortName = null; String fieldColumnName = null; String fieldDataType = null; String fieldFormat = null; String fieldInputLength = null; String elementName = null; Element newElement = null; HashMap<String, String> elementProperties = new HashMap<String, String>(); ArrayList<String> tableColumns = new ArrayList<String>(); HashMap<String, Object> groupAttrMap = null; if (element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME) != null) { elementName = element.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME).getNodeValue(); } appLogger.debug("processing element [" + elementName + "]..."); // 1. copy the element newElement = (Element) element.cloneNode(true); newElement.setAttribute(ConfigKeys.ATTR_MAX_OCCURS, "unbounded"); // 2. if non-first-level-group, replace the element's SimpleType tag with a ComplexType tag if (!isFirstLevelGroup) { if (((Element) newElement).getElementsByTagName(ConfigKeys.TAG_SIMPLE_TYPE).getLength() != 0) { // there should be only one tag of SimpleType Node simpleTypeNode = ((Element) newElement).getElementsByTagName(ConfigKeys.TAG_SIMPLE_TYPE).item(0); // create the new ComplexType element Element complexTypeNode = xsdDoc.createElement(ConfigKeys.TAG_COMPLEX_TYPE); complexTypeNode.setAttribute(ConfigKeys.ATTR_MIXED, "true"); // get the list of attributes for the parent group groupAttrMap = groupAttrs.get(parentGroup); Iterator<String> attrIter = groupAttrMap.keySet().iterator(); while(attrIter.hasNext()) { Element attr = (Element) groupAttrMap.get(attrIter.next()); Element newAttrElement = xsdDoc.createElement(ConfigKeys.TAG_ATTRIBUTE); appLogger.debug("adding attr. [" + attr.getAttribute(ConfigKeys.ATTR_NAME) + "]..."); newAttrElement.setAttribute(ConfigKeys.ATTR_REF, attr.getAttribute(ConfigKeys.ATTR_NAME)); newAttrElement.setAttribute(ConfigKeys.ATTR_USE, "optional"); complexTypeNode.appendChild(newAttrElement); } // replace the old SimpleType node with the new ComplexType node newElement.replaceChild(complexTypeNode, simpleTypeNode); } } // 3. 
replace the name with the shortname in the new element for (int i = 0; i < newElement.getChildNodes().getLength(); i++) { Node childLevel1 = (Node) newElement.getChildNodes().item(i); if (childLevel1.getNodeName().equals(ConfigKeys.TAG_ANNOTATION)) { for (int j = 0; j < childLevel1.getChildNodes().getLength(); j++) { Node childLevel2 = (Node) childLevel1.getChildNodes().item(j); if (childLevel2.getNodeName().equals(ConfigKeys.TAG_APP_INFO)) { for (int k = 0; k < childLevel2.getChildNodes().getLength(); k++) { Node childLevel3 = (Node) childLevel2.getChildNodes().item(k); if (childLevel3.getNodeName().equals(ConfigKeys.TAG_HAS_PROPERTY)) { if (childLevel3.getAttributes() != null) { String attrName = null; Node attribute = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME); if (attribute != null) { attrName = attribute.getNodeValue(); elementProperties.put(attrName, childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue()); if (attrName.equals(ConfigKeys.FIELD_SHORT_NAME)) { fieldShortName = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_COLUMN_NAME)) { fieldColumnName = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_DATA_TYPE)) { fieldDataType = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_FMT)) { fieldFormat = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } else if (attrName.equals(ConfigKeys.FIELD_INPUT_LEN)) { fieldInputLength = childLevel3.getAttributes().getNamedItem(ConfigKeys.ATTR_VALUE) .getNodeValue(); } } } } } } } } } if (newElement.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME) != null) { newElement.getAttributes().getNamedItem(ConfigKeys.ATTR_NAME).setNodeValue(fieldShortName); } // 4. save the new element to be added to the sequence list newElementsList.add(newElement); elementProperties.put(ConfigKeys.FIELD_SINGLE_OR_MULTI, "M"); constructElementRow(elementProperties); // create the MULTI-VALUE table // 0. Primary Key tableColumns.add(ConfigKeys.COLUMN_XPK_ROW + ConfigKeys.DELIMITER_COLUMN_TYPE + ConfigKeys.DATA_TYPE_XSD_STRING + ConfigKeys.DELIMITER_COLUMN_TYPE + ConfigKeys.COLUMN_XPK_ROW_LENGTH); // 1. foreign key tableColumns.add(ConfigKeys.COLUMN_FK_ROW + ConfigKeys.DELIMITER_COLUMN_TYPE + ConfigKeys.DATA_TYPE_XSD_NUMERIC); // 2. field value tableColumns.add(fieldShortName + ConfigKeys.DELIMITER_COLUMN_TYPE + fieldDataType + fieldFormat + ConfigKeys.DELIMITER_COLUMN_TYPE + fieldInputLength); // 3. 
attributes if (groupAttrMap != null) { Iterator<String> attrIter = groupAttrMap.keySet().iterator(); while (attrIter.hasNext()) { Element attr = (Element) groupAttrMap.get(attrIter.next()); tableColumns.add(attr.getAttribute(ConfigKeys.ATTR_NAME) + ConfigKeys.DELIMITER_COLUMN_TYPE + ConfigKeys.DATA_TYPE_XSD_NUMERIC); } } multiValueTablesSQL.put(sub_table_prefix.getText() + fieldShortName, constructMultiValueTableSQL( sub_table_prefix.getText() + fieldShortName, tableColumns)); // add the element to it's parent group children parentGroupElement.addChild(new XSDElement(fieldShortName, fieldColumnName)); appLogger.debug("finished processing element [" + elementName + "]."); } /** * write resulted files * * @param xsdDoc * @param docPath */ private void writeResults(Document xsdDoc, String resultsDir, String newXSDFileName, String csvFileName) { String rsDir = resultsDir + File.separator + new SimpleDateFormat("yyyyMMdd-HHmm").format(new Date()); try { File resultsDirFile = new File(rsDir); if (!resultsDirFile.exists()) { resultsDirFile.mkdirs(); } // write the XSD doc appLogger.info("writing the transformed XSD..."); Source source = new DOMSource(xsdDoc); Result result = new StreamResult(rsDir + File.separator + newXSDFileName); Transformer xformer = TransformerFactory.newInstance().newTransformer(); // xformer.setOutputProperty("indent", "yes"); xformer.transform(source, result); appLogger.info("finished writing the transformed XSD."); // write the CSV columns file appLogger.info("writing the CSV file..."); FileWriter csvWriter = new FileWriter(rsDir + File.separator + csvFileName); csvWriter.write(columnsCSV.toString()); csvWriter.close(); appLogger.info("finished writing the CSV file."); // write the master single-value table appLogger.info("writing the creation script for master table (single-values)..."); FileWriter masterTableWriter = new FileWriter(rsDir + File.separator + main_edh_table_name.getText() + ".sql"); masterTableWriter.write(constructSingleValueTableSQL(main_edh_table_name.getText(), singleValueTableColumns)); masterTableWriter.close(); appLogger.info("finished writing the creation script for master table (single-values)."); // write the multi-value tables sql appLogger.info("writing the creation script for slave tables (multi-values)..."); Iterator<String> iter = multiValueTablesSQL.keySet().iterator(); while (iter.hasNext()) { String tableName = iter.next(); String sql = multiValueTablesSQL.get(tableName); FileWriter tableSQLWriter = new FileWriter(rsDir + File.separator + tableName + ".sql"); tableSQLWriter.write(sql); tableSQLWriter.close(); } appLogger.info("finished writing the creation script for slave tables (multi-values)."); // write the single-value view appLogger.info("writing the creation script for single-value selection view..."); FileWriter singleValueViewWriter = new FileWriter(rsDir + File.separator + view_name_single.getText() + ".sql"); singleValueViewWriter.write(constructViewSQL(ConfigKeys.SQL_VIEW_SINGLE)); singleValueViewWriter.close(); appLogger.info("finished writing the creation script for single-value selection view."); // debug for (int i = 0; i < xsdElementsList.size(); i++) { getMultiView(xsdElementsList.get(i)); /*// if (xsdElementsList.get(i).getAllDescendants() != null) { // for (int j = 0; j < xsdElementsList.get(i).getAllDescendants().size(); j++) { // appLogger.debug(main_edh_table_name.getText() + "." + ConfigKeys.COLUMN_XPK_ROW // + "=" + xsdElementsList.get(i).getAllDescendants().get(j).getName() + "." 
+ ConfigKeys.COLUMN_FK_ROW); // } // } */ } } catch (Exception e) { appLogger.error(e.getMessage()); } } private String getMultiView(XSDElement element)
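
    Since the question asks how to add a splash screen to this package, here is a minimal, self-contained Swing sketch of one common approach (the window contents, timing, and frame title are placeholder assumptions): show an undecorated JWindow first, do the start-up work, then dispose of it and launch the main frame.

        import java.awt.Dimension;
        import javax.swing.JFrame;
        import javax.swing.JLabel;
        import javax.swing.JWindow;
        import javax.swing.SwingConstants;
        import javax.swing.SwingUtilities;

        public class SplashDemo {
            public static void main(String[] args) throws Exception {
                // Undecorated window used as the splash screen (placeholder contents;
                // a JLabel with an ImageIcon would show a real splash image instead).
                JWindow splash = new JWindow();
                JLabel label = new JLabel("Loading T24 Transformer...", SwingConstants.CENTER);
                label.setPreferredSize(new Dimension(400, 200));
                splash.getContentPane().add(label);
                splash.pack();
                splash.setLocationRelativeTo(null); // center on screen
                splash.setVisible(true);

                Thread.sleep(3000); // placeholder for real application start-up work

                splash.dispose();

                // Hand over to the main window on the event dispatch thread.
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() {
                        JFrame frame = new JFrame("T24 Transformer");
                        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                        frame.setSize(600, 400);
                        frame.setLocationRelativeTo(null);
                        frame.setVisible(true);
                    }
                });
            }
        }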

    Read the article

  • Including an embedded framework using a cross-project-reference: Header no such file or directory

    - by d11wtq
    I'm trying to create a Cocoa framework by using a cross-project reference in Xcode. I have 2 projects: one for the framework; one for the application that will use the framework. This framework is not intended to be stored within the system; it is an embedded framework that lives within the application bundle. I have successfully made the cross-project reference, marked the framework as being a dependency of my target, added a Copy Files build phase that puts the framework in Contents/Frameworks/ and added the framework to the linker phase (I checked the little "Target" checkbox; I've also done it manually by dragging the framework into the linker phase). My framework's install directory is correctly set to @executable_path/../Frameworks. However, when I try to build my app it: a) Correctly builds the framework first b) Correctly copies the framework c) Errors because it cannot find the master header file in my framework I have verified that the header is there. I can see it in the app product that is partially built. ls build/Debug/CioccolataTest.webapp/Contents/Frameworks/Cioccolata.framework/Headers/Cioccolata.h build/Debug/CioccolataTest.webapp/Contents/Frameworks/Cioccolata.framework/Headers/Cioccolata.h I have been able to successfully build the app by copying my framework into /Library/Frameworks (I can then delete it again after the successful build), but this is a workaround, I'm looking to find it out why Xcode doesn't find the framework's master header file without it being copied to a system directory. Is copying it to the app bundle during the build not sufficient? Here's the full build transcript if it's any help (it's just a Hello World app right now, so not much going on here): Build Cioccolata of project Cioccolata with configuration Debug SymLink /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Versions/Current A cd /Users/chris/Projects/Mac/Cioccolata /bin/ln -sf A /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Versions/Current SymLink /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Resources Versions/Current/Resources cd /Users/chris/Projects/Mac/Cioccolata /bin/ln -sf Versions/Current/Resources /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Resources SymLink /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Headers Versions/Current/Headers cd /Users/chris/Projects/Mac/Cioccolata /bin/ln -sf Versions/Current/Headers /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Headers SymLink /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Cioccolata Versions/Current/Cioccolata cd /Users/chris/Projects/Mac/Cioccolata /bin/ln -sf Versions/Current/Cioccolata /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Cioccolata ProcessInfoPlistFile /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Versions/A/Resources/Info.plist Info.plist cd /Users/chris/Projects/Mac/Cioccolata builtin-infoPlistUtility Info.plist -expandbuildsettings -platform macosx -o /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Versions/A/Resources/Info.plist CpHeader build/Debug/Cioccolata.framework/Versions/A/Headers/CWHelloWorld.h CWHelloWorld.h cd /Users/chris/Projects/Mac/Cioccolata /Developer/Library/PrivateFrameworks/DevToolsCore.framework/Resources/pbxcp -exclude .DS_Store -exclude CVS -exclude .svn -resolve-src-symlinks /Users/chris/Projects/Mac/Cioccolata/CWHelloWorld.h 
/Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Versions/A/Headers CpHeader build/Debug/Cioccolata.framework/Versions/A/Headers/Cioccolata.h Cioccolata.h cd /Users/chris/Projects/Mac/Cioccolata /Developer/Library/PrivateFrameworks/DevToolsCore.framework/Resources/pbxcp -exclude .DS_Store -exclude CVS -exclude .svn -resolve-src-symlinks /Users/chris/Projects/Mac/Cioccolata/Cioccolata.h /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Versions/A/Headers CopyStringsFile /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Versions/A/Resources/English.lproj/InfoPlist.strings English.lproj/InfoPlist.strings cd /Users/chris/Projects/Mac/Cioccolata setenv ICONV /usr/bin/iconv /Developer/Library/Xcode/Plug-ins/CoreBuildTasks.xcplugin/Contents/Resources/copystrings --validate --inputencoding utf-8 --outputencoding UTF-16 English.lproj/InfoPlist.strings --outdir /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Versions/A/Resources/English.lproj ProcessPCH /var/folders/Xy/Xy-bvnxtFpiYBQPED0dK1++++TI/-Caches-/com.apple.Xcode.501/SharedPrecompiledHeaders/Cioccolata_Prefix-dololiigmwjzkgenggebqtpvbauu/Cioccolata_Prefix.pch.gch Cioccolata_Prefix.pch normal i386 objective-c com.apple.compilers.gcc.4_2 cd /Users/chris/Projects/Mac/Cioccolata setenv LANG en_US.US-ASCII /Developer/usr/bin/gcc-4.2 -x objective-c-header -arch i386 -fmessage-length=0 -pipe -std=gnu99 -Wno-trigraphs -fpascal-strings -fasm-blocks -O0 -Wreturn-type -Wunused-variable -isysroot /Developer/SDKs/MacOSX10.5.sdk -mfix-and-continue -mmacosx-version-min=10.5 -gdwarf-2 -iquote /Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/Cioccolata-generated-files.hmap -I/Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/Cioccolata-own-target-headers.hmap -I/Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/Cioccolata-all-target-headers.hmap -iquote /Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/Cioccolata-project-headers.hmap -F/Users/chris/Projects/Mac/Cioccolata/build/Debug -I/Users/chris/Projects/Mac/Cioccolata/build/Debug/include -I/Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/DerivedSources/i386 -I/Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/DerivedSources -c /Users/chris/Projects/Mac/Cioccolata/Cioccolata_Prefix.pch -o /var/folders/Xy/Xy-bvnxtFpiYBQPED0dK1++++TI/-Caches-/com.apple.Xcode.501/SharedPrecompiledHeaders/Cioccolata_Prefix-dololiigmwjzkgenggebqtpvbauu/Cioccolata_Prefix.pch.gch CompileC build/Cioccolata.build/Debug/Cioccolata.build/Objects-normal/i386/CWHelloWorld.o /Users/chris/Projects/Mac/Cioccolata/CWHelloWorld.m normal i386 objective-c com.apple.compilers.gcc.4_2 cd /Users/chris/Projects/Mac/Cioccolata setenv LANG en_US.US-ASCII /Developer/usr/bin/gcc-4.2 -x objective-c -arch i386 -fmessage-length=0 -pipe -std=gnu99 -Wno-trigraphs -fpascal-strings -fasm-blocks -O0 -Wreturn-type -Wunused-variable -isysroot /Developer/SDKs/MacOSX10.5.sdk -mfix-and-continue -mmacosx-version-min=10.5 -gdwarf-2 -iquote /Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/Cioccolata-generated-files.hmap -I/Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/Cioccolata-own-target-headers.hmap -I/Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/Cioccolata-all-target-headers.hmap 
-iquote /Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/Cioccolata-project-headers.hmap -F/Users/chris/Projects/Mac/Cioccolata/build/Debug -I/Users/chris/Projects/Mac/Cioccolata/build/Debug/include -I/Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/DerivedSources/i386 -I/Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/DerivedSources -include /var/folders/Xy/Xy-bvnxtFpiYBQPED0dK1++++TI/-Caches-/com.apple.Xcode.501/SharedPrecompiledHeaders/Cioccolata_Prefix-dololiigmwjzkgenggebqtpvbauu/Cioccolata_Prefix.pch -c /Users/chris/Projects/Mac/Cioccolata/CWHelloWorld.m -o /Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/Objects-normal/i386/CWHelloWorld.o Ld /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Versions/A/Cioccolata normal i386 cd /Users/chris/Projects/Mac/Cioccolata setenv MACOSX_DEPLOYMENT_TARGET 10.5 /Developer/usr/bin/gcc-4.2 -arch i386 -dynamiclib -isysroot /Developer/SDKs/MacOSX10.5.sdk -L/Users/chris/Projects/Mac/Cioccolata/build/Debug -F/Users/chris/Projects/Mac/Cioccolata/build/Debug -filelist /Users/chris/Projects/Mac/Cioccolata/build/Cioccolata.build/Debug/Cioccolata.build/Objects-normal/i386/Cioccolata.LinkFileList -install_name @executable_path/../Frameworks/Cioccolata.framework/Versions/A/Cioccolata -mmacosx-version-min=10.5 -framework Foundation -single_module -compatibility_version 1 -current_version 1 -o /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework/Versions/A/Cioccolata Touch /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework cd /Users/chris/Projects/Mac/Cioccolata /usr/bin/touch -c /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework Build CioccolataTest of project CioccolataTest with configuration Debug ProcessInfoPlistFile /Users/chris/Projects/Mac/CioccolataTest/build/Debug/CioccolataTest.webapp/Contents/Info.plist Info.plist cd /Users/chris/Projects/Mac/CioccolataTest builtin-infoPlistUtility Info.plist -expandbuildsettings -platform macosx -o /Users/chris/Projects/Mac/CioccolataTest/build/Debug/CioccolataTest.webapp/Contents/Info.plist PBXCp build/Debug/CioccolataTest.webapp/Contents/Frameworks/Cioccolata.framework /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework cd /Users/chris/Projects/Mac/CioccolataTest /Developer/Library/PrivateFrameworks/DevToolsCore.framework/Resources/pbxcp -exclude .DS_Store -exclude CVS -exclude .svn -resolve-src-symlinks /Users/chris/Projects/Mac/Cioccolata/build/Debug/Cioccolata.framework /Users/chris/Projects/Mac/CioccolataTest/build/Debug/CioccolataTest.webapp/Contents/Frameworks CopyStringsFile /Users/chris/Projects/Mac/CioccolataTest/build/Debug/CioccolataTest.webapp/Contents/Resources/English.lproj/InfoPlist.strings English.lproj/InfoPlist.strings cd /Users/chris/Projects/Mac/CioccolataTest setenv ICONV /usr/bin/iconv /Developer/Library/Xcode/Plug-ins/CoreBuildTasks.xcplugin/Contents/Resources/copystrings --validate --inputencoding utf-8 --outputencoding UTF-16 English.lproj/InfoPlist.strings --outdir /Users/chris/Projects/Mac/CioccolataTest/build/Debug/CioccolataTest.webapp/Contents/Resources/English.lproj CompileC build/CioccolataTest.build/Debug/CioccolataTest.build/Objects-normal/i386/main.o main.m normal i386 objective-c com.apple.compilers.gcc.4_2 cd /Users/chris/Projects/Mac/CioccolataTest setenv LANG en_US.US-ASCII /Developer/usr/bin/gcc-4.2 -x objective-c -arch i386 -fmessage-length=0 
-pipe -std=gnu99 -Wno-trigraphs -fpascal-strings -fasm-blocks -O0 -Wreturn-type -Wunused-variable -isysroot /Developer/SDKs/MacOSX10.5.sdk -mfix-and-continue -mmacosx-version-min=10.5 -gdwarf-2 -iquote /Users/chris/Projects/Mac/CioccolataTest/build/CioccolataTest.build/Debug/CioccolataTest.build/CioccolataTest-generated-files.hmap -I/Users/chris/Projects/Mac/CioccolataTest/build/CioccolataTest.build/Debug/CioccolataTest.build/CioccolataTest-own-target-headers.hmap -I/Users/chris/Projects/Mac/CioccolataTest/build/CioccolataTest.build/Debug/CioccolataTest.build/CioccolataTest-all-target-headers.hmap -iquote /Users/chris/Projects/Mac/CioccolataTest/build/CioccolataTest.build/Debug/CioccolataTest.build/CioccolataTest-project-headers.hmap -F/Users/chris/Projects/Mac/CioccolataTest/build/Debug -I/Users/chris/Projects/Mac/CioccolataTest/build/Debug/include -I/Users/chris/Projects/Mac/CioccolataTest/build/CioccolataTest.build/Debug/CioccolataTest.build/DerivedSources/i386 -I/Users/chris/Projects/Mac/CioccolataTest/build/CioccolataTest.build/Debug/CioccolataTest.build/DerivedSources -include /Users/chris/Projects/Mac/CioccolataTest/prefix.pch -c /Users/chris/Projects/Mac/CioccolataTest/main.m -o /Users/chris/Projects/Mac/CioccolataTest/build/CioccolataTest.build/Debug/CioccolataTest.build/Objects-normal/i386/main.o In file included from <command-line>:0: /Users/chris/Projects/Mac/CioccolataTest/prefix.pch:13:35: error: Cioccolata/Cioccolata.h: No such file or directory /Users/chris/Projects/Mac/CioccolataTest/main.m: In function 'main': /Users/chris/Projects/Mac/CioccolataTest/main.m:13: error: 'CWHelloWorld' undeclared (first use in this function) /Users/chris/Projects/Mac/CioccolataTest/main.m:13: error: (Each undeclared identifier is reported only once /Users/chris/Projects/Mac/CioccolataTest/main.m:13: error: for each function it appears in.) /Users/chris/Projects/Mac/CioccolataTest/main.m:13: error: 'hello' undeclared (first use in this function)

    Read the article

  • C# ASP.NET AJAX CascadingDropDown SelectedValue property problem

    - by Eyla
    Greetings, I have a problem to use selected value propriety of CascadingDropDown. I have 3 asp dropdown controls with ajax CascadingDropDown for each one of them. I have no problem to bind data to the 3 CascadingDropDown but my problem is to rebind CascadingDropDown. simply what I want to do is to select a record from Gridview which has the selected values for the CascadingDropDown that I want to pass then rebind the CascadingDropDown with selected value. I'm posting my code down which include: 1-ASP.NET code. 2-Code behind to handle selected record from grid view. 3- web servisice that handle binding data to the 3 CascadingDropDown. please advice how to rebind data to CascadingDropDown with selected value. by the way I used selected value proprety as showning in my code but it is not working and there is no error. Thank you, ........................ ASP.NET code ........................ <%@ Page Title="" Language="C#" MasterPageFile="~/Master.Master" AutoEventWireup="true" CodeBehind="WebForm1.aspx.cs" Inherits="IMAM_APPLICATION.WebForm1" %> <%@ Register Assembly="AjaxControlToolkit" Namespace="AjaxControlToolkit" TagPrefix="cc1" %> <asp:Content ID="Content2" ContentPlaceHolderID="ContentPlaceHolder1" runat="server"> <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False" DataKeyNames="idcontact_info" DataSourceID="ObjectDataSource1" onselectedindexchanged="GridView1_SelectedIndexChanged"> <Columns> <asp:CommandField ShowSelectButton="True" /> <asp:BoundField DataField="idcontact_info" HeaderText="idcontact_info" InsertVisible="False" ReadOnly="True" SortExpression="idcontact_info" /> <asp:BoundField DataField="Work_Field" HeaderText="Work_Field" SortExpression="Work_Field" /> <asp:BoundField DataField="Occupation" HeaderText="Occupation" SortExpression="Occupation" /> <asp:BoundField DataField="sub_Occupation" HeaderText="sub_Occupation" SortExpression="sub_Occupation" /> </Columns> </asp:GridView> <asp:Label ID="lbl" runat="server" Text="Label"></asp:Label> <asp:ObjectDataSource ID="ObjectDataSource1" runat="server" DeleteMethod="Delete" InsertMethod="Insert" OldValuesParameterFormatString="original_{0}" SelectMethod="GetData" TypeName="IMAM_APPLICATION.DSContactTableAdapters.contact_infoTableAdapter" UpdateMethod="Update"> <DeleteParameters> <asp:Parameter Name="Original_idcontact_info" Type="Int32" /> </DeleteParameters> <UpdateParameters> <asp:Parameter Name="Work_Field" Type="String" /> <asp:Parameter Name="Occupation" Type="String" /> <asp:Parameter Name="sub_Occupation" Type="String" /> <asp:Parameter Name="Original_idcontact_info" Type="Int32" /> </UpdateParameters> <InsertParameters> <asp:Parameter Name="Work_Field" Type="String" /> <asp:Parameter Name="Occupation" Type="String" /> <asp:Parameter Name="sub_Occupation" Type="String" /> </InsertParameters> </asp:ObjectDataSource> <asp:DropDownList ID="cmbWorkField" runat="server" Style="top: 715px; left: 180px; position: absolute; height: 22px; width: 126px"> </asp:DropDownList> <asp:DropDownList runat="server" ID="cmbOccupation" Style="top: 745px; left: 180px; position: absolute; height: 22px; width: 77px"> </asp:DropDownList> <asp:DropDownList ID="cmbSubOccup" runat="server" style="position:absolute; top: 775px; left: 180px;"> </asp:DropDownList> <cc1:CascadingDropDown ID="cmbWorkField_CascadingDropDown" runat="server" TargetControlID="cmbWorkField" Category="WorkField" LoadingText="Please Wait ..." PromptText="Select Wor kField ..." 
ServiceMethod="GetWorkField" ServicePath="ServiceTags.asmx"> </cc1:CascadingDropDown> <cc1:CascadingDropDown ID="cmbOccupation_CascadingDropDown" runat="server" TargetControlID="cmbOccupation" Category="Occup" LoadingText="Please wait..." PromptText="Select Occup ..." ServiceMethod="GetOccup" ServicePath="ServiceTags.asmx" ParentControlID="cmbWorkField"> </cc1:CascadingDropDown> <cc1:CascadingDropDown ID="cmbSubOccup_CascadingDropDown" runat="server" Category="SubOccup" Enabled="True" LoadingText="Please Wait..." ParentControlID="cmbOccupation" PromptText="Select Sub Occup" ServiceMethod="GetSubOccup" ServicePath="ServiceTags.asmx" TargetControlID="cmbSubOccup"> </cc1:CascadingDropDown> </asp:Content> ...................................................... C# code behind ...................................................... protected void GridView1_SelectedIndexChanged(object sender, EventArgs e) { string strg = GridView1.SelectedDataKey["idcontact_info"].ToString(); int index = Convert.ToInt32(GridView1.SelectedDataKey["idcontact_info"].ToString()); //txtSearch.Text = GridView1.SelectedIndex.ToString(); // txtSearch.Text = GridView1.SelectedDataKey["idcontact_info"].ToString(); DSContactTableAdapters.contact_infoTableAdapter GetByIDAdapter = new DSContactTableAdapters.contact_infoTableAdapter(); DSContact.contact_infoDataTable ByID = GetByIDAdapter.GetDataByID(index); //DSSearch.contact_infoDataTable FirstName = FirstNameAdapter.GetDataByFirstNameList(prefixText); foreach (DataRow dr in ByID.Rows) { lbl.Text = dr["Work_Field"].ToString() + "....." + dr["Occupation"].ToString() + "....." + dr["sub_Occupation"].ToString(); cmbWorkField_CascadingDropDown.SelectedValue = dr["Work_Field"].ToString(); cmbOccupation_CascadingDropDown.SelectedValue = dr["Occupation"].ToString(); cmbSubOccup_CascadingDropDown.SelectedValue = dr["sub_Occupation"].ToString(); } } ....................................................... web Service ....................................................... 
[WebMethod] public CascadingDropDownNameValue[] GetWorkField(string knownCategoryValues, string category) { //dsCarsTableAdapters.CarsTableAdapter makeAdapter = new dsCarsTableAdapters.CarsTableAdapter(); //dsCars.CarsDataTable makes = makeAdapter.GetAllCars(); DSContactTableAdapters.tag_work_fieldTableAdapter GetWorkFieldAdapter = new DSContactTableAdapters.tag_work_fieldTableAdapter(); DSContact.tag_work_fieldDataTable WorkFields = GetWorkFieldAdapter.GetDataByGetWorkField(); List<CascadingDropDownNameValue> values = new List<CascadingDropDownNameValue>(); foreach (DataRow dr in WorkFields) { string Work_Field = (string)dr["work_Field_name"]; int idtag_work_field = (int)dr["idtag_work_field"]; values.Add(new CascadingDropDownNameValue(Work_Field, idtag_work_field.ToString())); } return values.ToArray(); } [WebMethod] public CascadingDropDownNameValue[] GetOccup(string knownCategoryValues, string category) { StringDictionary kv = CascadingDropDown.ParseKnownCategoryValuesString(knownCategoryValues); int idtag_work_field; if (!kv.ContainsKey("WorkField") || !Int32.TryParse(kv["WorkField"], out idtag_work_field)) { return null; } //dsCarModelsTableAdapters.CarModelsTableAdapter modelAdapter = new dsCarModelsTableAdapters.CarModelsTableAdapter(); //dsCarModels.CarModelsDataTable models = modelAdapter.GetModelsByCarId(makeId); DSContactTableAdapters.tag_OccupTableAdapter GetOccupAdapter = new DSContactTableAdapters.tag_OccupTableAdapter(); DSContact.tag_OccupDataTable Occups = GetOccupAdapter.GetByOccup_ID(idtag_work_field); // List<CascadingDropDownNameValue> values = new List<CascadingDropDownNameValue>(); foreach (DataRow dr in Occups) { values.Add(new CascadingDropDownNameValue((string)dr["Occup_Name"], dr["idtag_Occup"].ToString())); } return values.ToArray(); } [WebMethod] public CascadingDropDownNameValue[] GetSubOccup(string knownCategoryValues, string category) { StringDictionary kv = CascadingDropDown.ParseKnownCategoryValuesString(knownCategoryValues); int idtag_Occup; if (!kv.ContainsKey("Occup") || !Int32.TryParse(kv["Occup"], out idtag_Occup)) { return null; } //dsModelColorsTableAdapters.ModelColorsTableAdapter adapter = new dsModelColorsTableAdapters.ModelColorsTableAdapter(); //dsModelColors.ModelColorsDataTable colors = adapter.GetColorsByModelId(colorId); DSContactTableAdapters.tag_Sub_OccupTableAdapter GetSubOccupAdapter = new DSContactTableAdapters.tag_Sub_OccupTableAdapter(); DSContact.tag_Sub_OccupDataTable SubOccups = GetSubOccupAdapter.GetDataBy_Sub_Occup_ID(idtag_Occup); List<CascadingDropDownNameValue> values = new List<CascadingDropDownNameValue>(); foreach (DataRow dr in SubOccups) { values.Add(new CascadingDropDownNameValue((string)dr["Sub_Occup_Name"], dr["idtag_Sub_Occup"].ToString())); } return values.ToArray(); }
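    A likely cause, though nothing in the question confirms it: CascadingDropDown.SelectedValue is matched against the value half of the CascadingDropDownNameValue pairs the web service returns (the numeric id strings), while the code above assigns the display names stored in contact_info. A minimal sketch of the idea, assuming hypothetical LookupWorkFieldId/LookupOccupId/LookupSubOccupId helpers that resolve each stored name back to its id via the tag tables:

    // Inside GridView1_SelectedIndexChanged, after loading the row:
    foreach (DataRow dr in ByID.Rows)
    {
        // SelectedValue must match the *value* side of the name/value
        // pairs built in the web service (the id strings), not the
        // display names stored in contact_info.
        cmbWorkField_CascadingDropDown.SelectedValue =
            LookupWorkFieldId(dr["Work_Field"].ToString()).ToString();    // hypothetical helper
        cmbOccupation_CascadingDropDown.SelectedValue =
            LookupOccupId(dr["Occupation"].ToString()).ToString();        // hypothetical helper
        cmbSubOccup_CascadingDropDown.SelectedValue =
            LookupSubOccupId(dr["sub_Occupation"].ToString()).ToString(); // hypothetical helper
    }

    Setting the parent's value first lets each child CascadingDropDown repopulate from its web service method and then honor its own SelectedValue on the next callback.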

    Read the article

  • C sockets, chat server and client, problem echoing back.

    - by wretrOvian
    Hi This is my chat server : #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <sys/types.h> #include <sys/socket.h> #include <netdb.h> #include <string.h> #define LISTEN_Q 20 #define MSG_SIZE 1024 struct userlist { int sockfd; struct sockaddr addr; struct userlist *next; }; int main(int argc, char *argv[]) { // declare. int listFD, newFD, fdmax, i, j, bytesrecvd; char msg[MSG_SIZE], ipv4[INET_ADDRSTRLEN]; struct addrinfo hints, *srvrAI; struct sockaddr_storage newAddr; struct userlist *users, *uptr, *utemp; socklen_t newAddrLen; fd_set master_set, read_set; // clear sets FD_ZERO(&master_set); FD_ZERO(&read_set); // create a user list users = (struct userlist *)malloc(sizeof(struct userlist)); users->sockfd = -1; //users->addr = NULL; users->next = NULL; // clear hints memset(&hints, 0, sizeof hints); // prep hints hints.ai_family = AF_INET; hints.ai_socktype = SOCK_STREAM; hints.ai_flags = AI_PASSIVE; // get srver info if(getaddrinfo("localhost", argv[1], &hints, &srvrAI) != 0) { perror("* ERROR | getaddrinfo()\n"); exit(1); } // get a socket if((listFD = socket(srvrAI->ai_family, srvrAI->ai_socktype, srvrAI->ai_protocol)) == -1) { perror("* ERROR | socket()\n"); exit(1); } // bind socket bind(listFD, srvrAI->ai_addr, srvrAI->ai_addrlen); // listen on socket if(listen(listFD, LISTEN_Q) == -1) { perror("* ERROR | listen()\n"); exit(1); } // add listfd to master_set FD_SET(listFD, &master_set); // initialize fdmax fdmax = listFD; while(1) { // equate read_set = master_set; // run select if(select(fdmax+1, &read_set, NULL, NULL, NULL) == -1) { perror("* ERROR | select()\n"); exit(1); } // query all sockets for(i = 0; i <= fdmax; i++) { if(FD_ISSET(i, &read_set)) { // found active sockfd if(i == listFD) { // new connection // accept newAddrLen = sizeof newAddr; if((newFD = accept(listFD, (struct sockaddr *)&newAddr, &newAddrLen)) == -1) { perror("* ERROR | select()\n"); exit(1); } // resolve ip if(inet_ntop(AF_INET, &(((struct sockaddr_in *)&newAddr)->sin_addr), ipv4, INET_ADDRSTRLEN) == -1) { perror("* ERROR | inet_ntop()"); exit(1); } fprintf(stdout, "* Client Connected | %s\n", ipv4); // add to master list FD_SET(newFD, &master_set); // create new userlist component utemp = (struct userlist*)malloc(sizeof(struct userlist)); utemp->next = NULL; utemp->sockfd = newFD; utemp->addr = *((struct sockaddr *)&newAddr); // iterate to last node for(uptr = users; uptr->next != NULL; uptr = uptr->next) { } // add uptr->next = utemp; // update fdmax if(newFD > fdmax) fdmax = newFD; } else { // existing sockfd transmitting data // read if((bytesrecvd = recv(i, msg, MSG_SIZE, 0)) == -1) { perror("* ERROR | recv()\n"); exit(1); } msg[bytesrecvd] = '\0'; // find out who sent? for(uptr = users; uptr->next != NULL; uptr = uptr->next) { if(i == uptr->sockfd) break; } // resolve ip if(inet_ntop(AF_INET, &(((struct sockaddr_in *)&(uptr->addr))->sin_addr), ipv4, INET_ADDRSTRLEN) == -1) { perror("* ERROR | inet_ntop()"); exit(1); } // print fprintf(stdout, "%s\n", msg); // send to all for(j = 0; j <= fdmax; j++) { if(FD_ISSET(j, &master_set)) { if(send(j, msg, strlen(msg), 0) == -1) perror("* ERROR | send()"); } } } // handle read from client } // end select result handle } // end looping fds } // end while return 0; } This is my client: #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <sys/types.h> #include <sys/socket.h> #include <netdb.h> #include <string.h> #define MSG_SIZE 1024 int main(int argc, char *argv[]) { // declare. 
int newFD, bytesrecvd, fdmax; char msg[MSG_SIZE]; fd_set master_set, read_set; struct addrinfo hints, *srvrAI; // clear sets FD_ZERO(&master_set); FD_ZERO(&read_set); // clear hints memset(&hints, 0, sizeof hints); // prep hints hints.ai_family = AF_INET; hints.ai_socktype = SOCK_STREAM; hints.ai_flags = AI_PASSIVE; // get srver info if(getaddrinfo(argv[1], argv[2], &hints, &srvrAI) != 0) { perror("* ERROR | getaddrinfo()\n"); exit(1); } // get a socket if((newFD = socket(srvrAI->ai_family, srvrAI->ai_socktype, srvrAI->ai_protocol)) == -1) { perror("* ERROR | socket()\n"); exit(1); } // connect to server if(connect(newFD, srvrAI->ai_addr, srvrAI->ai_addrlen) == -1) { perror("* ERROR | connect()\n"); exit(1); } // add to master, and add keyboard FD_SET(newFD, &master_set); FD_SET(STDIN_FILENO, &master_set); // initialize fdmax if(newFD > STDIN_FILENO) fdmax = newFD; else fdmax = STDIN_FILENO; while(1) { // equate read_set = master_set; if(select(fdmax+1, &read_set, NULL, NULL, NULL) == -1) { perror("* ERROR | select()"); exit(1); } // check server if(FD_ISSET(newFD, &read_set)) { // read data if((bytesrecvd = recv(newFD, msg, MSG_SIZE, 0)) < 0 ) { perror("* ERROR | recv()"); exit(1); } msg[bytesrecvd] = '\0'; // print fprintf(stdout, "%s\n", msg); } // check keyboard if(FD_ISSET(STDIN_FILENO, &read_set)) { // read data from stdin if((bytesrecvd = read(STDIN_FILENO, msg, MSG_SIZE)) < 0) { perror("* ERROR | read()"); exit(1); } msg[bytesrecvd] = '\0'; // send if((send(newFD, msg, bytesrecvd, 0)) == -1) { perror("* ERROR | send()"); exit(1); } } } return 0; } The problem is with the part where the server recv()s data from an FD, then tries echoing back to all [send() ]; it just dies, w/o errors, and my client is left looping :(
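    A plausible diagnosis, inferred from the symptom rather than stated anywhere in the question: when a client disconnects, recv() returns 0, and a later send() to that dead socket raises SIGPIPE, which kills the process with no error message. A minimal sketch of the usual fixes (handle recv() <= 0, never send() to the listening socket, and ignore SIGPIPE):

    /* once at startup, before the main loop (needs <signal.h>) */
    signal(SIGPIPE, SIG_IGN);  /* send() to a dead peer now returns -1/EPIPE */

    /* in the server's per-fd loop, replacing the recv/broadcast section */
    if ((bytesrecvd = recv(i, msg, MSG_SIZE - 1, 0)) <= 0) {
        if (bytesrecvd == 0)
            fprintf(stdout, "* Client on fd %d disconnected\n", i);
        else
            perror("* ERROR | recv()");
        close(i);                /* drop the dead connection ...      */
        FD_CLR(i, &master_set);  /* ... and stop selecting on it      */
    } else {
        msg[bytesrecvd] = '\0';  /* MSG_SIZE - 1 above leaves room    */
        fprintf(stdout, "%s\n", msg);
        for (j = 0; j <= fdmax; j++) {
            /* skip the listening socket and, optionally, the sender */
            if (FD_ISSET(j, &master_set) && j != listFD && j != i) {
                if (send(j, msg, bytesrecvd, 0) == -1)
                    perror("* ERROR | send()");
            }
        }
    }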

    Read the article

  • How to use Koala Facebook Graph API?

    - by reko
    I am a Rails newbie. I want to use Koala's Graph API. In my controller @graph = Koala::Facebook::API.new('myFacebookAccessToken') @hello = @graph.get_object("my.Name") When I do this, I get something like this { "id"=>"123456", "name"=>"First Middle Last", "first_name"=>"First", "middle_name"=>"Middle", "last_name"=>"Last", "link"=>"http://www.facebook.com/MyName", "username"=>"my.name", "birthday"=>"12/12/1212", "hometown"=>{"id"=>"115200305133358163", "name"=>"City, State"}, "location"=>{"id"=>"1054648928202133335", "name"=>"City, State"}, "bio"=>"This is my awesome Bio.", "quotes"=>"I am the master of my fate; I am the captain of my soul. - William Ernest Henley\r\n\r\n"Don't go around saying the world owes you a living. The world owes you nothing. It was here first.\" - Mark Twain", "work"=>[{"employer"=>{"id"=>"100751133333", "name"=>"Company1"}, "position"=>{"id"=>"105763693332790962", "name"=>"Position1"}, "start_date"=>"2010-08", "end_date"=>"2011-07"}], "sports"=>[{"id"=>"104019549633137", "name"=>"Sport1"}, {"id"=>"103992339636529", "name"=>"Sport2"}], "favorite_teams"=>[{"id"=>"105467226133353743", "name"=>"Fav1"}, {"id"=>"19031343444432369133", "name"=>"Fav2"}, {"id"=>"98027790139333", "name"=>"Fav3"}, {"id"=>"104055132963393331", "name"=>"Fav4"}, {"id"=>"191744431437533310", "name"=>"Fav5"}], "favorite_athletes"=>[{"id"=>"10836600585799922", "name"=>"Fava1"}, {"id"=>"18995689436787722", "name"=>"Fava2"}, {"id"=>"11156342219404022", "name"=>"Fava4"}, {"id"=>"11169998212279347", "name"=>"Fava5"}, {"id"=>"122326564475039", "name"=>"Fava6"}], "inspirational_people"=>[{"id"=>"16383141733798", "name"=>"Fava7"}, {"id"=>"113529011990793335", "name"=>"fava8"}, {"id"=>"112032333138809855566", "name"=>"Fava9"}, {"id"=>"10810367588423324", "name"=>"Fava10"}], "education"=>[{"school"=>{"id"=>"13478880321332322233663", "name"=>"School1"}, "type"=>"High School", "with"=>[{"id"=>"1401052755", "name"=>"Friend1"}]}, {"school"=>{"id"=>"11482777188037224", "name"=>"School2"}, "year"=>{"id"=>"138383069535219", "name"=>"2005"}, "type"=>"High School"}, {"school"=>{"id"=>"10604484633093514", "name"=>"School3"}, "year"=>{"id"=>"142963519060927", "name"=>"2010"}, "concentration"=>[{"id"=>"10407695629335773", "name"=>"c1"}], "type"=>"College"}, {"school"=>{"id"=>"22030497466330708", "name"=>"School4"}, "degree"=>{"id"=>"19233130157477979", "name"=>"c3"}, "year"=>{"id"=>"201638419856163", "name"=>"2011"}, "type"=>"Graduate School"}], "gender"=>"male", "interested_in"=>["female"], "relationship_status"=>"Single", "religion"=>"Religion1", "political"=>"Political1", "email"=>"[email protected]", "timezone"=>-8, "locale"=>"en_US", "languages"=>[{"id"=>"10605952233759137", "name"=>"English"}, {"id"=>"10337617475934611", "name"=>"L2"}, {"id"=>"11296944428713061", "name"=>"L3"}], "verified"=>true, "updated_time"=>"2012-02-24T04:18:05+0000" } How do I show this entire hash in the view in a good format? This is what I did from whatever I have learnt so far. In my view <% @hello.each do |key, value| %> <li><%=h "#{key.to_s} : #{value.to_s}" %></li> <% end %> This will get the entire thing converted to a list... It works great if it's just one key.. but how do I work with multiple keys and show only the information... something like having it output hometown : City, State rather than something like hometown : {"id"=>"115200305133358163", "name"=>"City, State"} Also, for education, can I just say education[school][name] to display the list of schools attended?
The error I get is "can't convert String into Integer". I also tried to do this in my controller, but I get the same error: @fav_teams = @hello["favorite_teams"]["name"] Also, how can I save all of these to the database, something like just the list of all schools, not their id numbers? Update: The way I plan to save to my database is: let's say for a user model, I want to save to the database as :facebook_id, :facebook_name, :facebook_firstname, ...., :facebook_hometown. Here I only want to save the name. When it comes to education, I want to save school, concentration and type. I have no idea how to achieve this. Looking forward to your help! Thanks!
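    A sketch of one way to dig the readable values out of that hash, hedged because field presence varies with the user's privacy settings and the token's permissions:

    # Nested objects are hashes, so index them by string key:
    hometown = @hello["hometown"] && @hello["hometown"]["name"]

    # "favorite_teams" and "education" are *arrays* of hashes, which is
    # why @hello["favorite_teams"]["name"] raises
    # "can't convert String into Integer" -- arrays want integer indexes.
    fav_teams = (@hello["favorite_teams"] || []).map { |t| t["name"] }
    schools   = (@hello["education"] || []).map { |e| e["school"]["name"] }

    To persist just the names, assign them to ordinary model attributes, for example user.update_attributes(:facebook_hometown => hometown) for a hypothetical user model with such a column.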

    Read the article

  • Getting segmentation fault after destructor

    - by therealsquiggy
    I'm making a small file reading and data validation program as part of my TAFE (a tertiary college) course. This includes checking and validating dates. I decided that it would be best done with a separate class, rather than integrating it into my main driver class. The problem is that I'm getting a segmentation fault (core dumped) after my test program runs. Near as I can tell, the error occurs when the program terminates, popping up after the destructor is called. So far I have had no luck finding the cause of this fault, and was hoping that some enlightened soul might show me the error of my ways. date.h #ifndef DATE_H #define DATE_H #include <string> using std::string; #include <sstream> using std::stringstream; #include <cstdlib> using std::exit; #include <iostream> using std::cout; using std::endl; class date { public: explicit date(); ~date(); bool before(string dateIn1, string dateIn2); int yearsBetween(string dateIn1, string dateIn2); bool isValid(string dateIn); bool getDate(int date[], string dateIn); bool isLeapYear(int year); private: int days[]; }; #endif date.cpp #include "date.h" date::date() { days[0] = 31; days[1] = 28; days[2] = 31; days[3] = 30; days[4] = 31; days[5] = 30; days[6] = 31; days[7] = 31; days[8] = 30; days[9] = 31; days[10] = 30; days[11] = 31; } bool date::before(string dateIn1, string dateIn2) { int date1[3]; int date2[3]; getDate(date1, dateIn1); getDate(date2, dateIn2); if (date1[2] < date2[2]) { return true; } else if (date1[1] < date2[1]) { return true; } else if (date1[0] < date2[0]) { return true; } return false; } date::~date() { cout << "this is for testing only, plox delete\n"; } int date::yearsBetween(string dateIn1, string dateIn2) { int date1[3]; int date2[3]; getDate(date1, dateIn1); getDate(date2, dateIn2); int years = date2[2] - date1[2]; if (date1[1] > date2[1]) { years--; } if ((date1[1] == date2[1]) && (date1[0] > date2[1])) { years--; } return years; } bool date::isValid(string dateIn) { int date[3]; if (getDate(date, dateIn)) { if (date[1] <= 12) { int extraDay = 0; if (isLeapYear(date[2])) { extraDay++; } if ((date[0] + extraDay) <= days[date[1] - 1]) { return true; } } } else { return false; } } bool date::getDate(int date[], string dateIn) { string part1, part2, part3; size_t whereIs, lastFound; whereIs = dateIn.find("/"); part1 = dateIn.substr(0, whereIs); lastFound = whereIs + 1; whereIs = dateIn.find("/", lastFound); part2 = dateIn.substr(lastFound, whereIs - lastFound); lastFound = whereIs + 1; part3 = dateIn.substr(lastFound, 4); stringstream p1(part1); stringstream p2(part2); stringstream p3(part3); if (p1 >> date[0]) { if (p2>>date[1]) { return (p3>>date[2]); } else { return false; } return false; } } bool date::isLeapYear(int year) { return ((year % 4) == 0); } and finally, the test program #include <iostream> using std::cout; using std::endl; #include "date.h" int main() { date d; cout << "1/1/1988 before 3/5/1990 [" << d.before("1/1/1988", "3/5/1990") << "]\n1/1/1988 before 1/1/1970 [" << d.before("a/a/1988", "1/1/1970") <<"]\n"; cout << "years between 1/1/1988 and 1/1/1998 [" << d.yearsBetween("1/1/1988", "1/1/1998") << "]\n"; cout << "is 1/1/1988 valid [" << d.isValid("1/1/1988") << "]\n" << "is 2/13/1988 valid [" << d.isValid("2/13/1988") << "]\n" << "is 32/12/1988 valid [" << d.isValid("32/12/1988") << "]\n"; cout << "blerg\n"; } I've left in some extraneous cout statements, which I've been using to try and locate the error. I thank you in advance.
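    The most likely culprit, assuming the code compiles as posted under g++: int days[]; declares a zero-sized array member (a GCC extension), so the twelve writes in the constructor land past the end of the object and trash whatever lives next to it in memory, damage that typically only surfaces when the object is destroyed as the program terminates. A minimal sketch of the fix:

    // date.h -- give the array real storage
    private:
        int days[12];   // was "int days[]", which reserves no space

    With actual storage the constructor's assignments are legal; an alternative is a single shared table, e.g. a static const int days[12] = {31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31}; defined once in date.cpp.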

    Read the article

  • SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008

    - by pinaldave
    Note: Please read the complete post before taking any actions. This blog post discusses SHRINKFILE and TRUNCATE Log File. The script mentioned in the email received from a reader contains the following questionable code: “Hi Pinal, If you remember, my manager and I met you at TechEd in Bangalore. We just upgraded to SQL Server 2008. One of our jobs failed as it was using the following code. The error was: Msg 155, Level 15, State 1, Line 1 ‘TRUNCATE_ONLY’ is not a recognized BACKUP option. The code was: DBCC SHRINKFILE(TestDBLog, 1) BACKUP LOG TestDB WITH TRUNCATE_ONLY DBCC SHRINKFILE(TestDBLog, 1) GO I have modified that code to the subsequent code, and it works fine. But do you have any other suggestions at the moment? USE [master] GO ALTER DATABASE [TestDb] SET RECOVERY SIMPLE WITH NO_WAIT DBCC SHRINKFILE(TestDbLog, 1) ALTER DATABASE [TestDb] SET RECOVERY FULL WITH NO_WAIT GO Configuration of our server and system is as follows: [Removed not relevant data]“ An email like this that suddenly pops up early in the morning is an alarming email, and because I was pressed for time, I had only one minute to reply. I quickly wrote down the following note. (As I said, it was a single-minute email, so it is not completely accurate.) Here is that quick email, shared with all of you. “Hi Mr. DBA [removed the name] Thanks for your email. I suggest you stop this practice. There are many issues included here, but I would list two major ones: 1) By setting the database to simple recovery, shrinking the file, and once again setting it to full recovery, you are in fact losing your valuable log data and will not be able to restore to a point in time. Not only that, you will also not be able to use subsequent log files. 2) Shrinking a file or database adds fragmentation. There are a lot of things you can do. First, start taking proper log backups using the following command instead of truncating the log and losing it frequently. BACKUP LOG [TestDb] TO  DISK = N'C:\Backup\TestDb.bak' GO Remove the code that SHRINKs the file. If you are taking proper log backups, your log file usually (again usually, special cases are excluded) does not grow very big. There are so many things to add here, but you can call me on my [phone number]. Before you call me, I suggest for accuracy you read Paul Randal‘s two posts here and here and Brent Ozar‘s post here. Kind Regards, Pinal Dave” I guess this post is very much clear to you. Please leave your comments here. As mentioned, this is a very huge subject; I have just touched the tip of the iceberg and have tried to point to authentic knowledge. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
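    If you are unsure whether a log even needs attention, a quick sanity check before touching anything (a standard DBCC command, shown here as my addition, not part of the original email exchange):

    DBCC SQLPERF(LOGSPACE);
    GO

    It reports, per database, the log size and the percentage in use; a large log file at low usage usually points at missing log backups rather than at a file that must be shrunk.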

    Read the article

  • This Week in Geek History: Gmail Goes Public, Deep Blue Wins at Chess, and the Birth of Thomas Edison

    - by Jason Fitzpatrick
    Every week we bring you a snapshot of the week in Geek History. This week we’re taking a peek at the public release of Gmail, the first time a computer won against a chess champion, and the birth of prolific inventor Thomas Edison. Gmail Goes Public It’s hard to believe that Gmail has only been around for seven years and that for the first three years of its life it was invite only. In 2007 Gmail dropped the invite-only requirement (although it would hold onto the “beta” tag for another two years) and opened its doors for anyone to grab a username @gmail. For what seemed like an entire epoch in internet history, Gmail had the slickest web-based email around, with constant innovations and features rolling out from Gmail Labs. Only in the last year or so have major overhauls at competitors like Hotmail and Yahoo! Mail brought other services up to speed. Can’t stand reading a Week in Geek History entry without a random fact? Here you go: gmail.com was originally owned by the Garfield franchise and ran a service that delivered Garfield comics to your email inbox. No, we’re not kidding. Deep Blue Proves Itself a Chess Master Deep Blue was a supercomputer constructed by IBM with the sole purpose of winning chess matches. In 2011, with the all-seeing eye of Google and the amazing computational abilities of engines like Wolfram Alpha, we simply take powerful computers immersed in our daily lives for granted. The 1996 match against reigning world chess champion Garry Kasparov, wherein Deep Blue held its own but ultimately lost 4-2, shook a lot of people up. What did it mean if something considered as elegant and quintessentially human an endeavor as chess was so easy for a machine? A series of upgrades helped Deep Blue outright win a match against Kasparov in 1997. After the win Deep Blue was retired and disassembled. Parts of Deep Blue are housed in the National Museum of American History and the Computer History Museum. Birth of Thomas Edison Thomas Alva Edison was one of the most prolific inventors in history and held an astounding 1,093 US patents. He is responsible for outright inventing or greatly refining major innovations in the history of world culture, including the phonograph, the movie camera, the carbon microphone used in nearly every telephone well into the 1980s, batteries for electric cars (a notion we’d take over a century to take seriously), voting machines, and of course his enormous contribution to electric distribution systems. Despite the role of scientist and inventor being largely unglamorous, Thomas Edison and his tumultuous relationship with fellow inventor Nikola Tesla have been fodder for everything from books, to comics, to movies, and video games. Other Notable Moments from This Week in Geek History Although we only shine the spotlight on three interesting facts a week in our Geek History column, that doesn’t mean we don’t have space to highlight a few more in passing. This week in Geek History: 1971 – Apollo 14 returns to Earth after the third crewed Lunar landing. 1974 – Birth of Robot Chicken creator Seth Green. 1986 – Death of Dune creator Frank Herbert. Goodnight Dune. 1997 – The Simpsons becomes the longest-running animated show on television. Have an interesting bit of geek trivia to share? Shoot us an email to [email protected] with “history” in the subject line and we’ll be sure to add it to our list of trivia.

    Read the article

  • SQL SERVER – Attach mdf file without ldf file in Database

    - by pinaldave
    Background Story: One of my friends recently called up and asked me if I had spare time to look at his database and give him performance tuning advice. Because I had some free time to help him out, I said yes. I asked him to send me the details of his database structure and sample data. He said that since his database was in a very early stage and still small at the moment, he would like me to have the complete database. My response to him was “Sure! In that case, take a backup of the database and send it to me. I will restore it on my computer and play with it.” He did send me his database; however, his method made me write this quick note here. Instead of taking a full backup of the database and sending it to me, he sent me only the .mdf (primary database file). In fact, I had asked for a complete backup (I wanted to review file groups, files, as well as a few other details). Upon calling my friend, I found that he was not available. Now he had left me with only a .mdf file. As I had some extra time, I decided to check out his database structure and get back to him regarding the full backup whenever I could get in touch with him again. Technical Talk: If the database was shut down gracefully and there was no abrupt shutdown (power outages, pulled plugs, machine crashes, or any other reason), it is possible (there’s no guarantee) to attach the .mdf file alone to the server. Please note that there can be many more reasons why a database does not get attached or restored. In my case, the database had a clean shutdown and there were no complex issues. I was able to recreate a transaction log file and attach the received .mdf file. There are multiple ways of doing this. I am listing all of them here. Before using any of them, please consult the Domain Expert in your company or industry. Also, never attempt this on a live/production server without the presence of a Disaster Recovery expert. USE [master] GO -- Method 1: I use this method EXEC sp_attach_single_file_db @dbname='TestDb', @physname=N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\TestDb.mdf' GO -- Method 2: CREATE DATABASE TestDb ON (FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\TestDb.mdf') FOR ATTACH_REBUILD_LOG GO Method 2: If one or more log files are missing, they are recreated. There is one more method, which I am demonstrating here but have not used myself before. According to Books Online, it will work only if there is a single log file missing. If more than one log file is involved, all of them are required to undergo the same procedure. -- Method 3: CREATE DATABASE TestDb ON ( FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\TestDb.mdf') FOR ATTACH GO Please read Books Online in depth and consult DR experts before working on a production server. In my case, the above syntax just worked fine as the database was clean when it was detached. Feel free to write about your opinions and experiences; it will help the IT community learn more from your suggestions and skills. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, Readers Question, SQL, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
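    Whichever method is used, a quick check that the database actually came online afterwards (a plain catalog query, added here as a sanity step):

    -- state_desc should read ONLINE once the attach succeeds
    SELECT name, state_desc
    FROM sys.databases
    WHERE name = 'TestDb';
    GO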

    Read the article

  • Windows Presentation Foundation 4.5 Cookbook Review

    - by Ricardo Peres
    As promised, here’s my review of Windows Presentation Foundation 4.5 Cookbook, which Packt Publishing kindly made available to me. It is an introductory book, targeted at WPF newcomers or users with little experience, following the typical recipe or cookbook style. Like all Packt Publishing books on development, each recipe comes with sample code that is self-sufficient for understanding the concepts it tries to illustrate. It starts in chapter 1 by introducing the most important concepts: the XAML language itself, what can be declared in XAML and how to do it, what dependency and attached properties are, as well as markup extensions and events, which should give readers a much-needed introduction to how WPF works and how to do basic stuff. It moves on to resources in chapter 2, which also makes sense, since they are such an important concept in WPF. Next, in chapter 3, come the panels used for laying out controls on the screen; all of the out-of-the-box panels are described with typical use cases. Controls come next, in chapter 4; the difference between elements and controls is explained, as well as content controls, headered controls and items controls, and all standard controls are introduced. The book shows how to change the way they look by using templates. The next chapter, 5, talks about top-level windows and the WPF application object: how to access startup arguments, how to set the main window, using standard dialogs, and there’s even a sample on how to have an irregularly-shaped window. Next comes one of the most important concepts in WPF: data binding, the theme for chapter 6. All common scenarios are introduced: the binding modes, directions, triggers, etc. It talks about the INotifyPropertyChanged interface and how to use it for notifying data binding subscribers of changes in data sources. Data templates and selectors are also covered, as are value converters and data triggers. Examples include master-detail and sorting, grouping and filtering collections, and binding trees and grids. Lastly, it covers validation rules and error templates. Chapter 7 talks about the current trend in WPF development, the Model View View-Model (MVVM) framework. This is a well-known pattern for connecting the user interface to actions, and it is explained competently. A typical implementation is presented, which also introduces the command pattern used throughout WPF. A complete application using MVVM is presented from start to finish, including typical features such as undo. Styles and layout are covered in chapter 8: why and how to use styles, applying them automatically, using the many types of triggers to change styles automatically, and using Expression Blend behaviors and templates. The next chapter, 9, is about graphics and animation programming. It explains how to create shapes, transform common UI elements, apply special effects, and perform simple animations. The following chapter, 10, is about creating custom controls, either by deriving from UserControl or from an existing control or framework element class, applying custom templates to change the way the control looks. One useful example is a custom layout panel that arranges its children along a circumference. The final chapter, 11, is about multi-threaded programming and how one can integrate it with WPF. It includes how to invoke methods and properties on WPF classes from threads other than the main UI thread, using background tasks and timers, and even using the new C# 5.0 asynchronous operations.
It’s an interesting book, like I said, mostly for newcomers. It provides a competent introduction to WPF, with examples that cover the most common scenarios and also give directions to more complex ones. I recommend it to everyone wishing to learn WPF.
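To give a flavor of the material in the data binding chapter, here is the kind of minimal INotifyPropertyChanged view model its recipes revolve around; this is my own sketch, not code taken from the book:

using System.ComponentModel;

// A bindable view model: WPF re-reads Name whenever
// PropertyChanged is raised for it.
public class PersonViewModel : INotifyPropertyChanged
{
    private string _name;

    public string Name
    {
        get { return _name; }
        set
        {
            if (_name == value) return;
            _name = value;
            OnPropertyChanged("Name");
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}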

    Read the article

  • Oracle SQL Developer Data Modeler: What Tables Aren’t In At Least One SubView?

    - by thatjeffsmith
    Organizing your data model makes the information easier to consume. One of the organizational tools provided by Oracle SQL Developer Data Modeler is the ‘SubView.’ In a nutshell, a SubView is a subset of your model. The Challenge: I’ve just created a model which represents my entire ____________ application. We’ll call it ‘residential lending.’ Instead of having all 100+ tables in a single model diagram, I want to break out the tables by module, e.g. appraisals, credit reports, work histories, customers, etc. I’ve spent several hours breaking out the tables to one or more SubViews, but I think I may have missed a few. Is there an easy way to see what tables aren’t in at least ONE SubView? The Answer: Yes, mostly. The ‘mostly’ comes about from the way I’m going to accomplish this task. It involves querying the SQL Developer Data Modeler Reporting Schema. So if you don’t have the Reporting Schema set up, you’ll need to do so. Got it? Good, let’s proceed. Before you start querying your Reporting Schema, you might need a data model for the actual reporting schema…meta-meta data! You could reverse engineer the data modeler reporting schema to a new data model, or you could just reference the PDFs in the \datamodeler\reports\Reporting Schema diagrams directory. The Query: Well, it’s actually going to be at least 2 queries. We need to get a list of distinct designs stored in your repository. For giggles, I’m going to get a listing including each version of the model, so I can query based on design and version, or in this case, the timestamp of when it was added to the repository. We’ll get that from the DMRS_DESIGNS table: SELECT DISTINCT design_name, design_ovid, date_published FROM DMRS_designs Then I’m going to feed the design_ovid down to a subquery for my child report. select name, count(distinct diagram_id) from DMRS_DIAGRAM_ELEMENTS where design_ovid = :dESIGN_OVID and type = 'Table' group by name having count(distinct diagram_id) < 2 order by count(distinct diagram_id) desc Each diagram element has an entry in this table, so I need to filter on type=’Table.’ Every design has AT LEAST one diagram, the master diagram, so any relational table with only one listing in this table is not in any SubView. If you have overloaded object names, which is VERY possible, you’ll want to run the report off of ‘OBJECT_ID’, but then you’ll need to correlate that to the NAME, as I doubt you’re so intimate with your designs that you recognize the GUIDs. So I’m going to cheat and just stick with names, but I think you get the gist. My Model: Of my almost 90 tables, how many have I not added to at least one SubView? Now let’s run my report! Voila! My ‘BEER2’ table isn’t in any SubView! It says ‘1’ because the main model diagram counts as a view. So if the count came back as ‘2’, that would mean the table was in the main model diagram and in 1 SubView diagram. And I know what you’re thinking: what kind of residential lending program would have a table called ‘BEER2’? Let’s just say that my business model has some kinks to work out!
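    If you’d rather skip the master-detail report plumbing, the two queries can be folded into a single statement; a sketch, assuming the same reporting-schema tables and columns used above:

    -- One row per design/table that appears in fewer than two diagrams,
    -- i.e. only in the master model diagram and in no SubView.
    SELECT d.design_name,
           e.name AS table_name
    FROM   DMRS_designs d
    JOIN   DMRS_DIAGRAM_ELEMENTS e
           ON e.design_ovid = d.design_ovid
    WHERE  e.type = 'Table'
    GROUP  BY d.design_name, d.design_ovid, e.name
    HAVING COUNT(DISTINCT e.diagram_id) < 2
    ORDER  BY d.design_name, e.name;

    The same name-vs-OBJECT_ID caveat applies: overloaded table names will collapse into a single row here.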

    Read the article

  • SQL SERVER – Error: Fix – Msg 208 – Invalid object name ‘dbo.backupset’ – Invalid object name ‘dbo.backupfile’

    - by pinaldave
    Just a day before, I got a very interesting email. Here is the email (modified a bit to make it relevant to this blog post). “Pinal, We are facing a very strange issue. One of our queries related to backup files and backup sets has suddenly stopped working in SSMS. It works fine in the application and in the stored procedure, but when we run it in SSMS it gives the following error. Msg 208, Level 16, State 1, Line 1 Invalid object name ‘dbo.backupfile’. Here are the queries which we are trying to execute. SELECT name, database_name, backup_size, TYPE, compatibility_level, backup_set_id FROM dbo.backupset; SELECT logical_name, backup_size, file_type FROM dbo.backupfile; These queries give us details related to the backup set and backup files from when the backup was taken.” When I receive this kind of email, usually I have no direct answers. The claim that it works in the stored procedure and in the application but not in SSMS gives me no real data. I requested him to first check the following two things: Whether he was connected to the correct server? His answer was yes. Whether he had enough permissions? His answer was that he was logged in as an admin. This meant there was something more to it, and I requested him to send me a screenshot of his SSMS. He promptly sent it to me, and as soon as I received the screenshot I knew what was going on. Before I say anything, take a look at the screenshot yourself and see if you can figure out why his queries are not working in SSMS. Just to make your life a bit easier, I have already given a hint in the image. The answer is very simple: the database context is the master database. To execute the above two queries, the database context has to be msdb. The tables backupset and backupfile belong to the msdb database only. Here are two workarounds or solutions to the above problem: 1) Change context to MSDB When the above two queries are run as follows, they will not error out and will give the accurate desired result. USE msdb GO SELECT name, database_name, backup_size, TYPE, compatibility_level, backup_set_id FROM dbo.backupset; SELECT logical_name, backup_size, file_type FROM dbo.backupfile; 2) Prefix the query with msdb There are cases where the above script is used in a stored procedure or as part of a big query, and it is not possible to change the context of the whole query to a specific database. Use the three-part naming convention and prefix the tables with msdb. SELECT name, database_name, backup_size, TYPE, compatibility_level, backup_set_id FROM msdb.dbo.backupset; SELECT logical_name, backup_size, file_type FROM msdb.dbo.backupfile; A very simple solution, but it sometimes keeps people wondering for an answer. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
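    When an “invalid object name” error appears only in SSMS, checking the session’s database context is the fastest first step; one line does it:

    -- Returns the database the current query window is connected to
    SELECT DB_NAME() AS current_database;
    GO

    If it returns master instead of msdb, the database dropdown in the SSMS toolbar (or a USE statement) is the fix.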

    Read the article

< Previous Page | 152 153 154 155 156 157 158 159 160 161 162 163  | Next Page >