Search Results

Search found 22947 results on 918 pages for 'xml schema collection'.


  • simple XML question for perl - how to retrieve specific elements

    - by Jeff
    I'm trying to figure out how to loop through XML but I've read a lot and I'm still getting stuck. Here's the info: I'm using the Wordnik API to retrieve XML with XML::Simple: $content = get($url); $r = $xml->XMLin("$content"); The actual XML looks like this: <definitions> <definition sequence="0" id="0"> <text> To withdraw one's support or help from, especially in spite of duty, allegiance, or responsibility; desert: abandon a friend in trouble. </text> <headword>abandon</headword> <partOfSpeech>verb-transitive</partOfSpeech> </definition> <definition sequence="1" id="0"> <text> To give up by leaving or ceasing to operate or inhabit, especially as a result of danger or other impending threat: abandoned the ship. </text> <headword>abandon</headword> <partOfSpeech>verb-transitive</partOfSpeech> </definition> <definition sequence="2" id="0"> <text> To surrender one's claim to, right to, or interest in; give up entirely. See Synonyms at relinquish. </text> <headword>abandon</headword> <partOfSpeech>verb-transitive</partOfSpeech> </definition> <definition sequence="3" id="0"> ... What I want is simply the FIRST definition's part of speech. I'm using this code but it's getting the LAST definition's POS: if($r->{definition}->{0}->{partOfSpeech}) { $pos = $r->{definition}->{0}->{partOfSpeech}; } else { $pos = $r->{definition}->{partOfSpeech}; } I am pretty embarrassed by this since I know there's an obviously better way to do it. I would love to get something as simple as this working so I could more generally loop through the elements. But it just isn't working for me (no idea what to reference). I've tried many variations of the following - this is just my last attempt: while (my ($k, $v) = each %{$r->{definitions}->{definition}[0]->{sequence}->{partOfSpeech}}) { $v =~ s/'/'"'"'/g; $v = "'$v'"; print "export $k=$v\n"; } Lastly, when I do "print Dumper($r)" it gives me this: $VAR1 = { 'definition' => { '0' => { 'partOfSpeech' => 'noun', 'sequence' => '6', 'text' => 'A complete surrender of inhibitions.', 'headword' => 'abandon' } } }; (And that "noun" you see is the last (6th) definition/partOfSpeech element).

    Read the article

  • reorder XML elements or set an explicit template with XSLT

    - by Sash
    I tried the solution in my previous question (flattening XML to load via SSIS package), but this isn't working. I now know what I need to do; however, I need some guidance on how to do it. So say I have the following XML structure: <person id="1"> <name>John</name> <surname>Smith</surname> <age>25</age> <comment> <comment_id>1</comment_id> <comment_text>Hello</comment_text> </comment> <comment> <comment_id>2</comment_id> <comment_text>Hello again!</comment_text> </comment> <somethingelse> <id>1</id> </somethingelse> <comment> <comment_id>3</comment_id> <comment_text>Third Item</comment_text> </comment> </person> <person id="2"> <name>John</name> <surname>Smith</surname> <age>25</age> <somethingelse> <id>1</id> </somethingelse> </person> ... ... If I load this into an SSIS package as an XML source, what I essentially get is a table created for each element, as opposed to getting structured table output such as: person table (name, surname, age), somethingelse table (id), comment table (comment_id, comment_text). What I end up getting is: person table (person_Id <-- internal SSIS id), name table, surname table, age table, person_name table, person_surname table, person_comment_comment_id table, etc... What I found was that if each element and all inner elements are not in the same format and order, I get the above anomaly, which makes things rather complex, especially when dealing with 80-100+ columns. Unfortunately I have no way of modifying the system (Lotus Notes) that produces these reports, so I was wondering whether I could have an explicit XSLT template that aligns each person's sub-elements (and the sub-collection elements such as comments), unless there is a quicker way to realign all inner elements. It seems that the SSIS XML source requires a very consistent XML file, in the sense that if the name element is in position 1, then all subsequent name elements within the person parent have to be in position 1. SSIS seems to pick up the inconsistencies if certain elements are missing from one parent to another; however, if their ordering is not right (A, B, C)(A, B, C)(A, C, B), it will chuck a massive fuss! All help is appreciated! Thank you in advance.
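
    If an XSLT template proves awkward, one workaround is to preprocess the export before SSIS sees it and force every person element's children into one fixed order. Below is a minimal C# LINQ to XML sketch of that idea; the file paths and the element-order list are assumptions, not part of the original question.

      using System;
      using System.Linq;
      using System.Xml.Linq;

      class ReorderPersonChildren
      {
          static void Main()
          {
              // Placeholder paths for the Lotus Notes export and the cleaned-up copy.
              XDocument doc = XDocument.Load(@"C:\temp\people.xml");

              // The child order SSIS should always see; anything unlisted sorts last.
              string[] order = { "name", "surname", "age", "comment", "somethingelse" };

              foreach (XElement person in doc.Descendants("person"))
              {
                  var sorted = person.Elements()
                      .OrderBy(e =>
                      {
                          int i = Array.IndexOf(order, e.Name.LocalName);
                          return i < 0 ? int.MaxValue : i;
                      })
                      .ToList();

                  person.RemoveNodes();   // drop the children, keep the attributes
                  person.Add(sorted);     // re-add them in the fixed order
              }

              doc.Save(@"C:\temp\people.reordered.xml");
          }
      }

    OrderBy is a stable sort, so repeated elements such as the three comments keep their relative order within the new layout.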

    Read the article

  • XML Outputting - PHP vs JS vs Anything Else?

    - by itsphil
    Hi everyone, I am working on developing a travel website which uses XML APIs to get its data. However, I am relatively new to XML and outputting it. I have been experimenting with using PHP to output a test XML file, but currently the furthest I've got is to output only a few records. As the question states, I need to know which technology will be best for this project. Below I've included some points to take into consideration. The website is going to be a large, heavy-traffic site (expedia/lastminute size). My skillset is PHP (intermediate/high skilled) & Javascript (intermediate/high skilled). Below is an example of the XML that the API is outputting: <?xml version="1.0"?> <response method="###" success="Y"> <errors> </errors> <request> <auth password="test" username="test" /> <method action="###" sitename="###" /> </request> <results> <line id="6" logourl="###" name="Line 1" smalllogourl="###"> <ships> <ship id="16" name="Ship 1" /> <ship id="453" name="Ship 2" /> <ship id="468" name="Ship 3" /> <ship id="356" name="Ship 4" /> </ships> </line> <line id="63" logourl="###" name="Line 2" smalllogourl="###"> <ships> <ship id="492" name="Ship 1" /> <ship id="454" name="Ship 2" /> <ship id="455" name="Ship 3" /> <ship id="421" name="Ship 4" /> <ship id="401" name="Ship 5" /> <ship id="404" name="Ship 6" /> <ship id="405" name="Ship 7" /> <ship id="406" name="Ship 8" /> <ship id="407" name="Ship 9" /> <ship id="408" name="Ship 10" /> </ships> </line> <line id="41" logourl="###"> <ships> <ship id="229" name="Ship 1" /> <ship id="230" name="Ship 2" /> <ship id="231" name="Ship 3" /> <ship id="445" name="Ship 4" /> <ship id="570" name="Ship 5" /> <ship id="571" name="Ship 6" /> </ships> </line> </results> </response> When suggesting which technology is best for this project, if you could also provide some getting-started guides or any other information, it would be very much appreciated. Thank you for taking the time to read this.

    Read the article

  • Quantifying the Performance of Garbage Collection vs. Explicit Memory Management

    - by EmbeddedProg
    I found this article here: Quantifying the Performance of Garbage Collection vs. Explicit Memory Management http://www.cs.umass.edu/~emery/pubs/gcvsmalloc.pdf In the conclusion section, it reads: Comparing runtime, space consumption, and virtual memory footprints over a range of benchmarks, we show that the runtime performance of the best-performing garbage collector is competitive with explicit memory management when given enough memory. In particular, when garbage collection has five times as much memory as required, its runtime performance matches or slightly exceeds that of explicit memory management. However, garbage collection’s performance degrades substantially when it must use smaller heaps. With three times as much memory, it runs 17% slower on average, and with twice as much memory, it runs 70% slower. Garbage collection also is more susceptible to paging when physical memory is scarce. In such conditions, all of the garbage collectors we examine here suffer order-of-magnitude performance penalties relative to explicit memory management. So, if my understanding is correct: if I have an app written in native C++ requiring 100 MB of memory, to achieve the same performance with a "managed" (i.e. garbage collector based) language (e.g. Java, C#), the app should require 5*100 MB = 500 MB? (And with 2*100 MB = 200 MB, the managed app would run 70% slower than the native app?) Do you know if current (i.e. latest Java VM's and .NET 4.0's) garbage collectors suffer the same problems described in the aforementioned article? Has the performance of modern garbage collectors improved? Thanks.

    Read the article

  • parsing xml with php, children

    - by moustafa
    Hello I successfully created my parser Everything is working great except one thing since my xml is formated a little different and I am totally lost on how to assign variable to the children of . xml portion <item> <url /> <name /> - <photos> <photo>1020944_0.jpg</photo> <photo>1020944_1.jpg</photo> <photo>1020944_2.jpg</photo> </photos> <user_id /> </item> PHP code <? global $insideitem, $tag, $name, $photos, $user_id; global $count,$db; $db = mysql_connect("localhost", "user","pass"); mysql_select_db("db_name",$db); $result = mysql_query("SELECT user_id FROM table,$db); while ($myrow = mysql_fetch_array($result)){ $uid=$myrow['user_id']; $UN_ID[$uid]=$uid; } $count=1; $count2=1; // ########################################################## // ************* START ELEMENT FUNCTION ********************* // ########################################################## function startElement($parser, $name, $attrs) { global $insideitem, $tag, $name, $photos, $user_id; if ($insideitem) { $tag = $name; } elseif($name == "ITEM"){ $insideitem = true; } } function endElement($parser, $name) { global $insideitem, $tag, $name, $photos, $user_id; global $count,$count2,$db,$UN_ID; if ($name == "ITEM") { if(!$UN_ID[$unique_id]){ $name=addslashes($name); $photo1=addslashes($photo); $photo2=addslashes($photo); $photo3=addslashes($photo); $photo4=addslashes($photo); $user_id=addslashes($category); $sql = "INSERT INTO table ( name, photo1, photo2, photo3, photo4, user_id ) VALUES ( '$name', '$photo', '$photo', '$photo', '$photo', '$user_id', )"; $resultupdate = mysql_query($sql); } $name=''; $photos=''; $user_id=''; } } function characterData($parser, $data) { global $insideitem, $tag, $name, $photos, $user_id; if ($insideitem) { switch ($tag) { case "NAME": $name .= $data; break; case "PHOTOS": $photos .= $data; break; case "USER_ID": $user_id .= $data; break; } } } $xml_parser = xml_parser_create(); xml_set_element_handler($xml_parser, "startElement", "endElement"); xml_set_character_data_handler($xml_parser, "characterData"); $fp = fopen("../myfile.xml","r") or die("Error reading RSS data."); while ($data = fread($fp, 4096)) // Parse each 4KB chunk with the XML parser created above xml_parse($xml_parser, $data, feof($fp)) // Handle errors in parsing or die(sprintf("XML error: %s at line %d", xml_error_string(xml_get_error_code($xml_parser)), xml_get_current_line_number($xml_parser))); fclose($fp); // ########################################################## // *********************** FREE MEMORY ********************** // ########################################################## xml_parser_free($xml_parser); ?> The number of tags can range between 1-4. I have tried searching everywhere for info on how to do this and tried everything but I just cant get it. After several days of this giving me headaches I really hope some one can enlighten me.

    Read the article

  • Cannot add namespace prefix to children using XSL

    - by Erdal
    I checked many answers here and I think I am almost there. One thing that is bugging me (and for some reason my peer needs it) follows: I have the following input XML: <?xml version="1.0" encoding="utf-8"?> <MyRoot> <MyRequest CompletionCode="0" CustomerID="9999999999"/> <List TotalList="1"> <Order CustomerID="999999999" OrderNo="0000000001" Status="Shipped"> <BillToAddress ZipCode="22221"/> <ShipToAddress ZipCode="22222"/> <Totals Tax="0.50" SubTotal="10.00" Shipping="4.95"/> </Order> </List> <Errors/> </MyRoot> I was asked to produce this: <ns:MyNewRoot xmlns:ns="http://schemas.foo.com/response" xmlns:N1="http://schemas.foo.com/request" xmlns:N2="http://schemas.foo.com/details"> <N1:MyRequest CompletionCode="0" CustomerID="9999999999"/> <ns:List TotalList="1"> <N2:Order CustomerID="999999999" Level="Preferred" Status="Shipped"> <N2:BillToAddress ZipCode="22221"/> <N2:ShipToAddress ZipCode="22222"/> <N2:Totals Tax="0.50" SubTotal="10.00" Shipping="4.95"/> </N2:Order> </ns:List> <ns:Errors/> </ns:MyNewRoot> Note the children of the N2:Order also needs N2: prefix as well as the ns: prefix for the rest of the elements. I use the XSL transformation below: <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:output omit-xml-declaration="yes" indent="yes"/> <xsl:template match="@* | node()"> <xsl:copy> <xsl:apply-templates select="@* | node()"/> </xsl:copy> </xsl:template> <xsl:template match="/MyRoot"> <MyNewRoot xmlns="http://schemas.foo.com/response" xmlns:N1="http://schemas.foo.com/request" xmlns:N2="http://schemas.foo.com/details"> <xsl:apply-templates/> </MyNewRoot> </xsl:template> <xsl:template match="/MyRoot/MyRequest"> <xsl:element name="N1:{name()}" namespace="http://schemas.foo.com/request"> <xsl:copy-of select="namespace::*"/> <xsl:apply-templates select="@* | node()"/> </xsl:element> </xsl:template> <xsl:template match="/MyRoot/List/Order"> <xsl:element name="N2:{name()}" namespace="http://schemas.foo.com/details"> <xsl:copy-of select="namespace::*"/> <xsl:apply-templates select="@* | node()"/> </xsl:element> </xsl:template> </xsl:stylesheet> This one doesn't process the ns (I couldn't figure this out). When I process thru the above the XSL transformation with AltovaXML I end up with below: <MyNewRoot xmlns="http://schemas.foo.com/response" xmlns:N1="http://schemas.foo.com/request" xmlns:N2="http://schemas.foo.com/details"> <N1:MyRequest CompletionCode="0" CustomerID="9999999999"/> <List xmlns="" TotalList="1"> <N2:Order CustomerID="999999999" Level="Preferred" Status="Shipped"> <BillToAddress ZipCode="22221"/> <ShipToAddress ZipCode="22222"/> <Totals Tax="0.50" SubTotal="10.00" Shipping="4.95"/> </N2:Order> </List> <Errors/> </MyNewRoot> Note that N2: prefix for the children of Order is not there after the XSL transformation. Also additional xmlns="" in the Order header (for some reason). I couldn't figure out putting the ns: prefix for the rest of the elements (like Errors and List). First of all, why would I need to put the prefix for the children if the parent already has it. Doesn't the parent namespace dictate the children nodes/attribute namespaces? Secondly, I want to add the prefixes in the above XML as expected, how can I do that with XSL?

    Read the article

  • What does a well formed XML or Schema look like for InfoPath form creation?

    - by Keith Sirmons
    Howdy, Are there any resources out there that define what a well-formed XML document or schema should look like for InfoPath? When designing a new form, there is an option to base the new form off of an existing XML document or XML Schema as the data source. I am looking for any guidelines or rules that will help me make sure the structure of the XML file I use to create the form works as well as it possibly can. I am creating the XML structure for another project, but we want to make sure the XML that is created is InfoPath-friendly for possible future applications. Thank you, Keith

    Read the article

  • XML structure question

    - by Andrew Jahn
    I have a basic XML object that I'm working with, and I can't figure out how to access the parts of it. For example: <FacetsData> <Collection name="CDDLALL" type="Group"> <SubCollection name="CDDLALL" type="Row"> <Column name="DPDP_ID">D0230</Column> <Column name="Count">9</Column> </SubCollection> <SubCollection name="CDDLALL" type="Row"> <Column name="DPDP_ID">D1110</Column> <Column name="Count">9</Column> </SubCollection> </Collection> </FacetsData> What I need to do is check each DPDP_ID: if its value is D0230 I leave the Count alone; for anything else I change the Count to 1. What I have so far: node = doc.DocumentElement; nodeList = node.SelectNodes("/FacetsData/Collection/SubCollection"); for (int x = 0; x < nodeList.Count; x++) { if (nodeList[x].HasChildNodes) { for (int i = 0; i < nodeList[x].ChildNodes.Count; i++) { // This is the part where I can't figure out how to get the name="" part of the xml // MessageBox.Show(nodeList[x].ChildNodes[i].InnerText); gets the "D0230","1" // part but not the "DPDP_ID","Count" part. } } }
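
    One way to get at both the name="" attributes and the element values is to read the attribute by name instead of using InnerText. A minimal sketch; the file path is a placeholder:

      using System;
      using System.Xml;

      class FixCounts
      {
          static void Main()
          {
              XmlDocument doc = new XmlDocument();
              doc.Load(@"C:\temp\facets.xml");   // placeholder path

              foreach (XmlNode sub in doc.SelectNodes("/FacetsData/Collection/SubCollection"))
              {
                  string dpdpId = null;
                  XmlNode countNode = null;

                  foreach (XmlNode col in sub.SelectNodes("Column"))
                  {
                      // The name="" part is an attribute, so read it from Attributes;
                      // InnerText holds the element value ("D0230", "9", ...).
                      string name = col.Attributes["name"].Value;
                      if (name == "DPDP_ID") dpdpId = col.InnerText;
                      if (name == "Count") countNode = col;
                  }

                  // Leave D0230 rows alone; force every other Count to 1.
                  if (dpdpId != "D0230" && countNode != null)
                      countNode.InnerText = "1";
              }

              doc.Save(@"C:\temp\facets.xml");
          }
      }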

    Read the article

  • DataSets and XML - The Simplistic Approach

    One of the first ways I learned how to read XML data from external data sources was by using a DataSet's ReadXml method. This method takes a file path (or URL) for an XML document and loads it into the DataSet, which is great when you need a simple way to process an XML document. In addition, the DataSet object also offers a simple way to save data in XML format with the WriteXml method, which writes the DataSet's current data to an XML file for later use.

      DataSet ds = new DataSet();
      string filePath = "http://www.yourdomain.com/someData.xml";
      string fileSavePath = @"C:\Temp\Test.xml";
      // Read the XML from this location
      ds.ReadXml(filePath);
      // Save the XML to this location
      ds.WriteXml(fileSavePath);

    I have used ReadXml before when consuming data from external RSS feeds to display on one of my sites. It allows me to quickly pull in data from external sites with little to no processing. Example site: MyCreditTech.com

    Read the article

  • Automatically minifying attribute/element names when using XmlSerializer

    - by frou
    When serializing a C# class using XmlSerializer, the attributes/elements representing the properties of the class will have the same names as they do in the source code. I know you can override this by doing like so: [XmlAttribute("num")] public int NumberOfThingsThatAbcXyz { get; set; } I'd like the generated XML for my classes to be as compact as possible, but obviously still capable of being automatically deserialized on the other side. Is there a way to have these names minified as much as possible without having to manually think of and annotate everything with a short string? The resultant XML being easily human readable isn't a concern.
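
    There is no built-in "minify" switch, but you can avoid annotating every member by building the name overrides at runtime with XmlAttributeOverrides. A hedged sketch of that idea (the naming scheme is illustrative; it only works for properties whose types can legally be XML attributes, i.e. simple types, and both the serializing and deserializing side must build the serializer the same way so the generated names line up):

      using System;
      using System.Linq;
      using System.Reflection;
      using System.Xml.Serialization;

      static class MiniSerializer
      {
          // Builds a serializer whose XML attribute names are "a", "b", "c", ...
          // Limited to simple-typed properties and to fewer than 27 of them.
          public static XmlSerializer For(Type type)
          {
              var overrides = new XmlAttributeOverrides();
              int i = 0;

              // Sort by name so both ends generate identical short names.
              foreach (PropertyInfo prop in type.GetProperties().OrderBy(p => p.Name))
              {
                  string shortName = ((char)('a' + i++)).ToString();
                  var attrs = new XmlAttributes
                  {
                      XmlAttribute = new XmlAttributeAttribute(shortName)
                  };
                  overrides.Add(type, prop.Name, attrs);
              }

              return new XmlSerializer(type, overrides);
          }
      }

    If the two ends can drift apart over time, shipping an explicit name map (property name to short name) is safer than relying on reflection producing the same ordering everywhere.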

    Read the article

  • How to set xmlns when serializing object in c#

    - by John
    I am serializing an object in my ASP.NET MVC program to an XML string like this: StringWriter sw = new StringWriter(); XmlSerializer s = new XmlSerializer(typeof(mytype)); s.Serialize(sw, myData); This gives me the following as the first two lines: <?xml version="1.0" encoding="utf-16"?> <GetCustomerName xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> My question is: how can I change the xmlns declarations and the encoding when serializing? Thanks
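
    A sketch of one way to control both; mytype and myData are the poster's own type and instance, and the namespace URI is a placeholder. StringWriter always reports utf-16, so writing through an XmlWriter is what lets you pick the encoding, and an XmlSerializerNamespaces instance controls which xmlns declarations appear:

      using System.IO;
      using System.Text;
      using System.Xml;
      using System.Xml.Serialization;

      // Put the serialized elements in a chosen default namespace...
      var serializer = new XmlSerializer(typeof(mytype), "http://example.com/customers");

      // ...declare it without a prefix; supplying your own namespaces also stops
      // the serializer from emitting the default xsi/xsd declarations.
      var ns = new XmlSerializerNamespaces();
      ns.Add("", "http://example.com/customers");

      // The encoding in the XML declaration comes from the writer, not the serializer.
      // UTF8Encoding(false) avoids a byte-order mark at the start of the output.
      var settings = new XmlWriterSettings { Encoding = new UTF8Encoding(false), Indent = true };

      using (var stream = new MemoryStream())
      {
          using (var writer = XmlWriter.Create(stream, settings))
          {
              serializer.Serialize(writer, myData, ns);
          }
          string xml = Encoding.UTF8.GetString(stream.ToArray());
      }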

    Read the article

  • data is not getting ordered by date

    - by raging_boner
    Can't get list sorted by date. How to show data ordered by date? XDocument doc = XDocument.Load(Server.MapPath("file.xml")); IEnumerable<XElement> items = from item in doc.Descendants("item") orderby Convert.ToDateTime(item.Attribute("lastChanges").Value) descending where item.Attribute("x").Value == 1 select item; Repeater1.DataSource = items; Repeater1.DataBind(); Xml file looks like this: <root> <items> <item id="1" lastChanges="15-05-2010" x="0" /> <item id="2" lastChanges="16-05-2010" x="1" /> <item id="3" lastChanges="17-05-2010" x="1" /> </items> </root>
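
    Two things are likely biting here: lastChanges is in dd-MM-yyyy form, so Convert.ToDateTime only works under a matching culture, and the x attribute is a string, so it should be compared with "1" rather than the number 1. A hedged rewrite of the query, keeping the poster's Server.MapPath and Repeater1:

      using System;
      using System.Globalization;
      using System.Linq;
      using System.Xml.Linq;

      XDocument doc = XDocument.Load(Server.MapPath("file.xml"));

      var items = from item in doc.Descendants("item")
                  where (string)item.Attribute("x") == "1"
                  orderby DateTime.ParseExact((string)item.Attribute("lastChanges"),
                                              "dd-MM-yyyy",
                                              CultureInfo.InvariantCulture) descending
                  select item;

      Repeater1.DataSource = items.ToList();
      Repeater1.DataBind();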

    Read the article

  • xmlns='' was not expected when deserializing nested classes

    - by Mavrik
    I have a problem when attempting to serialize class on a server, send it to the client and deserialize is on the destination. On the server I have the following two classes: [XmlRoot("StatusUpdate")] public class GameStatusUpdate { public GameStatusUpdate() {} public GameStatusUpdate(Player[] players, Command command) { this.Players = players; this.Update = command; } [XmlArray("Players")] public Player[] Players { get; set; } [XmlElement("Command")] public Command Update { get; set; } } and [XmlRoot("Player")] public class Player { public Player() {} public Player(PlayerColors color) { Color = color; ... } [XmlAttribute("Color")] public PlayerColors Color { get; set; } [XmlAttribute("X")] public int X { get; set; } [XmlAttribute("Y")] public int Y { get; set; } } (The missing types are all enums). This generates the following XML on serialization: <?xml version="1.0" encoding="utf-16"?> <StatusUpdate> <Players> <Player Color="Cyan" X="67" Y="32" /> </Players> <Command>StartGame</Command> </StatusUpdate> On the client side, I'm attempting to deserialize that into following classes: [XmlRoot("StatusUpdate")] public class StatusUpdate { public StatusUpdate() { } [XmlArray("Players")] [XmlArrayItem("Player")] public PlayerInfo[] Players { get; set; } [XmlElement("Command")] public Command Update { get; set; } } and [XmlRoot("Player")] public class PlayerInfo { public PlayerInfo() { } [XmlAttribute("X")] public int X { get; set; } [XmlAttribute("Y")] public int Y { get; set; } [XmlAttribute("Color")] public PlayerColor Color { get; set; } } However, the deserializer throws an exception: There is an error in XML document (2, 2). <StatusUpdate xmlns=''> was not expected. What am I missing or doing wrong?
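
    That error usually means the root element name or namespace the deserializer expects does not match what actually arrived (here the document's root is StatusUpdate in no namespace). One way to remove the ambiguity is to hand the serializer the expected root explicitly; a minimal sketch, where xml is the string received from the server:

      using System.IO;
      using System.Xml.Serialization;

      // Pin the root name and (empty) namespace so client and server agree.
      var root = new XmlRootAttribute("StatusUpdate") { Namespace = "", IsNullable = true };
      var serializer = new XmlSerializer(typeof(StatusUpdate), root);

      StatusUpdate update;
      using (var reader = new StringReader(xml))
      {
          update = (StatusUpdate)serializer.Deserialize(reader);
      }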

    Read the article

  • How to read from an XmlReader without moving it forwards

    - by andy
    hey guys, I've got this scenario: while (reader.Read()) { if (reader.NodeType == XmlNodeType.Element && reader.Name == itemElementName) { XElement item = null; try { item = XElement.ReadFrom(reader) as XElement; } catch (XmlException ex) { // log line number and other details from the XmlException } } } In the above loop I'm transforming a certain node (itemElementName) into an XElement. Some nodes will be good XML and will go into an XElement; however, some will not. In the catch, I'd like to not only log the standard XmlException details, I'd also like to capture an extract of the current XML as a string. However, if I do any kind of read operation on the node before I pass it to the XElement, it moves the reader forward. How can I get a "snapshot" of the reader's current OuterXml without interfering with its position?
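
    A forward-only XmlReader cannot really be peeked without being consumed, so one practical pattern is to take the element's OuterXml once as a string and then parse that buffer: the string is the snapshot for logging, and it also replaces the ReadFrom call. A sketch, where the element name mirrors the question and the reader source is a placeholder:

      using System;
      using System.Xml;
      using System.Xml.Linq;

      string itemElementName = "item";                                    // as in the question
      using (XmlReader reader = XmlReader.Create(@"C:\temp\items.xml"))   // placeholder source
      {
          while (!reader.EOF)
          {
              if (reader.NodeType == XmlNodeType.Element && reader.Name == itemElementName)
              {
                  // Grab line info while the reader is still positioned on the element.
                  var lineInfo = reader as IXmlLineInfo;
                  int line = (lineInfo != null && lineInfo.HasLineInfo()) ? lineInfo.LineNumber : 0;

                  // Snapshot of the element; this also advances the reader past it,
                  // which is why the loop only calls Read() in the else branch.
                  string rawXml = reader.ReadOuterXml();

                  try
                  {
                      XElement item = XElement.Parse(rawXml);
                      // ... process item ...
                  }
                  catch (XmlException ex)
                  {
                      Console.WriteLine($"Bad <{itemElementName}> near line {line}: {ex.Message}");
                      Console.WriteLine(rawXml);
                  }
              }
              else
              {
                  reader.Read();
              }
          }
      }

    If the underlying document itself is not well-formed, the reader throws inside ReadOuterXml and generally cannot continue past that point, so the fragment-level catch mainly helps when the failure is in your own handling of each item.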

    Read the article

  • get attributes from xml tree using linq

    - by nelsonwebs
    I'm working with an XML file that looks like this: <?xml version="1.0" encoding="UTF-8"?> <element1 xmlns="http://namespace1/"> <element2> <element3> <element4 attr1="2009-11-09"> <element5 attr2="NAME1"> <element6 attr3="1"> <element7 attr4="1" attr5="5.5" attr6="3.4"/> </element6> </element5> <element5 attr2="NAME2"> <element6 attr3="1"> <element7 attr4="3" attr5="4" attr6="4.5"/> </element6> </element5> </element4> </element3> </element2> </element1> I need to loop through element5 and retrieve the attributes in an IEnumerable like this: attr1, attr2, attr3, attr4, attr5, attr6, using LINQ to XML and C#. I can loop through the element5 elements and get all the attr2 info, but I can't figure out how to get the parent or child attributes I need. UPDATE: Thanks for the feedback thus far. For clarity, I need to loop through element5. So basically, what I have right now (which isn't much) is . . . XElement xel = XElement.Load(xml); IEnumerable<XElement> cList = from el in xel.Elements(env + "element2").Element (n2 + "element3").Elements(n2 + "element4").Elements(ns + "element5") select el; foreach (XElement e in cList) Console.WriteLine(e.Attribute("attr2").Value.ToString()); This gives me all the attr2 values in the loop, but I could be going about this all wrong for what I'm trying to achieve. I also need to collect the other attributes mentioned above in a collection (the Console reference is just me playing with this right now; the end result I need is a collection). So the end result would be a collection like: attr1, attr2, attr3, attr4, attr5, attr6 2009-11-09, name1, 1, 1, 5.5, 3.4 2009-11-09, name2, 1, 3, 4, 4.5 Does that make sense?
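
    Since everything lives in the default namespace http://namespace1/, the trick is to combine an XNamespace with navigation up to the parent (element4) and down into element6/element7. A minimal sketch that projects the six attributes into an anonymous-typed collection; the file path is a placeholder:

      using System;
      using System.Linq;
      using System.Xml.Linq;

      XNamespace ns = "http://namespace1/";          // the document's default namespace
      XElement root = XElement.Load(@"C:\temp\data.xml");

      var rows = from e5 in root.Descendants(ns + "element5")
                 let e4 = e5.Parent
                 let e6 = e5.Element(ns + "element6")
                 let e7 = e6.Element(ns + "element7")
                 // Attributes never inherit the default namespace, so no "ns +" is needed here.
                 select new
                 {
                     Attr1 = (string)e4.Attribute("attr1"),
                     Attr2 = (string)e5.Attribute("attr2"),
                     Attr3 = (string)e6.Attribute("attr3"),
                     Attr4 = (string)e7.Attribute("attr4"),
                     Attr5 = (string)e7.Attribute("attr5"),
                     Attr6 = (string)e7.Attribute("attr6")
                 };

      foreach (var row in rows)
          Console.WriteLine($"{row.Attr1}, {row.Attr2}, {row.Attr3}, {row.Attr4}, {row.Attr5}, {row.Attr6}");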

    Read the article

  • What is the preferred way to update database schemas in multiple production environments

    - by rmarimon
    I am about to install some 20 servers with the same web application in multiple locations connected to their own local database. I will be updating the web applications remotely (perhaps using debian's package manager) and I'm sure will eventually need to update the database schemas. Since each server could be eventually be using a different release of the web application, I need a way to apply the incremental changes to the servers. I'm thinking something like this. Let's start with database.schema.1 as the original release of the database and assume this number increases with each new version of the schema. I eventually could end up with database.schema.17 as the current release. For a new installation this would be the schema to install. It seems to me that I would need consecutive translations like database.translation.1.2 which would convert database.schema.1 into database.schema.2, database.translation.2.3 to convert from 2 to 3 and so on until 17. It seems that whenever I change a schema I need to alter the database but perhaps I need to run some script to update the data which might be done with SQL but might require an external non sql script. What is the appropriate way to organize all these files? What is the automatic way to apply those upgrades to the schema? Where do I store the current version number of the schema?
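
    One common shape for this, sketched below in C# (though any language your packaging hooks can run will do), is to keep the current version in a one-row table in each local database and ship the consecutive translation scripts with the package; the updater then walks from the stored version up to the target version. The table, file, and path names here are hypothetical, and the ADO.NET provider would be whatever matches the actual database:

      using System;
      using System.Data.SqlClient;
      using System.IO;

      class SchemaUpgrader
      {
          const int TargetVersion = 17;                            // version this release needs
          const string ScriptDir = "/usr/share/myapp/schema";      // shipped by the package

          static void Main()
          {
              using (var conn = new SqlConnection("...connection string..."))
              {
                  conn.Open();

                  int current;
                  using (var cmd = new SqlCommand("SELECT version FROM schema_version", conn))
                      current = (int)cmd.ExecuteScalar();

                  for (int v = current; v < TargetVersion; v++)
                  {
                      // database.translation.N.(N+1).sql converts schema N into schema N+1.
                      string path = Path.Combine(ScriptDir, $"database.translation.{v}.{v + 1}.sql");
                      string sql = File.ReadAllText(path);

                      using (var tx = conn.BeginTransaction())
                      {
                          new SqlCommand(sql, conn, tx).ExecuteNonQuery();
                          new SqlCommand($"UPDATE schema_version SET version = {v + 1}", conn, tx)
                              .ExecuteNonQuery();
                          tx.Commit();
                      }
                  }
              }
          }
      }

    Steps that also need non-SQL data fixes can be handled by making a version bump a pair (SQL file plus an optional external script) and running both before updating the version row.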

    Read the article

  • SqlBuildTask failed due to ArgumentNullException(searchingPaths)

    At one of my customers, they have setup TFS 2010. They are using the UpgradeTemplate.xaml to build all their solutions, including GDR2 database projects. When building the project, I got the following error message DspBuild:   Creating a model to represent the project... C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018: The "SqlBuildTask" task failed unexpectedly. [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj] C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018: System.ArgumentNullException: Value cannot be null. [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj] C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018: Parameter name: searchingPaths [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj] C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Data.Schema.Extensibility.ExtensionAssemblyResolver..ctor(List`1 searchingPaths) [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj] C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Data.Schema.Extensibility.ExtensionTypeLoader.LoadTypes() [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj] C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Data.Schema.Extensibility.ExtensionManager..ctor(String databaseSchemaProviderType) [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj] C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Data.Schema.Tasks.TaskHostLoader.LoadImpl(ITaskHost providedHost, TaskLoggingHelper providedLogger) [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj] C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Data.Schema.Tasks.TaskHostLoader.Load(ITaskHost providedHost, TaskLoggingHelper providedLogger) [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj] C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Data.Schema.Tasks.DBBuildTask.Execute() [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj] C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute() [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj] C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Build.BackEnd.TaskBuilder.ExecuteInstantiatedTask(ITaskExecutionHost taskExecutionHost, TaskLoggingContext taskLoggingContext, TaskHost taskHost, ItemBucket bucket, TaskExecutionMode howToExecuteTask, Boolean& taskResult) [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj] Solution To solve this error you set the MSBuild Platform in the Build Defintion to X86:

    Read the article

  • Desktop Fun: Winter 2012 Wallpaper Collection [Bonus Size]

    - by Asian Angel
    The frostiest time of year is here once again, and we have the perfect collection of snowy backgrounds for your favorite computer. Turn your desktop into a winter wonderland with our Winter 2012 Wallpaper collection.

    Read the article

  • SQL SERVER Configure Management Data Collection in Quick Steps T-SQL Tuesday #005

    This article was written as a response to T-SQL Tuesday #005 (Reporting). The three most important components of any computer or server are the CPU, the memory, and the hard disk. This post talks about how to get more details about these three components using Management Data Collection. Management Data Collection generates the [...]

    Read the article

  • Developing Schema Compare for Oracle (Part 1)

    - by Simon Cooper
    SQL Compare is one of Red Gate's most successful SQL Server tools; it allows developers and DBAs to compare and synchronize the contents of their databases. Although similar tools exist for Oracle, they are quite noticeably lacking in the usability and stability that SQL Compare is known for in the SQL Server world. We could see a real need for a usable schema comparison tools for Oracle, and so the Schema Compare for Oracle project was born. Over the next few weeks, as we come up to release of v1, I'll be doing a series of posts on the development of Schema Compare for Oracle. For the first post, I thought I would start with the main pitfalls that we stumbled across when developing the product, especially from a SQL Server background. 1. Schemas and Databases The most obvious difference is that the concept of a 'database' is quite different between Oracle and SQL Server. On SQL Server, one server instance has multiple databases, each with separate schemas. There is typically little communication between separate databases, and most databases are no more than about 1000-2000 objects. This means SQL Compare can register an entire database in a reasonable amount of time, and cross-database dependencies probably won't be an issue. It is a quite different scene under Oracle, however. The terms 'database' and 'instance' are used interchangeably, (although technically 'database' refers to the datafiles on disk, and 'instance' the running Oracle process that reads & writes to the database), and a database is a single conceptual entity. This immediately presents problems, as it is infeasible to register an entire database as we do in SQL Compare; in my Oracle install, using the standard recommended options, there are 63975 system objects. If we tried to register all those, not only would it take hours, but the client would probably run out of memory before we finished. As a result, we had to allow people to specify what schemas they wanted to register. This decision had quite a few knock-on effects for the design, which I will cover in a future post. 2. Connecting to Oracle The next obvious difference is in actually connecting to Oracle – in SQL Server, you can specify a server and database, and off you go. On Oracle things are slightly more complicated. SIDs, Service Names, and TNS A database (the files on disk) must have a unique identifier for the databases on the system, called the SID. It also has a global database name, which consists of a name (which doesn't have to match the SID) and a domain. Alternatively, you can identify a database using a service name, which normally has a 1-to-1 relationship with instances, but may not if, for example, using RAC (Real Application Clusters) for redundancy and failover. You specify the computer and instance you want to connect to using TNS (Transparent Network Substrate). The user-visible parts are a config file (tnsnames.ora) on the client machine that specifies how to connect to an instance. For example, the entry for one of my test instances is: SC_11GDB1 = (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = simonctest)(PORT = 1521)) ) (CONNECT_DATA = (SID = 11gR1db1) ) ) This gives the hostname, port, and SID of the instance I want to connect to, and associates it with a name (SC_11GDB1). The tnsnames syntax also allows you to specify failover, multiple descriptions and address lists, and client load balancing. You can then specify this TNS identifier as the data source in a connection string. 
Although using ODP.NET (the .NET dlls provided by Oracle) was fine for internal prototype builds, once we released the EAP we discovered that this simply wasn't an acceptable solution for installs on other people's machines. Due to .NET assembly strong naming, users had to have installed on their machines the exact same version of the ODP.NET dlls as we had on our build server. We couldn't ship the ODP.NET dlls with our installer as the Oracle license agreement prohibited this, and we didn't want to force users to install another Oracle client just so they can run our program. To be able to list the TNS entries in the connection dialog, we also had to locate and parse the tnsnames.ora file, which was complicated by users with several Oracle client installs and intricate TNS entries. After much swearing at our computers, we eventually decided to use a third party Oracle connection library from Devart that we could ship with our program; this could use whatever client version was installed, parse the TNS entries for us, and also had the nice feature of being able to connect to an Oracle server without having any client installed at all. Unfortunately, their current license agreement prevents us from shipping an Oracle SDK, but that's a bridge we'll cross when we get to it. 3. Running synchronization scripts The most important difference is that in Oracle, DDL is non-transactional; you cannot rollback DDL statements like you can on SQL Server. Although we considered various solutions to this, including using the flashback archive or recycle bin, or generating an undo script, no reliable method of completely undoing a half-executed sync script has yet been found; so in this case we simply have to trust that the DBA or developer will check and verify the script before running it. However, before we got to that stage, we had to get the scripts to run in the first place... To run a synchronization script from SQL Compare we essentially pass the script over to the SqlCommand.ExecuteNonQuery method. However, when we tried to do the same for an OracleConnection we got a very strange error – 'ORA-00911: invalid character', even when running the most basic CREATE TABLE command. After much hair-pulling and Googling, we discovered that Oracle has got some very strange behaviour with semicolons at the end of statements. To understand what's going on, we need to take a quick foray into SQL and PL/SQL. PL/SQL is not T-SQL In SQL Server, T-SQL is the language used to interface with the database. It has DDL, DML, control flow, and many other nice features (like Turing-completeness) that you can mix and match in the same script. In Oracle, DDL SQL and PL/SQL are two completely separate languages, with different syntax, different datatypes and different execution engines within the instance. Oracle SQL is much more like 'pure' ANSI SQL, with no state, no control flow, and only the basic DML commands. PL/SQL is the Turing-complete language, but can only do DML and DCL (i.e. BEGIN TRANSATION commands). Any DDL or SQL commands that aren't recognised by the PL/SQL engine have to be passed back to the SQL engine via an EXECUTE IMMEDIATE command. In PL/SQL, a semicolons is a valid token used to delimit the end of a statement. In SQL, a semicolon is not a valid token (even though the Oracle documentation gives them at the end of the syntax diagrams) . 
When you execute the command CREATE TABLE table1 (COL1 NUMBER); in SQL*Plus the semicolon on the end is a command to SQL*Plus to execute the preceding statement on the server; it strips off the semicolon before passing it on. SQL Developer does a similar thing. When executing a PL/SQL block, however, the syntax is like so: BEGIN INSERT INTO table1 VALUES (1); INSERT INTO table1 VALUES (2); END; / In this case, the semicolon is accepted by the PL/SQL engine as a statement delimiter, and instead the / is the command to SQL*Plus to execute the current block. This explains the ORA-00911 error we got when trying to run the CREATE TABLE command – the server is complaining about the semicolon on the end. This also means that there is no SQL syntax to execute more than one DDL command in the same OracleCommand. Therefore, we would have to do a round-trip to the server for every command we want to execute. Obviously, this would cause lots of network traffic and be very slow on slow or congested networks. Our first attempt at a solution was to wrap every SQL statement (without semicolon) inside an EXECUTE IMMEDIATE command in a PL/SQL block and pass that to the server to execute. One downside of this solution is that we get no feedback as to how the script execution is going; we're currently evaluating better solutions to this thorny issue. Next up: Dependencies; how we solved the problem of being unable to register the entire database, and the knock-on effects to the whole product.
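    To make the round-trip point concrete, here is a minimal C# sketch of that first-attempt workaround: batch the statements into one anonymous PL/SQL block of EXECUTE IMMEDIATE calls so a whole script becomes a single ExecuteNonQuery call. The helper name and quoting rules are illustrative, not Schema Compare's actual code:

      using System.Collections.Generic;
      using System.Text;

      static class OracleBatchBuilder
      {
          public static string Build(IEnumerable<string> ddlStatements)
          {
              var block = new StringBuilder("BEGIN\n");
              foreach (string ddl in ddlStatements)
              {
                  // No trailing semicolon inside the literal (that is what triggers ORA-00911),
                  // and single quotes must be doubled inside a PL/SQL string literal.
                  string escaped = ddl.TrimEnd().TrimEnd(';').Replace("'", "''");
                  block.AppendLine($"  EXECUTE IMMEDIATE '{escaped}';");
              }
              block.Append("END;");
              return block.ToString();
          }
      }

      // Usage with any ADO.NET Oracle provider:
      //   cmd.CommandText = OracleBatchBuilder.Build(new[] { "CREATE TABLE table1 (col1 NUMBER)" });
      //   cmd.ExecuteNonQuery();

    As the post notes, the price of batching is that feedback only arrives when the whole block finishes (or fails).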

    Read the article

  • C# WPF fill datagrid or listview from XML file

    - by Dave
    I have a well-formed XML file I would like to fill a datagrid with. I would prefer to use the AutoGenerateColumns feature of the WPF Toolkit DataGrid, but I could hard-code the columns instead. The trouble I am having is getting the XML file's contents into the datagrid. I had it partially working with a ListView, but I think a DataGrid is better suited to my needs. Can anyone provide a quick example of how to accomplish this? Thanks!
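
    A minimal sketch of the DataSet route, assuming the WPF Toolkit DataGrid is declared in XAML with x:Name="grid" and AutoGenerateColumns="True"; the file path is a placeholder:

      using System.Data;
      using System.Windows;

      public partial class MainWindow : Window
      {
          public MainWindow()
          {
              InitializeComponent();

              // Let the DataSet infer tables and columns from the XML structure.
              var ds = new DataSet();
              ds.ReadXml(@"C:\temp\data.xml");

              // Bind the first inferred table; auto-generated columns pick up its schema.
              grid.ItemsSource = ds.Tables[0].DefaultView;
          }
      }

    If the XML nests several levels deep, ReadXml creates one table per element type, so a different table index (or flattening the file first) may be needed.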

    Read the article

  • Using returned XML from an authorised HTTP request in vb.NET

    - by Nathan
    Hi, How can I use the XML returned by this request in an XmlTextReader? ' Create the web request request = DirectCast(WebRequest.Create("https://mobilevikings.com/api/1.0/rest/mobilevikings/sim_balance.xml"), HttpWebRequest) ' Add authentication to request request.Credentials = New NetworkCredential("username", "password") ' Get response response = DirectCast(request.GetResponse(), HttpWebResponse) ' Get the response stream into a reader reader = New StreamReader(response.GetResponseStream()) Thanks in advance, Nathan.
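
    Rather than going through a StreamReader, you can hand the response stream straight to an XML reader. A minimal sketch, shown in C# for consistency with the other examples on this page; the VB.NET translation is line-for-line:

      using System;
      using System.Net;
      using System.Xml;

      var request = (HttpWebRequest)WebRequest.Create(
          "https://mobilevikings.com/api/1.0/rest/mobilevikings/sim_balance.xml");
      request.Credentials = new NetworkCredential("username", "password");

      using (var response = (HttpWebResponse)request.GetResponse())
      using (var reader = XmlReader.Create(response.GetResponseStream()))
      {
          while (reader.Read())
          {
              // Walk the balance document element by element.
              if (reader.NodeType == XmlNodeType.Element)
                  Console.WriteLine(reader.Name);
          }
      }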

    Read the article
