Search Results

Search found 41565 results on 1663 pages for 'sql xml'.


  • SQL Server 2008 R2 Installation and the Phantom of SQL Server 2005 Express

    - by Davide Mauri
    Today I happily started to install SQL Server 2008 R2 on my development machine, which has this software installed: Windows Server 2008 R2 Standard, SQL Server 2008 SP1 CU5, Visual Studio 2008 SP1, BOL October 2009, AdventureWorks2008 Databases SR4, and Visual Studio 2010 RTM. So, all the basic standard stuff. The SQL Server 2008 R2 installation went smoothly until somewhere in the middle, where the rule engine checks that the software prerequisites are satisfied before starting to copy files. Here I had this @][@@[?!?! error: “The SQL Server 2005 Express Tools are installed. To continue, remove the SQL Server 2005 Express Tools.” Funnily enough, I don’t have and have never had SQL Server 2005 Express on my machine. Armed with patience I analyzed the install log at C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\yyyymmdd_hhmmss\Detail.txt and found that the rule “Sql2005SsmsExpressFacet” is the one in charge of this check; it looks for the existence of the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\90\Tools\ShellSEM (on x86) or HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Microsoft SQL Server\90\Tools\ShellSEM (on x64). In my registry that key did exist, due to the installation of the uber-cool Red Gate SQL Search. I removed the registry key and here it is: SQL Server 2008 R2 is installing while I’m writing this post. A note to Microsoft: can you please add more detailed information to the setup when such an error happens? Just saying “you have SQL Server 2005 Express installed” is not enough. Please show us what the rule looks for and why it failed directly in the Detailed Report, so that we don’t have to spend time looking for the needle in the logs. Thanks! :) PS I did a side-by-side installation with the existing SQL Server 2008 instance.

    Read the article

  • Can we replace XML with JSON entirely?

    - by Saeed Neamati
    I'm sure lots of developers are familiar with XML and JSON and have used both of them, so there's no point in explaining what they are or what their purpose is, even briefly. If we try to map their concepts, we can say (correct me if I'm wrong): XML tags are equivalent to JSON {}, XML attributes are equivalent to JSON properties, and an XML tag collection is equivalent to JSON []. The only thing I can think of that doesn't exist in JSON is XML namespaces. The question is: considering this mapping, and considering that JSON is considerably lighter under this mapping, can we see a world in the future (or at least theoretically imagine one) without XML, with JSON doing everything XML does? Can we use JSON everywhere XML is used? PS: Please note that I've seen this question. It's something entirely different from what I'm asking here, so please don't mark this as a duplicate.

    Read the article

  • Six in Six - SQL Server 2012 Webinars

    - by JustinL
    We're running six webinars over the next six months covering our experiences with SQL Server 2012 and customer deployments. I'm presenting the first, on upgrading to SQL Server 2012, next month; subsequent sessions will be delivered by colleagues: NOVEMBER: SQL Server 2012 Upgrade Approach and considerations – Friday 23rd November 12:00 – 13:00 Presents approaches for upgrade testing, managing risk and rollback. The session will include details on minimizing downtime and upgrading from SQL Server 2000, 2005, and 2008 including.... More details and register. DECEMBER: Delivering Mission Critical BI with SQL Server 2012 – Friday 14th December 12:00-13:00 Information is the lifeblood of many organisations, and the availability of timely, accurate information is critical to strategic decision making. This session covers the features and capabilities… More details and register. JANUARY: Architecting Highly Available solutions with SQL Server 2012 – Friday 18th January 12:00 - 13:00 An overview and comparison of the high availability features available within SQL Server 2012. The session considers business requirements for availability and recoverability and presents a number of alternative solution designs to meet… More details and register. FEBRUARY: Private cloud deployments with SQL Server 2012 – Friday 15th February 12:00 - 13:00 Cloud-based technology provides cost-effective scale and flexibility. This session provides an overview of the benefits organisations can realise through private cloud… More details and register. MARCH: Visualising data patterns with SQL Server 2012 – Friday 22nd March 12:00 - 13:00 This webinar demonstrates the ease of delivering business insight by exploring information and identifying trends through data visualisation. SQL Server 2012 provides new capability with enhanced performance and… More details and register. APRIL: Architecting Highly Available solutions with SQL Server 2012 – Friday 26th April 12:00 - 13:00 Customers are increasingly interested in leveraging the benefits of cloud-based solutions to provide scalable and flexible infrastructure to host their applications. This session looks at common design patterns and workloads… More details and register. Justin Langford - Coeo Ltd SQL Server Consultants | SQL Server Remote DBA

    Read the article

  • Loading a new instance of a class through XML not working quite right

    - by Thegluestickman
    I'm having trouble with XML and XNA. I want to be able to load weapon settings through XML, to make my weapons easier to create and to have less code in the actual project file. So I started out making a basic XML document, something to just assign variables with. But no matter what I changed, it gave me a new error every time. The code below gives me an "XML element 'Tag' not found" error, and when I added it, it started to say the variables weren't found. What I also wanted to do in the XML file was load a texture. So I created a static class to hold my texture values, and then in the Texture tag of my XML document I would set it to that instance too. I think that's where the problems are occurring, because that's where the "XML element 'Tag' not found" error is pointing me. My XML document: <XnaContent> <Asset Type="ConversationEngine.Weapon"> <weaponStrength>0</weaponStrength> <damageModifiers>0</damageModifiers> <speed>0</speed> <magicDefense>0</magicDefense> <description>0</description> <identifier>0</identifier> <weaponTexture>LoadWeaponTextures.ironSword</weaponTexture> </Asset> </XnaContent> My class to load the weapon XML: public static class LoadWeaponXML { static Weapon Weapons; public static Weapon WeaponLoad(ContentManager content, int id) { Weapons = content.Load<Weapon>(@"Weapons/" + id); return Weapons; } } public static class LoadWeaponTextures { public static Texture2D ironSword; public static void TextureLoad(ContentManager content) { ironSword = content.Load<Texture2D>("Sword"); } } I'm not entirely sure if you can load textures through XML, but any help would be greatly appreciated.

    Read the article

  • Oracle SQL Developer: Fetching SQL Statement Result Sets

    - by thatjeffsmith
    Running queries, browsing tables – you are often faced with many thousands, if not millions, of rows. Most people are happy with looking at the first few rows. But occasionally you need to see more. SQL Developer doesn’t show you all records, all at once. Instead, it brings the records down in ‘chunks,’ or as-needed. How It Works There is a preference that tells SQL Developer how many records to get in a single request, or ‘fetch’ of records. The default is 50… So if I run a query that returns MORE than 50 rows: There are more than 50 records in this result set, but we have 50 in the grid to start with. We don’t actually know how many records are in this result set. To show the record count here, we have to physically query the database with a row-count type query. All we know is that the query has finished executing, and that there are rows available to go fetch. It tells us when it’s done. As you scroll through the grid, if you get to record 50 and scroll more, we’ll get 50 more records. Or, you can cheat to get to the ‘bottom’ of the result set. You can ask SQL Developer to just go get all the records at once… Once all the records have been fetched, you’ll see this: All rows fetched! A word of caution There’s a reason we have the default set to 50 and not 1000. Bringing back data can get expensive and heavy. We’ve found the best performance to be in that 50 to 200 record range.

    Read the article

  • Receive the Broadcast program with XML [on hold]

    - by bitmez4
    I have a channel publishing site and I want to pull in CNN's broadcast schedule. CNN publishes the schedule here (you can see the XML file in the page source): http://tvprofil.net/xmltv/data/cnn.info/weekly_cnn.info_tvprofil.net.xml How can I retrieve the data according to the time? For example: Now playing: "bitmez's table"; next program: "stack's table" in 30 minutes. Is this possible? UPDATE 1 // - I can read the XML data, but only the whole XML file - <?php if(!$xml=simplexml_load_file('http://tvprofil.net/xmltv/data/cnn.info/weekly_cnn.info_tvprofil.net.xml')){ trigger_error('XML file -- read error',E_USER_ERROR); } echo 'X-'; foreach($xml as $programme){ echo 'Now: '.$programme->title.' <br/>'; } ?>

    Read the article

  • Updating Xml attributes with new values in a SQL Server 2008 table

    - by SMD
    I have a table in SQL Server 2008 that has some columns. One of these columns is of the xml type and I want to update some attributes. For example, my xml column's name is XmlText and its value in the first 5 rows is as follows: <Identification Name="John" Family="Brown" Age="30" /> <Identification Name="Smith" Family="Johnson" Age="35" /> <Identification Name="Jessy" Family="Albert" Age="60" /> <Identification Name="Mike" Family="Brown" Age="23" /> <Identification Name="Sarah" Family="Johnson" Age="30" /> and I want to change all Age attributes that are 30 to 40, as below: <Identification Name="John" Family="Brown" Age="40" /> <Identification Name="Smith" Family="Johnson" Age="35" /> <Identification Name="Jessy" Family="Albert" Age="60" /> <Identification Name="Mike" Family="Brown" Age="23" /> <Identification Name="Sarah" Family="Johnson" Age="40" />
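    A hedged sketch of how this kind of update can be written with the xml data type's modify() method, assuming the column really is typed xml; the table name dbo.People is made up for illustration.

```sql
-- Sketch only: dbo.People is a placeholder table name; XmlText is assumed to be
-- an xml-typed column holding one <Identification .../> element per row.
UPDATE dbo.People
SET XmlText.modify('replace value of (/Identification/@Age)[1] with "40"')
WHERE XmlText.value('(/Identification/@Age)[1]', 'int') = 30;
```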

    Read the article

  • Unit Testing XML independent of physical XML file

    - by RAbraham
    Hi, my question is: in JUnit, how do I set up XML data for my system under test (SUT) without making the SUT read from an XML file physically stored on the file system? Background: I am given an XML file which contains rules for the creation of an invoice. My job is to convert these rules from XML to Java objects, e.g. if there is a tag as below in my XML file, which indicates that after a period of 30 days the transaction cannot be invoiced <ExpirationDay>30</ExpirationDay> this converts to a Java class, say ExpirationDateInvoicingRule. I have a class InvoiceConfiguration which should take the XML file and create the *InvoicingRule objects. I am thinking of using StAX to parse the XML document within InvoiceConfiguration. Problem: I want to unit test InvoiceConfiguration. But I don't want InvoiceConfiguration to read from an XML file physically on the file system. I want my unit test to be independent of any physically stored XML file. I want to create an XML representation in memory. But a StAX parser only takes a FileReader (or I can play with the File object).

    Read the article

  • LINQ to SQL vs Entity Framework for an app with a future SQL Azure version

    - by Craig L
    I've got a vertical market Dot Net Framework 1.1 C#/WinForms/SQL Server 2000 application. Currently it uses ADO.Net and Microsoft's SQLHelper for CRUD operations. I've successfully converted it to Dot Net Framework 4 C#/WinForms/SQL Server 2008. What I'd like to do is also offer my customers the ability to use SQL Azure as backend storage for their data instead of a local/LAN SQL Server. If I know SQL Azure is in my application's future, should I: A. Switch to LINQ to SQL B. Switch to Entity Framework C. Stick with ADO.Net and SQLHelper Thanks!

    Read the article

  • Database Firewall

    - by ???02
    [Japanese-language article whose text was lost to character-encoding corruption in this copy. The surviving fragments indicate it introduces Oracle Database Firewall (released in 2011): inspecting SQL traffic in front of the database much as a WAF inspects HTTP, passing or blocking statements, support for Oracle Database, SQL Server, DB2 and Sybase, monitoring via a SPAN (port mirroring) port, comparison with IDS/IPS and WAF products, and a reference to the SQL standard ISO/IEC 9075.] Oracle Direct

    Read the article

  • Whether to use UNION or OR in SQL Server Queries

    - by Dinesh Asanka
    Recently I came across an article on DB2 about using UNION instead of OR. So I thought of carrying out some research on SQL Server into which scenarios UNION is optimal in and which scenarios OR would be best. I will analyze this with a few scenarios using samples taken from the AdventureWorks database Sales.SalesOrderDetail table. Scenario 1: Selecting all columns So we are going to select all columns, and you have a non-clustered index on the ProductID column. --Query 1 : OR SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714 OR ProductID =709 OR ProductID =998 OR ProductID =875 OR ProductID =976 OR ProductID =874 --Query 2 : UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714 UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 709 UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 998 UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 875 UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 976 UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 874 So Query 1 is using OR and the latter is using UNION. Let us analyze the execution plans for these queries. Query 1 Query 2 As expected, Query 1 uses a Clustered Index Scan, but Query 2 uses all sorts of things. In this case, since it is using multiple CPUs, you might have CXPACKET waits as well. Let’s look at the profiler results for these two queries: OR: CPU 78, Reads 1252, Duration 389, Row Count 3854; UNION: CPU 250, Reads 7495, Duration 660, Row Count 3854. You can see from these figures that the UNION query is not performing as well as the OR query, though both are returning the same number of rows (3854). These results indicate that, for the above scenario, OR should be used. Scenario 2: Non-Clustered and Clustered Index Columns only --Query 1 : OR SELECT ProductID,SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 714 OR ProductID =709 OR ProductID =998 OR ProductID =875 OR ProductID =976 OR ProductID =874 GO --Query 2 : UNION SELECT ProductID,SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 714 UNION SELECT ProductID,SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 709 UNION SELECT ProductID,SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 998 UNION SELECT ProductID,SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 875 UNION SELECT ProductID,SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 976 UNION SELECT ProductID,SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 874 GO So this time we will be selecting only index columns, which means these queries will avoid a data page lookup. As in the previous case, we will analyze the execution plans: Query 1 Query 2 Again, Query 2 is more complex than Query 1. Let us look at the profiler analysis: OR: CPU 0, Reads 24, Duration 208, Row Count 3854; UNION: CPU 0, Reads 38, Duration 193, Row Count 3854. In this analysis there is only a slight difference between OR and UNION. Scenario 3: Selecting all columns for different fields Up to now we have been using only one column (ProductID) in the where clause. What if we have two columns in the where clause? Let us assume both are covered by non-clustered indexes.
--Query 1 : OR SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714 OR CarrierTrackingNumber LIKE 'D0B8%' --Query 2 : UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714 UNION SELECT * FROM Sales.SalesOrderDetail WHERE CarrierTrackingNumber LIKE 'D0B8%' Query 1 Query 2 As we can see, the query plan for the second query has improved. Let us see the profiler results: OR: CPU 47, Reads 1278, Duration 443, Row Count 1228; UNION: CPU 31, Reads 1334, Duration 400, Row Count 1228. So in this case too, there is little difference between OR and UNION. Scenario 4: Selecting Clustered index columns for different fields Now let us go only with clustered indexes: --Query 1 : OR SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714 OR CarrierTrackingNumber LIKE 'D0B8%' --Query 2 : UNION SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714 UNION SELECT * FROM Sales.SalesOrderDetail WHERE CarrierTrackingNumber LIKE 'D0B8%' Query 1 Query 2 Now both execution plans are almost identical, except that an additional Stream Aggregate is used in the first query. This means UNION has an advantage over OR in this scenario. Let us see the profiler results for these queries again: OR: CPU 0, Reads 319, Duration 366, Row Count 1228; UNION: CPU 0, Reads 50, Duration 193, Row Count 1228. Now see the difference: in this scenario UNION has somewhat of an advantage over OR. Conclusion Whether to use UNION or OR depends on the scenario you are faced with, so you need to do your own analysis before selecting the appropriate method. Also, the above four scenarios are not an exhaustive list; I selected them for broad description purposes only.
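    One variation not measured in the article, but worth including in this kind of comparison: when the branches cannot return the same row, as with the equality predicates on ProductID in Scenario 1, UNION ALL produces the same result as UNION while skipping the duplicate-elimination (sort/aggregate) work noted above.

```sql
-- Variation on Query 2 from Scenario 1: the ProductID branches are disjoint,
-- so UNION ALL returns the same rows without a distinct sort or stream aggregate.
SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714
UNION ALL
SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 709
UNION ALL
SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 998;
```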

    Read the article

  • How to avoid the "divide by zero" error in SQL?

    - by Henrik Staun Poulsen
    I hate this error message: Msg 8134, Level 16, State 1, Line 1 Divide by zero error encountered. What is the best way to write SQL code so that I will never see this error message again? I mean, I could add a where clause so that my divisor is never zero. Or I could add a case statement so that there is special treatment for zero. Is the best way to use a NULLIF clause? Is there a better way, or how can this be enforced?
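    For reference, a hedged sketch of the NULLIF pattern the question mentions (the table and columns are made up): NULLIF turns a zero divisor into NULL, so the division returns NULL instead of raising Msg 8134, and ISNULL/COALESCE can map that NULL back to a default.

```sql
-- Sketch only: dbo.Sales and its columns are hypothetical.
SELECT OrderId,
       Amount / NULLIF(Quantity, 0)             AS UnitPrice,        -- NULL when Quantity = 0
       ISNULL(Amount / NULLIF(Quantity, 0), 0)  AS UnitPriceOrZero   -- 0 when Quantity = 0
FROM dbo.Sales;
```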

    Read the article

  • Dynamic openrowset in T-Sql Function or viable alternative?

    - by IronicMuffin
    I'm not quite sure how to phrase this. Here is the problem: I have 1-n items that I need to join to a different system (AS400) to get some data. The openrowset takes forever if I specify the where criteria outside of the openrowset, e.g.: select * from openrowset('my connection string', 'select code, myfield from myTable') where code = @code My idea was to create a function that takes in the item number and uses dynamic sql to inject it into the openrowset string, a la: declare @cmd varchar(1000) set @cmd = 'select * from openrowset('my connection string', ''select code, myfield from myTable where code = ' + @code + ''')' Apparently I can't use the insert.. exec.. strategy inside of a function. Is there any better way to achieve this? I was going to use this in joins where I needed the external data using cross apply. I'm not married to tvf and cross apply, but I do need a method of getting this data quickly. Thanks for any help.
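    Since OPENROWSET only accepts literal strings and INSERT ... EXEC is not allowed inside a function, one hedged alternative is to move the lookup into a stored procedure that builds the statement dynamically and captures the result into a temp table; the connection string, object names, and column types below are placeholders, not a definitive implementation.

```sql
-- Sketch only: the two-argument OPENROWSET form is copied from the question as a
-- placeholder, and @code is assumed not to contain quote characters (guard
-- against that in real code).
CREATE PROCEDURE dbo.GetRemoteItem
    @code varchar(20)
AS
BEGIN
    DECLARE @cmd nvarchar(1000);

    -- OPENROWSET requires literal strings, so the whole statement is built here
    -- and the item number spliced in before execution.
    SET @cmd = N'SELECT * FROM OPENROWSET(''my connection string'',
        ''SELECT code, myfield FROM myTable WHERE code = ''''{code}'''''')';
    SET @cmd = REPLACE(@cmd, N'{code}', @code);

    -- INSERT ... EXEC is allowed in a procedure (unlike in a function).
    CREATE TABLE #remote (code varchar(20), myfield varchar(100));
    INSERT INTO #remote EXEC (@cmd);

    SELECT code, myfield FROM #remote;
END
```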

    Read the article

  • Generated LinqtoSql Sql 5x slower than SAME EXACT hand-written sql

    - by JasonM
    I have a sql statement which is hardcoded in an existing VB6 app. I'm upgrading a new version in C# and using Linq To Sql. I was able to get LinqToSql to generate the same sql (before I start refactoring), but for some reason the Sql generated by LinqToSql is 5x slower than the original sql. This is running the generated Sql Directly in LinqPad. The only real difference my meager sql eyes can spot is the WITH (NOLOCK), which if I add into the LinqToSql generated sql, makes no difference. Can someone point out what I'm doing wrong here? Thanks! Existing Hard Coded Sql (5.0 Seconds) SELECT DISTINCT CH.ClaimNum, CH.AcnProvID, CH.AcnPatID, CH.TinNum, CH.Diag1, CH.GroupNum, CH.AllowedTotal FROM Claims.dbo.T_ClaimsHeader AS CH WITH (NOLOCK) WHERE CH.ContractID IN ('123A','123B','123C','123D','123E','123F','123G','123H') AND ( ( (CH.Transmited Is Null or CH.Transmited = '') AND CH.DateTransmit Is Null AND CH.EobDate Is Null AND CH.ProcessFlag IN ('Y','E') AND CH.DataSource NOT IN ('A','EC','EU') AND CH.AllowedTotal > 0 ) ) ORDER BY CH.AcnPatID, CH.ClaimNum Generated Sql from LinqToSql (27.6 Seconds) -- Region Parameters DECLARE @p0 NVarChar(4) SET @p0 = '123A' DECLARE @p1 NVarChar(4) SET @p1 = '123B' DECLARE @p2 NVarChar(4) SET @p2 = '123C' DECLARE @p3 NVarChar(4) SET @p3 = '123D' DECLARE @p4 NVarChar(4) SET @p4 = '123E' DECLARE @p5 NVarChar(4) SET @p5 = '123F' DECLARE @p6 NVarChar(4) SET @p6 = '123G' DECLARE @p7 NVarChar(4) SET @p7 = '123H' DECLARE @p8 VarChar(1) SET @p8 = '' DECLARE @p9 NVarChar(1) SET @p9 = 'Y' DECLARE @p10 NVarChar(1) SET @p10 = 'E' DECLARE @p11 NVarChar(1) SET @p11 = 'A' DECLARE @p12 NVarChar(2) SET @p12 = 'EC' DECLARE @p13 NVarChar(2) SET @p13 = 'EU' DECLARE @p14 Decimal(5,4) SET @p14 = 0 -- EndRegion SELECT DISTINCT [t0].[ClaimNum], [t0].[acnprovid] AS [AcnProvID], [t0].[acnpatid] AS [AcnPatID], [t0].[tinnum] AS [TinNum], [t0].[diag1] AS [Diag1], [t0].[GroupNum], [t0].[allowedtotal] AS [AllowedTotal] FROM [Claims].[dbo].[T_ClaimsHeader] AS [t0] WHERE ([t0].[contractid] IN (@p0, @p1, @p2, @p3, @p4, @p5, @p6, @p7)) AND (([t0].[Transmited] IS NULL) OR ([t0].[Transmited] = @p8)) AND ([t0].[DATETRANSMIT] IS NULL) AND ([t0].[EOBDATE] IS NULL) AND ([t0].[PROCESSFLAG] IN (@p9, @p10)) AND (NOT ([t0].[DataSource] IN (@p11, @p12, @p13))) AND ([t0].[allowedtotal] > @p14) ORDER BY [t0].[acnpatid], [t0].[ClaimNum] New LinqToSql Code (30+ seconds... Times out ) var contractIds = T_ContractDatas.Where(x => x.EdiSubmissionGroupID == "123-01").Select(x => x.CONTRACTID).ToList(); var processFlags = new List<string> {"Y","E"}; var dataSource = new List<string> {"A","EC","EU"}; var results = (from claims in T_ClaimsHeaders where contractIds.Contains(claims.contractid) && (claims.Transmited == null || claims.Transmited == string.Empty ) && claims.DATETRANSMIT == null && claims.EOBDATE == null && processFlags.Contains(claims.PROCESSFLAG) && !dataSource.Contains(claims.DataSource) && claims.allowedtotal > 0 select new { ClaimNum = claims.ClaimNum, AcnProvID = claims.acnprovid, AcnPatID = claims.acnpatid, TinNum = claims.tinnum, Diag1 = claims.diag1, GroupNum = claims.GroupNum, AllowedTotal = claims.allowedtotal }).OrderBy(x => x.ClaimNum).OrderBy(x => x.AcnPatID).Distinct(); I'm using the list of constants above to make LinqToSql Generate IN ('xxx','xxx',etc) Otherwise it uses subqueries which are just as slow...
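    A hedged way to compare the two statements on equal footing (a diagnostic sketch, not the poster's code): run both with I/O and timing statistics on, and isolate the one visible difference the generated version introduces, namely that every parameter is declared as NVarChar, which can behave differently against varchar columns.

```sql
-- Diagnostic sketch only: measure both statements the same way.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Hypothesis to test (not confirmed by the post): if the compared columns are
-- varchar, the NVarChar parameters force implicit conversions, so try the
-- generated SELECT again with varchar parameters, e.g.:
DECLARE @p0 varchar(4);
SET @p0 = '123A';
-- ...repeat for the remaining parameters, then compare logical reads / CPU time.

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```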

    Read the article

  • XML deserialization doubling up on entities

    - by Nathan Loding
    I have an XML file that I am attempting to deserialize into it's respective objects. It works great on most of these objects, except for one item that is being doubled up on. Here's the relevant portion of the XML: <Clients> <Client Name="My Company" SiteID="1" GUID="xxx-xxx-xxx-xxx"> <Reports> <Report Name="First Report" Path="/Custom/FirstReport"> <Generate>true</Generate> </Report> </Reports> </Client> </Clients> "Clients" is a List<Client> object. Each Client object has a List<Report> object within it. The issue is that when this XML is deserialized, the List<Report> object has a count of 2 -- the "First Report" Report object is in there twice. Why? Here's the C#: public class Client { [System.Xml.Serialization.XmlArray("Reports"), System.Xml.Serialization.XmlArrayItem(typeof(Report))] public List<Report> Reports; } public class Report { [System.Xml.Serialization.XmlAttribute("Name")] public string Name; public bool Generate; [System.Xml.Serialization.XmlAttribute("Path")] public string Path; } class Program { static void Main(string[] args) { List<Client> _clients = new List<Client>(); string xmlFile = "myxmlfile.xml"; System.Xml.Serialization.XmlSerializer xmlSerializer = new System.Xml.Serialization.XmlSerializer(typeof(List<Client>), new System.Xml.Serialization.XmlRootAttribute("Clients")); using (FileStream stream = new FileStream(xmlFile, FileMode.Open)) { _clients = xmlSerializer.Deserialize(stream) as List<Client>; } foreach(Client _client in _clients) { Console.WriteLine("Count: " + _client.Reports.Count); // This write "2" foreach(Report _report in _client.Reports) { Console.WriteLine("Name: " + _report.Name); // Writes "First Report" twice } } } }

    Read the article

  • SQL Developer Debugging, Watches, Smart Data, & Data

    - by thatjeffsmith
    After presenting the SQL Developer PL/SQL debugger for about an hour yesterday at KScope12 in San Antonio, my boss came up and asked, “Now, would you really want to know what the Smart Data panel does?” Apparently I had ‘made up’ my own story about what that panel’s intent is based on my experience with it. Not good Jeff, not good. It was a very small point of my presentation, but I probably should have read the docs. The Smart Data tab displays information about variables, using your Debugger: Smart Data preferences. You can also specify these preferences by right-clicking in the Smart Data window and selecting Preferences. Debugger Smart Data Preferences, control number of variables to display The Smart Data panel auto-inspects the last X accessed variables. So if you have a program with 26 variables, instead of showing you all 26, it will just show you the last two variables that were referenced in your program. If you were to click on the ‘Data’ debug panel, you’ll see EVERYTHING. And if you only want to see a very specific set of values, then you should use Watches. The Smart Data Panel As I step through the code, the variables being tracked change as they are referenced. Only the most recent ones display. This is controlled by the ‘Maximum Locations to Remember’ preference. Step through the code, see the latest variables accessed The Data Panel All variables are displayed. Might be information overload on large PL/SQL programs where you have many dozens or even hundreds of variables to track. Shows everything all the time Watches Watches are added manually and only show what you ask for. Data on Demand – add a watch to track a specific variable Remember, you can interact with your data If you want to do more than just watch, you can mouse-right on a data element, and change the value of the variable as the program is running. This is one of the primary benefits to debugging over using DBMS_OUTPUT to track what’s happening in your program. Change the values while the program is running to test your ‘What if?’ scenarios

    Read the article

  • Using a .MDF SQL Server Database with ASP.NET Versus Using SQL Server

    - by Maxim Z.
    I'm currently writing a website in ASP.NET MVC, and my database (which doesn't have any data in it yet, it only has the correct tables) uses SQL Server 2008, which I have installed on my development machine. I connect to the database out of my application by using the Server Explorer, followed by LINQ to SQL mapping. Once I finish developing the site, I will move it over to my hosting service, which is a virtual hosting plan. I'm concerned about whether using the SQL Server setup that is currently working on my development machine will be hard to do on the production server, as I'll have to import all the database tables through the hosting control panel. I've noticed that it is possible to create a SQL Server database from inside Visual Studio. It is then stored in the App_Data directory. My questions are the following: Does it make sense to move my SQL Server DB out of SQL Server and into the App_Data directory as an .mdf file? If so, how can I move it? I believe this is called the Detach command, is it not? Are there any performance/security issues that can occur with a .mdf file like this? Would my intended setup work OK with a typical virtual hosting plan? I'm hoping that the .mdf database won't count against the limited number of SQL Server databases that can be created with my plan. I hope this question isn't too broad. Thanks in advance! Note: I'm just starting out with ASP.NET MVC and all this, so I might be completely misunderstanding how this is supposed to work.
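    For reference, a hedged sketch of the detach/attach steps the question alludes to (the database name and file paths are placeholders); an App_Data deployment would then typically point its connection string at the copied .mdf via AttachDbFilename rather than attaching it permanently on the server.

```sql
-- Sketch only: database name and file paths are illustrative.
USE master;
EXEC sp_detach_db @dbname = N'MySiteDb';  -- releases the .mdf/.ldf so the files can be copied

-- To re-attach the copied files on another SQL Server instance:
CREATE DATABASE MySiteDb
ON (FILENAME = N'C:\Sites\MySite\App_Data\MySiteDb.mdf'),
   (FILENAME = N'C:\Sites\MySite\App_Data\MySiteDb_log.ldf')
FOR ATTACH;
```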

    Read the article

  • SQL, moving million records from a database to other database [migrated]

    - by Ryoma
    I am a C# developer and I am not really good with SQL. I have a simple question here. I need to move more than 50 million records from one database to another. I tried to use the import function in MS SQL, but it got stuck because the log was full (I got an error message: The transaction log for database 'mydatabase' is full due to 'LOG_BACKUP'). The database recovery model was set to simple. My friend said that importing millions of records using Task > Import Data will cause the log to become massive, and told me to use a loop instead to transfer the data. Does anyone know how and why? Thanks in advance.
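    On the "use a loop" suggestion, a hedged sketch of a batched transfer: each iteration is its own small transaction, so the log never has to hold the entire 50-million-row insert at once. All object names and the batch size are placeholders.

```sql
-- Sketch only: table, column and batch-size values are illustrative, and it
-- assumes the source table has an ever-increasing key column (Id).
DECLARE @BatchSize int, @LastId bigint, @Rows int;
SET @BatchSize = 50000;
SET @LastId = 0;
SET @Rows = 1;

WHILE @Rows > 0
BEGIN
    INSERT INTO TargetDb.dbo.BigTable (Id, Col1, Col2)
    SELECT TOP (@BatchSize) Id, Col1, Col2
    FROM SourceDb.dbo.BigTable
    WHERE Id > @LastId
    ORDER BY Id;

    SET @Rows = @@ROWCOUNT;

    -- Each iteration commits separately, so under the simple recovery model the
    -- log space can be reused after a checkpoint instead of growing to hold one
    -- giant transaction.
    IF @Rows > 0
        SELECT @LastId = MAX(Id) FROM TargetDb.dbo.BigTable;
END;
```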

    Read the article

  • Migrating MOSS 2007 from SQL 2000 to SQL 2005 - Addendum

    - by lunacrescens
    This is a continuation of an earlier question I had about moving the databases for a MOSS 2007 installation from SQL 2000 to SQL 2005. Here's the URL for the original question: http://stackoverflow.com/questions/254517/migrating-moss-2007-from-sql-2000-to-sql-2005 In my test environment, I've successfully moved the databases to the SQL 2005 test machine and things appear to be working fine. But, on the "Servers in Farm" page of Central Admin | Operations, it still shows the old (i.e. SQL 2000) server as the Configuration Database Server. Also, it shows the old config database as being the Configuration Database. I know that the SQL 2000 server and old config database (that are showing on this page) are NOT being used, because we've deactivated the SQL instance on SQL 2000. I've tried "removing" the server, and get a message about "Uninstalling SharePoint products and technologies" being the better route. So, I disconnected from the test databases, uninstalled SharePoint from the test WFE server, and reinstalled it. That didn't do anything. Before uninstalling/reinstalling I also tried simply rerunning the SharePoint Configuration wizard, and that didn't do anything either. Does anyone know how to update the Config Server and Config Database on the "Servers in Farm" page after having moved the Config and Content DBs? Is there something I'm missing or overlooking? Thanks.

    Read the article

  • sql server 2008 insert statement question

    - by user61752
    I am learning SQL Server 2008 T-SQL. To insert a varchar value, I just need to insert a string such as 'abc', but for the nvarchar type I need to add an N in front (N'abc'). I have a table, employee, with two fields, firstname and lastname, which are both nvarchar(20). insert into employee values('abc', 'def'); I tested it and it works, so it seems the N is not required. Why do we need to add the N in front for the nvarchar type, and what are the pros and cons of not using it?
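    The N matters as soon as the string contains characters outside the server's default code page; a hedged illustration follows (the table variable and the Greek sample text are arbitrary, and the behavior described assumes a typical Latin1 collation).

```sql
-- Illustration only: the column is nvarchar either way, but the literal's type
-- decides whether Unicode survives. On a Latin1 code page the plain varchar
-- literal cannot represent the Greek letters, so they are stored as '?'.
DECLARE @t TABLE (firstname nvarchar(20));

INSERT INTO @t VALUES ('Σπύρος');   -- arrives as a varchar literal
INSERT INTO @t VALUES (N'Σπύρος');  -- arrives as an nvarchar (Unicode) literal

SELECT firstname FROM @t;
```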

    Read the article

  • Read XML Files using LINQ to XML and Extension Methods

    - by psheriff
    In previous blog posts I have discussed how to use XML files to store data in your applications. I showed you how to read those XML files from your project and get XML from a WCF service. One of the problems with reading XML files is when elements or attributes are missing. If you try to read that missing data, then a null value is returned. This can cause a problem if you are trying to load that data into an object and a null is read. This blog post will show you how to create extension methods to detect null values and return valid values to load into your object. The XML Data An XML data file called Product.xml is located in the \Xml folder of the Silverlight sample project for this blog post. This XML file contains several rows of product data that will be used in each of the samples for this post. Each row has 4 attributes; namely ProductId, ProductName, IntroductionDate and Price. <Products>  <Product ProductId="1"           ProductName="Haystack Code Generator for .NET"           IntroductionDate="07/01/2010"  Price="799" />  <Product ProductId="2"           ProductName="ASP.Net Jumpstart Samples"           IntroductionDate="05/24/2005"  Price="0" />  ...  ...</Products> The Product Class Just as you create an Entity class to map each column in a table to a property in a class, you should do the same for an XML file too. In this case you will create a Product class with properties for each of the attributes in each element of product data. The following code listing shows the Product class. public class Product : CommonBase{  public const string XmlFile = @"Xml/Product.xml";   private string _ProductName;  private int _ProductId;  private DateTime _IntroductionDate;  private decimal _Price;   public string ProductName  {    get { return _ProductName; }    set {      if (_ProductName != value) {        _ProductName = value;        RaisePropertyChanged("ProductName");      }    }  }   public int ProductId  {    get { return _ProductId; }    set {      if (_ProductId != value) {        _ProductId = value;        RaisePropertyChanged("ProductId");      }    }  }   public DateTime IntroductionDate  {    get { return _IntroductionDate; }    set {      if (_IntroductionDate != value) {        _IntroductionDate = value;        RaisePropertyChanged("IntroductionDate");      }    }  }   public decimal Price  {    get { return _Price; }    set {      if (_Price != value) {        _Price = value;        RaisePropertyChanged("Price");      }    }  }} NOTE: The CommonBase class that the Product class inherits from simply implements the INotifyPropertyChanged event in order to inform your XAML UI of any property changes. You can see this class in the sample you download for this blog post. Reading Data When using LINQ to XML you call the Load method of the XElement class to load the XML file. Once the XML file has been loaded, you write a LINQ query to iterate over the “Product” Descendants in the XML file. The “select” portion of the LINQ query creates a new Product object for each row in the XML file. You retrieve each attribute by passing each attribute name to the Attribute() method and retrieving the data from the “Value” property. The Value property will return a null if there is no data, or will return the string value of the attribute. The Convert class is used to convert the value retrieved into the appropriate data type required by the Product class. 
private void LoadProducts(){  XElement xElem = null;   try  {    xElem = XElement.Load(Product.XmlFile);     // The following will NOT work if you have missing attributes    var products =         from elem in xElem.Descendants("Product")        orderby elem.Attribute("ProductName").Value        select new Product        {          ProductId = Convert.ToInt32(            elem.Attribute("ProductId").Value),          ProductName = Convert.ToString(            elem.Attribute("ProductName").Value),          IntroductionDate = Convert.ToDateTime(            elem.Attribute("IntroductionDate").Value),          Price = Convert.ToDecimal(elem.Attribute("Price").Value)        };     lstData.DataContext = products;  }  catch (Exception ex)  {    MessageBox.Show(ex.Message);  }} This is where the problem comes in. If you have any missing attributes in any of the rows in the XML file, or if the data in the ProductId or IntroductionDate is not of the appropriate type, then this code will fail! The reason? There is no built-in check to ensure that the correct type of data is contained in the XML file. This is where extension methods can come in real handy. Using Extension Methods Instead of using the Convert class to perform type conversions as you just saw, create a set of extension methods attached to the XAttribute class. These extension methods will perform null-checking and ensure that a valid value is passed back instead of an exception being thrown if there is invalid data in your XML file. private void LoadProducts(){  var xElem = XElement.Load(Product.XmlFile);   var products =       from elem in xElem.Descendants("Product")      orderby elem.Attribute("ProductName").Value      select new Product      {        ProductId = elem.Attribute("ProductId").GetAsInteger(),        ProductName = elem.Attribute("ProductName").GetAsString(),        IntroductionDate =            elem.Attribute("IntroductionDate").GetAsDateTime(),        Price = elem.Attribute("Price").GetAsDecimal()      };   lstData.DataContext = products;} Writing Extension Methods To create an extension method you will create a class with any name you like. In the code listing below is a class named XmlExtensionMethods. This listing just shows a couple of the available methods such as GetAsString and GetAsInteger. These methods are just like any other method you would write except when you pass in the parameter you prefix the type with the keyword “this”. This lets the compiler know that it should add this method to the class specified in the parameter. public static class XmlExtensionMethods{  public static string GetAsString(this XAttribute attr)  {    string ret = string.Empty;     if (attr != null && !string.IsNullOrEmpty(attr.Value))    {      ret = attr.Value;    }     return ret;  }   public static int GetAsInteger(this XAttribute attr)  {    int ret = 0;    int value = 0;     if (attr != null && !string.IsNullOrEmpty(attr.Value))    {      if(int.TryParse(attr.Value, out value))        ret = value;    }     return ret;  }   ...  ...} Each of the methods in the XmlExtensionMethods class should inspect the XAttribute to ensure it is not null and that the value in the attribute is not null. If the value is null, then a default value will be returned such as an empty string or a 0 for a numeric value. Summary Extension methods are a great way to simplify your code and provide protection to ensure problems do not occur when reading data. 
You will probably want to create more extension methods to handle XElement objects as well for when you use element-based XML. Feel free to extend these extension methods to accept a parameter which would be the default value if a null value is detected, or any other parameters you wish. NOTE: You can download the complete sample code at my website. http://www.pdsa.com/downloads. Choose “Tips & Tricks”, then "Read XML Files using LINQ to XML and Extension Methods" from the drop-down. Good Luck with your Coding,Paul D. Sheriff  

    Read the article

  • JBoss AS: use .xml files in the properties-service.xml

    - by fgysin
    The properties service (configured in properties-service.xml) in JBoss application server lets you specify external .properties files that are loaded and can then be accessed as system properties from the deployed applications. (See here http://community.jboss.org/wiki/PropertiesService for more info...) Is it also possible to load config files in the .xml format instead of .properties? I know it is possible for certain given configs like for example the mail-service.xml and the jboss-log4j.xml... But they are both loaded directly by JBoss, and not via the properties service.

    Read the article

  • Proper Settings for SQL Server 2005 Profiler User Auditing

    - by Irwin M. Fletcher
    I have been tasked with creating a catalog of all users\applications that access a particular database at our company. The issue is that we have no idea which users are using the database or what they are doing (what queries or stored procedures they are running). I would like to run this over a few days but want to minimize the amount of information that is returned in the trace. Could anyone please advise me on the minimal Profiler settings for this?
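    As a lighter-weight complement to a multi-day trace, a hedged sketch that snapshots who is connected to the database right now; the database name is a placeholder, and because this is only a point-in-time view it would need to be run on a schedule with the results logged somewhere.

```sql
-- Sketch only: lists the login, host, and application for sessions whose current
-- database is the one being audited; 'MyDatabase' is a placeholder name.
SELECT DISTINCT
       loginame,
       hostname,
       program_name
FROM   master.dbo.sysprocesses
WHERE  dbid = DB_ID(N'MyDatabase');
```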

    Read the article

  • Deleted user default database - SQL Server 2008

    - by RadiantHex
    Hi folks, I cannot connect to the database anymore; I'm getting: Cannot open user default database. Login failed. I deleted the database during a previous session and then tried to recreate it, but the recreate failed. Now I am stuck with this error. What can I do? Edit: I'm using Windows Authentication. Any ideas? Fixed: use the command: sqlcmd -E -d master then type: ALTER LOGIN [Your Windows Login] WITH DEFAULT_DATABASE=master GO :)
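    A small hedged follow-up once access is restored (a sketch, not part of the original fix): list any other logins whose default database no longer exists, so they can be repointed the same way.

```sql
-- Sketch only: find logins whose default database has been dropped.
SELECT p.name, p.default_database_name
FROM   sys.server_principals AS p
WHERE  p.type IN ('S', 'U', 'G')             -- SQL logins, Windows logins/groups
  AND  DB_ID(p.default_database_name) IS NULL;
```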

    Read the article
