Search Results

Search found 41565 results on 1663 pages for 'sql xml'.


  • Fastest way to remove non-numeric characters from a VARCHAR in SQL Server

    - by Dan Herbert
    I'm writing an import utility that uses phone numbers as a unique key within the import. I need to check that the phone number does not already exist in my DB. The problem is that phone numbers in the DB could contain things like dashes and parentheses and possibly other characters. I wrote a function to remove these things; the problem is that it is slow, and with thousands of records in my DB and thousands of records to import at once, this process can be unacceptably slow. I've already made the phone number column an index.

    I tried using the script from this post: http://stackoverflow.com/questions/52315/t-sql-trim-nbsp-and-other-non-alphanumeric-characters but that didn't speed it up any.

    Is there a faster way to remove non-numeric characters? Something that can perform well when 10,000 to 100,000 records have to be compared. Whatever is done needs to perform fast.

    Update: Given what people responded with, I think I'm going to have to clean the fields before I run the import utility. To answer the question of what I'm writing the import utility in: it is a C# app. I'm comparing BIGINT to BIGINT now, with no need to alter DB data, and I'm still taking a performance hit with a very small set of data (about 2,000 records). Could comparing BIGINT to BIGINT be slowing things down? I've optimized the code side of my app as much as I can (removed regexes, removed unnecessary DB calls). Although I can't isolate SQL as the source of the problem anymore, I still feel like it is.
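    A common T-SQL approach (a sketch; the function name and the staging table are hypothetical) is to scrub the digits once during staging rather than inside every comparison, so the indexed column is compared directly:

        CREATE FUNCTION dbo.fn_StripNonNumeric (@input VARCHAR(50))
        RETURNS VARCHAR(50)
        AS
        BEGIN
            -- Remove the first non-digit character until none remain.
            WHILE PATINDEX('%[^0-9]%', @input) > 0
                SET @input = STUFF(@input, PATINDEX('%[^0-9]%', @input), 1, '');
            RETURN @input;
        END
        GO

        -- One-time cleanup of a hypothetical staging table, so the import
        -- compares clean values against clean values.
        UPDATE dbo.ImportStaging
        SET Phone = dbo.fn_StripNonNumeric(Phone);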


  • Concatenate row values T-SQL

    - by Robert
    I am trying to pull together some data for a report and need to concatenate the row values of one of the tables. Here is the basic table structure:

        Reviews:   ReviewID, ReviewDate
        Reviewers: ReviewerID, ReviewID, UserID
        Users:     UserID, FName, LName

    This is a M:M relationship. Each Review can have many Reviewers; each User can be associated with many Reviews. Basically, all I want to see is Reviews.ReviewID, Reviews.ReviewDate, and a comma-delimited string of the FNames of all the Users associated with that Review. Instead of:

        ReviewID   ReviewDate   User
        1          12/1/2009    Bob
        1          12/1/2009    Joe
        1          12/1/2009    Frank
        2          12/9/2009    Sue
        2          12/9/2009    Alice

    display this:

        ReviewID   ReviewDate   Users
        1          12/1/2009    Bob, Joe, Frank
        2          12/9/2009    Sue, Alice

    I have found this article describing some ways to do this, but most of them seem to deal with only one table, not multiple; unfortunately, my SQL-fu is not strong enough to adapt them to my circumstances. I am particularly interested in the example on that site which utilizes FOR XML PATH(), as that looks the cleanest and most straightforward:

        SELECT p1.CategoryId,
               (SELECT ProductName + ', '
                FROM Northwind.dbo.Products p2
                WHERE p2.CategoryId = p1.CategoryId
                ORDER BY ProductName
                FOR XML PATH('')) AS Products
        FROM Northwind.dbo.Products p1
        GROUP BY CategoryId;

    Can anyone give me a hand with this? Any help would be greatly appreciated!
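    One way to adapt that pattern to the tables described above (a sketch against the stated structure): correlate the inner FOR XML PATH query on ReviewID and strip the leading separator with STUFF:

        SELECT r.ReviewID,
               r.ReviewDate,
               STUFF((SELECT ', ' + u.FName
                      FROM Reviewers rv
                      INNER JOIN Users u ON u.UserID = rv.UserID
                      WHERE rv.ReviewID = r.ReviewID
                      ORDER BY u.FName
                      FOR XML PATH('')), 1, 2, '') AS Users  -- STUFF removes the leading ', '
        FROM Reviews r;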


  • Can I select 0 columns in SQL Server?

    - by Woody Zenfell III
    I am hoping this question fares a little better than the similar "Create a table without columns". Yes, I am asking about something that will strike most as pointlessly academic.

    It is easy to produce a SELECT result with 0 rows (but with columns), e.g.:

        SELECT a = 1 WHERE 1 = 0

    Is it possible to produce a SELECT result with 0 columns (but with rows)? E.g. something like:

        SELECT NO COLUMNS FROM Foo  -- not valid T-SQL

    I came across this because I wanted to insert several rows without specifying any column data for any of them, e.g. (SQL Server 2005):

        CREATE TABLE Bar (id INT NOT NULL IDENTITY PRIMARY KEY)

        INSERT INTO Bar SELECT NO COLUMNS FROM Foo
        -- Invalid column name 'NO'.
        -- An explicit value for the identity column in table 'Bar' can only be
        -- specified when a column list is used and IDENTITY_INSERT is ON.

    One can insert a single row without specifying any column data, e.g. INSERT INTO Foo DEFAULT VALUES. One can query for a count of rows (without retrieving actual column data from the table), e.g. SELECT COUNT(*) FROM Foo. (But that result set, of course, has a column.) I tried things like:

        INSERT INTO Bar () SELECT * FROM Foo
        -- Parameters supplied for object 'Bar' which is not a function.
        -- If the parameters are intended as a table hint, a WITH keyword is required.

    and

        INSERT INTO Bar DEFAULT VALUES SELECT * FROM Foo
        -- which is a standalone INSERT statement followed by a standalone SELECT statement.

    I can do what I need to do a different way, but the apparent lack of consistency in support for degenerate cases surprises me. I read through the relevant sections of BOL and didn't see anything. I was surprised to come up with nothing via Google either.
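    For the underlying goal (one identity-only row in Bar per row in Foo), a workaround sketch that sidesteps the empty column list, assuming nothing else writes to Bar concurrently:

        -- Since the column list can't be empty, supply the identity values
        -- explicitly: one new id per row in Foo.
        SET IDENTITY_INSERT Bar ON;

        INSERT INTO Bar (id)
        SELECT ISNULL((SELECT MAX(id) FROM Bar), 0)
               + ROW_NUMBER() OVER (ORDER BY (SELECT 1))
        FROM Foo;

        SET IDENTITY_INSERT Bar OFF;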


  • Parsing XML with the PHP XMLReader

    - by coffeecoder
    Hi guys, I am writing a program that reads some XML from the $_POST variable and then parses it using the PHP XMLReader; the extracted data is inserted into a database. I am using the XMLReader because the XML supplied will more than likely be too big to hold in memory. However, I am having some issues. My XML and basic code are as follows:

        <?xml version="1.0"?>
        <data_root>
            <data>
                <info>value</info>
            </data>
            <action>value</action>
        </data_root>

        $request = $_REQUEST['xml'];
        $reader = new XMLReader();
        $reader->XML($request);
        while ($reader->read()) {
            // processing code
        }
        $reader->close();

    My problem is that the code works perfectly if the XML being passed does not have the <?xml version="1.0"?> line. But if I include it (and it will be included when the application goes into a live production environment), the $reader->read() call in the while loop does not work and the XML is not parsed. Has anyone seen similar behaviour before, or does anyone know why this could be happening? Thanks in advance.


  • How to convert SQLite code to plain SQL Server commands in C#

    - by Nasser Hajloo
    I want to get started with the DayPilot control. I do not use SQLite, and this control is documented based on SQLite. I want to use SQL Server instead of SQLite, so if you can, please do this conversion for me. The main site with samples is http://www.daypilot.org/calendar-tutorial.html

    The database contains a single table with the following structure:

        CREATE TABLE event (
            id VARCHAR(50),
            name VARCHAR(50),
            eventstart DATETIME,
            eventend DATETIME);

    Loading events:

        private DataTable dbGetEvents(DateTime start, int days)
        {
            SQLiteDataAdapter da = new SQLiteDataAdapter(
                "SELECT [id], [name], [eventstart], [eventend] FROM [event] WHERE NOT (([eventend] <= @start) OR ([eventstart] >= @end))",
                ConfigurationManager.ConnectionStrings["db"].ConnectionString);
            da.SelectCommand.Parameters.AddWithValue("start", start);
            da.SelectCommand.Parameters.AddWithValue("end", start.AddDays(days));
            DataTable dt = new DataTable();
            da.Fill(dt);
            return dt;
        }

    Update:

        private void dbUpdateEvent(string id, DateTime start, DateTime end)
        {
            using (SQLiteConnection con = new SQLiteConnection(ConfigurationManager.ConnectionStrings["db"].ConnectionString))
            {
                con.Open();
                SQLiteCommand cmd = new SQLiteCommand("UPDATE [event] SET [eventstart] = @start, [eventend] = @end WHERE [id] = @id", con);
                cmd.Parameters.AddWithValue("id", id);
                cmd.Parameters.AddWithValue("start", start);
                cmd.Parameters.AddWithValue("end", end);
                cmd.ExecuteNonQuery();
            }
        }


  • Linq to SQL not inserting data into the DB

    - by Jesus Rodriguez
    Hello! I have some weird behaviour here, and looking over the internet and SO I didn't find an answer. I have to admit that this is my first time using databases; I know how to use them with SQL but have never actually used one. Anyway, I have a problem with my app inserting data. I created a very simple project for testing and still have no solution. I have an example database in SQL Server:

        Id   - int (identity, primary key)
        Name - nchar(10) (not null)

    The table is called "Person", simple as pie. I have this:

        static void Main(string[] args)
        {
            var db = new ExampleDBDataContext {Log = Console.Out};
            var jesus = new Person {Name = "Jesus"};
            db.Persons.InsertOnSubmit(jesus);
            db.SubmitChanges();

            var query = from person in db.Persons select person;
            foreach (var p in query)
            {
                Console.WriteLine(p.Name);
            }
        }

    As you can see, nothing strange. It shows "Jesus" in the console. But if you look at the table data, there is no data, just empty. If I comment out the object creation and insertion, the foreach doesn't print a thing (normal: there is no data in the database). The weird thing is that I created a row in the database manually and the Id was 2 and not 1 (was LINQ really talking to the database, but didn't it create the row?). Here is the log:

        INSERT INTO [dbo].Person VALUES (@p0)
        SELECT CONVERT(Int,SCOPE_IDENTITY()) AS [value]
        -- @p0: Input NChar (Size = 10; Prec = 0; Scale = 0) [Jesus]
        -- Context: SqlProvider(Sql2005) Model: AttributedMetaModel Build: 3.5.30729.4926

        SELECT [t0].[Id], [t0].[Name]
        FROM [dbo].[Person] AS [t0]
        -- Context: SqlProvider(Sql2005) Model: AttributedMetaModel Build: 3.5.30729.4926

    I am really confused. All the blogs / books use this kind of snippet to insert an element into a database. Thank you for helping.


  • What does SQL Server's BACKUPIO wait type mean?

    - by solublefish
    I'm using SQL Server 2008 ("R1"), with some maintenance plans that back up my databases to a network share. Some of my backup jobs show long waits of type BACKUPIO. Of course it seems like this is an I/O subsystem limitation, but I'm skeptical. Perfmon stats for I/O on the production (source) server are well within normal trends for that server. The destination server shows a sustained 7 MB/s write rate, which seems incredibly low, even for a slow disk. The network link is gigabit ethernet and nowhere near saturated.

    The few docs I've turned up about BACKUPIO indicate that it's not specifically a wait on I/O, surprisingly enough. This MSFT doc says it's abnormal unless you're using a tape drive, which I'm not, but it doesn't say (or I don't understand) exactly what resource is missing:

        http://www.docstoc.com/docs/24580659/Performance-Tuning-in-SQL-Server-2005

    And this piece says it's not related to I/O performance at all ("Note that BACKUPIO and IO_AUDIT_MUTEX are not related to IO performance."):

        http://www.informit.com/articles/article.aspx?p=686168&seqNum=5

    Anyway, does anyone know what BACKUPIO actually means and/or what I can do to diagnose or eliminate it?
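    As a starting point for the diagnosis, the cumulative wait-statistics DMV can show how BACKUPIO compares with its sibling backup waits (a sketch; the counters accumulate from server start, so interpret them against a baseline):

        -- Snapshot the backup-related waits accumulated since the last restart.
        SELECT wait_type,
               waiting_tasks_count,
               wait_time_ms,
               max_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type IN ('BACKUPIO', 'BACKUPBUFFER', 'BACKUPTHREAD')
        ORDER BY wait_time_ms DESC;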


  • Will JSON replace XML as a data format?

    - by 13ren
    When I first saw XML, I thought it was basically a representation of trees. Then I thought: the important thing isn't that it's a particularly good representation of trees, but that it is one that everyone agrees on, just like ASCII. And once established, it's hard to displace due to network effects: the new alternative would have to be much better (maybe 10 times better) to displace it. Of course, ASCII has been (mostly) replaced by Unicode, for internationalization. According to Google Trends, XML has a 43x lead, but it is declining while JSON grows.

    Will JSON replace XML as a data format? (Edited:) For which tasks? For which programmers/industries?

    Notes:

    - S-expressions (from Lisp) are another representation of trees, but one which has not gained mainstream adoption. There are many, many other proposals, such as YAML and Protocol Buffers (for binary formats).
    - I can see JSON dominating the space of communicating with client-side AJAX (AJAJ?), and this could possibly spread back into other systems transitively.
    - XML, being based on SGML, is better than JSON as a document format. I'm interested in XML as a data format.
    - XML has an established ecosystem that JSON lacks, especially ways of defining formats (XML Schema) and transforming them (XSLT). XML also has many other standards, especially for web services, but their weight and complexity can arguably count against XML and make people want a fresh start (similar to "web services" beginning as a fresh start over CORBA).


  • SQL Server 2005 Reporting Services and the Report Viewer

    - by Kendra
    I am having an issue embedding my report into an aspx page. Here's my setup: one server running SQL Server 2005 and SQL Server 2005 Reporting Services, and one workstation running XP and VS 2005. The server is not on a domain. Reporting Services is a default installation. I have one report called TestMe in a folder called TestReports, using a shared datasource.

    - If I view the report in Report Manager, it renders fine.
    - If I view the report using the http://myserver/reportserver URL, it renders fine.
    - If I view the report using http://myserver/reportserver?/TestReports/TestMe, it renders fine.
    - If I try to view the report using http://myserver/reportserver/TestReports/TestMe, it just goes to the folder navigation page of the home directory.

    My web application is impersonating somebody specific to get around the server not being on a domain. When I call the report from the Report Viewer using http://myserver/reportserver as the server and /TestReports/TestMe as the path, I get this error:

        For security reasons DTD is prohibited in this XML document. To enable DTD processing set the ProhibitDtd property on XmlReaderSettings to false and pass the settings into XmlReader.Create method.

    When I change the server to http://myserver/reportserver? I get this error when I run the report:

        Client found response content type of '', but expected 'text/xml'. The request failed with an empty response.

    I have been searching for a while and haven't found anything that fixes my issue. Please let me know if more information is needed. Thanks in advance, Kendra


  • SQL Query slow in .NET application but instantaneous in SQL Server Management Studio

    - by user203882
    Here is the SQL:

        SELECT tal.TrustAccountValue
        FROM TrustAccountLog AS tal
        INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID
        INNER JOIN Users usr ON usr.UserID = ta.UserID
        WHERE usr.UserID = 70402
          AND ta.TrustAccountID = 117249
          AND tal.trustaccountlogid =
              (SELECT MAX(tal.trustaccountlogid)
               FROM TrustAccountLog AS tal
               INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID
               INNER JOIN Users usr ON usr.UserID = ta.UserID
               WHERE usr.UserID = 70402
                 AND ta.TrustAccountID = 117249
                 AND tal.TrustAccountLogDate < '3/1/2010 12:00:00 AM')

    Basically, there is a Users table, a TrustAccount table and a TrustAccountLog table. Users contains users and their details. A User can have multiple TrustAccounts. TrustAccountLog contains an audit of all TrustAccount "movements"; a TrustAccount is associated with multiple TrustAccountLog entries.

    Now this query executes in milliseconds inside SQL Server Management Studio, but for some strange reason it takes forever in my C# app and even times out (120 s) sometimes. Here is the code in a nutshell. It gets called multiple times in a loop and the statement gets prepared:

        cmd.CommandTimeout = Configuration.DBTimeout;
        cmd.CommandText = "SELECT tal.TrustAccountValue FROM TrustAccountLog AS tal INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID INNER JOIN Users usr ON usr.UserID = ta.UserID WHERE usr.UserID = @UserID1 AND ta.TrustAccountID = @TrustAccountID1 AND tal.trustaccountlogid = (SELECT MAX (tal.trustaccountlogid) FROM TrustAccountLog AS tal INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID INNER JOIN Users usr ON usr.UserID = ta.UserID WHERE usr.UserID = @UserID2 AND ta.TrustAccountID = @TrustAccountID2 AND tal.TrustAccountLogDate < @TrustAccountLogDate2 ))";
        cmd.Parameters.Add("@TrustAccountID1", SqlDbType.Int).Value = trustAccountId;
        cmd.Parameters.Add("@UserID1", SqlDbType.Int).Value = userId;
        cmd.Parameters.Add("@TrustAccountID2", SqlDbType.Int).Value = trustAccountId;
        cmd.Parameters.Add("@UserID2", SqlDbType.Int).Value = userId;
        cmd.Parameters.Add("@TrustAccountLogDate2", SqlDbType.DateTime).Value = TrustAccountLogDate;

        // And then...
        reader = cmd.ExecuteReader();
        if (reader.Read())
        {
            double value = (double)reader.GetValue(0);
            if (System.Double.IsNaN(value))
                return 0;
            else
                return value;
        }
        else
            return 0;
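    A pattern worth testing here (only a diagnostic sketch, not a definitive fix): when the same statement is fast with literals in SSMS but slow when parameterized from an app, the usual suspects are parameter sniffing or differing SET options (SSMS runs with SET ARITHABORT ON by default). Running the parameterized form with a recompile hint shows whether a cached plan is to blame; the subquery is simplified here, since the outer query already pins the account:

        DECLARE @UserID INT, @TrustAccountID INT, @LogDate DATETIME;
        SET @UserID = 70402;            -- example values from the question
        SET @TrustAccountID = 117249;
        SET @LogDate = '20100301';

        SELECT tal.TrustAccountValue
        FROM TrustAccountLog AS tal
        INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID
        INNER JOIN Users usr ON usr.UserID = ta.UserID
        WHERE usr.UserID = @UserID
          AND ta.TrustAccountID = @TrustAccountID
          AND tal.trustaccountlogid =
              (SELECT MAX(t2.trustaccountlogid)
               FROM TrustAccountLog AS t2
               WHERE t2.TrustAccountID = @TrustAccountID
                 AND t2.TrustAccountLogDate < @LogDate)
        OPTION (RECOMPILE);  -- forces a plan for these specific values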


  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    OK, so I've just been on a SQL Server course and we discussed the usage scenarios of multiple filegroups and files when in use over local RAID and local disks, but we didn't touch SAN scenarios, so my question is as follows.

    I currently have a 250 GB database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is also on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention and that filegroups should be used for partitioning of data. However, with a SAN you obviously don't really have the same issue of disk contention that you do with a small RAID setup (or at least we don't at the moment), and Standard Edition doesn't support partitioning.

    So in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act across each file separately. Which leads me to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you
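    For reference, adding files to a new filegroup looks like this (a sketch; the database name, file names, sizes and paths are hypothetical, and the hot tables would then be created on or rebuilt into the new filegroup):

        ALTER DATABASE MyDatabase ADD FILEGROUP FG_HighWrite;

        ALTER DATABASE MyDatabase
        ADD FILE (NAME = N'MyDatabase_HighWrite1',
                  FILENAME = N'E:\SQLData\MyDatabase_HighWrite1.ndf',
                  SIZE = 10GB, FILEGROWTH = 1GB),
                 (NAME = N'MyDatabase_HighWrite2',
                  FILENAME = N'F:\SQLData\MyDatabase_HighWrite2.ndf',
                  SIZE = 10GB, FILEGROWTH = 1GB)
        TO FILEGROUP FG_HighWrite;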


  • How to read utf-8 xml from vbs and get correct character code

    - by vkjr
    I'm trying to read an XML file from a VBS script. The XML is encoded in UTF-8 and has the appropriate header. From the VBS script I use the Microsoft XMLDOM parser to read the XML:

        Dim objXMLDoc
        Set objXMLDoc = CreateObject( "Microsoft.XMLDOM" )
        objXMLDoc.load("vbs_strings.xml")

    Inside the XML I write characters by code using the &#nnn; notation. Then I read each character from the script and try to get its code using the Asc() function. For some characters this works fine and the code read is equal to the one written, but for some characters Asc() always returns code 63. What could it be? Examples (Section is a variable representing the XML node in the script):

    - If the XML contains <section>&#195;</section>, Asc(Section.Text) returns 195, and that's OK.
    - If the XML contains <section>&#110;</section>, Asc(Section.Text) returns 110, and that's OK.
    - But if the XML contains <section>&#130;</section>, <section>&#156;</section> or <section>&#140;</section>, Asc(Section.Text) returns 63, and that's definitely not good.

    Do you know why?


  • How do you remove invalid hexadecimal characters from an XML-based data source prior to constructing

    - by Oppositional
    Is there any easy/general way to clean an XML-based data source prior to using it in an XmlReader, so that I can gracefully consume XML data that is non-conformant to the hexadecimal character restrictions placed on XML?

    Note: the solution needs to handle XML data sources that use character encodings other than UTF-8, e.g. by specifying the character encoding at the XML document declaration. Not mangling the character encoding of the source while stripping invalid hexadecimal characters has been a major sticking point. The removal of invalid hexadecimal characters should only remove hexadecimal encoded values, as you can often find href values in data that happen to contain a string that would be a match for a hexadecimal character.

    Background: I need to consume an XML-based data source that conforms to a specific format (think Atom or RSS feeds), but I want to be able to consume data sources that have been published containing invalid hexadecimal characters per the XML specification. In .NET, if you have a Stream that represents the XML data source and then attempt to parse it using an XmlReader and/or XPathDocument, an exception is raised due to the inclusion of invalid hexadecimal characters in the XML data. My current attempt to resolve this issue is to parse the Stream as a string and use a regular expression to remove and/or replace the invalid hexadecimal characters, but I am looking for a more performant solution.


  • SQL Where Clause Against View

    - by Adam Carr
    I have a view (actually, it's a table-valued function, but the observed behavior is the same in both) that inner joins and left outer joins several other tables. When I query this view with a where clause similar to

        SELECT * FROM [v_MyView] WHERE [Name] LIKE '%Doe, John%'

    the query is very slow, but if I do the following

        SELECT * FROM [v_MyView] WHERE [ID] IN
        (
            SELECT [ID] FROM [v_MyView] WHERE [Name] LIKE '%Doe, John%'
        )

    it is MUCH faster. The first query takes at least 2 minutes to return, if not longer, whereas the second query returns in less than 5 seconds. Any suggestions on how I can improve this? If I run the whole thing as one SQL statement (without the use of a view) it is very fast as well.

    I believe this happens because a view has to behave like a table: if a view has OUTER JOINs, GROUP BYs or TOP ##, the results could differ depending on whether the where clause is applied before or after the view's own execution. My question is: why wouldn't SQL optimize my first query into something as efficient as my second query?
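    One thing worth checking, since a table-valued function is mentioned (a sketch with hypothetical tables, not a diagnosis of this particular view): a multi-statement TVF materializes its whole result before the outer WHERE is applied, whereas an inline TVF, like a view, is expanded into the calling query so the predicate can be optimized together with the joins:

        -- Inline form: a single RETURN (SELECT ...), no intermediate table variable.
        CREATE FUNCTION dbo.fn_PeopleWithOrders ()
        RETURNS TABLE
        AS
        RETURN
        (
            SELECT p.ID, p.Name, o.OrderDate
            FROM dbo.People AS p
            LEFT OUTER JOIN dbo.Orders AS o ON o.PersonID = p.ID
        );
        GO

        -- The LIKE filter can now be planned against the underlying tables.
        SELECT * FROM dbo.fn_PeopleWithOrders() WHERE [Name] LIKE '%Doe, John%';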


  • xml declaration not being omitted from page

    - by Mark Schultheiss
    I have an XSLT transform I am using to process an XML file, inserting the result into the body of my aspx page. Reference the following for background information: background on xml/xslt

    I have the following in my XML file (unrelated parts left out):

        <?xml version="1.0" encoding="utf-8"?>
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:msxsl="urn:schemas-microsoft-com:xslt"
            exclude-result-prefixes="msxsl"
            xmlns:myCustomStrings="urn:customStrings">
        <xsl:output method="xml" version="2.0" media-type="text/html" omit-xml-declaration="yes" indent="yes" />

    Here is the output that is relevant:

        <div id="example" />
        <?xml version="1.0" encoding="utf-8"?><div xmlns:myCustomStrings="urn:customStrings"><div id="imFormBody" class="imFormBody">

    My question relates to the output, specifically to the <?xml version="1.0" encoding="utf-8"?> which is getting included in the output anyway. Is the issue related to the custom method I have used? If so, I don't really see the need to include the xml declaration, as the namespace is in the div tag. Is there a way to ensure that this extra stuff gets left out, as I asked it to?


  • SQL Server Clustered Index: (Physical) Data Page Order

    - by scherand
    I am struggling to understand what a clustered index in SQL Server 2005 is. I read the MSDN article "Clustered Index Structures" (among other things) but I am still unsure if I understand it correctly.

    The (main) question is: what happens if I insert a row (with a "low" key) into a table with a clustered index? The above-mentioned MSDN article states:

        The pages in the data chain and the rows in them are ordered on the value of the clustered index key.

    And "Using Clustered Indexes", for example, states:

        For example, if a record is added to the table that is close to the beginning of the sequentially ordered list, any records in the table after that record will need to shift to allow the record to be inserted.

    Does this mean that if I insert a row with a very "low" key into a table that already contains a gazillion rows, literally all rows are physically shifted on disk? I cannot believe that; this would take ages, no?

    Or is it rather (as I suspect) that there are two scenarios depending on how "full" the first data page is:

    A) If the page has enough free space to accommodate the record, it is placed into the existing data page and data might be (physically) reordered within that page.

    B) If the page does not have enough free space for the record, a new data page is created (anywhere on the disk!) and "linked" to the front of the leaf level of the B-tree.

    This would then mean the "physical order" of the data is restricted to the page level (i.e. within a data page) but not to pages residing on consecutive blocks of the physical hard drive; the data pages are then just linked together in the correct order. Or formulated in an alternative way: if SQL Server needs to read the first N rows of a table that has a clustered index, it can read data pages sequentially (following the links), but these pages are not (necessarily) in sequence block-wise on disk (so the disk head has to move "randomly"). How close am I? :)
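    One way to observe this (a sketch; the database and table names are placeholders): insert low-key rows into an already full table and watch logical fragmentation rise while the page count grows only modestly, which is consistent with new pages being linked into the chain rather than all rows being rewritten:

        SELECT index_type_desc,
               avg_fragmentation_in_percent,   -- out-of-order pages in the linked list
               avg_page_space_used_in_percent, -- page fullness after splits
               page_count
        FROM sys.dm_db_index_physical_stats(
                 DB_ID(N'MyDb'), OBJECT_ID(N'dbo.MyTable'),
                 1,        -- index_id 1 = the clustered index
                 NULL,     -- all partitions
                 'DETAILED');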


  • sp_addlinkedserver on sql server 2005 giving problem

    - by Jit
    I am trying to create a linked server to a remote database (both servers are SQL Server 2005). I am able to connect to that remote server from my SQL Server Management Studio. I used the following syntax to create it, providing the IP address, user name and password of the remote server:

        EXEC sp_addlinkedserver
            @server = N'LINKSQL2005',
            @srvproduct = N'',
            @provider = N'SQLNCLI',
            @provstr = N'SERVER=<IP address of remote server>;User ID=XXXXXX;Password=***'

    The linked server is getting created, but when I try to execute a query on it I get the error below. Query used:

        select * from LINKSQL2005.<DBName>.dbo.<TableName>

        OLE DB provider "SQLNCLI" for linked server "LINKSQL2005" returned message "Communication link failure".
        Msg 10054, Level 16, State 1, Line 0
        TCP Provider: An existing connection was forcibly closed by the remote host.
        Msg 18456, Level 14, State 1, Line 0
        Login failed for user 'sa'.
        OLE DB provider "SQLNCLI" for linked server "LINKSQL2005" returned message "Invalid connection string attribute".

    Please help me; where am I making a mistake?
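    A variant worth trying (a sketch; the server address and credentials are placeholders): point the linked server at the remote host with @datasrc and map the login with sp_addlinkedsrvlogin, instead of embedding credentials in @provstr, which fits the "Invalid connection string attribute" message above:

        EXEC sp_addlinkedserver
            @server     = N'LINKSQL2005',
            @srvproduct = N'',
            @provider   = N'SQLNCLI',
            @datasrc    = N'192.168.0.10';   -- remote server IP or name (placeholder)

        EXEC sp_addlinkedsrvlogin
            @rmtsrvname  = N'LINKSQL2005',
            @useself     = N'FALSE',
            @locallogin  = NULL,             -- map all local logins
            @rmtuser     = N'XXXXXX',
            @rmtpassword = N'********';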


  • Multi-level shop, xml or sql. best practice?

    - by danrichardson
    Hello, I have a general "best practice" question regarding building a multi-level shop, which I hope doesn't get marked down/deleted, as I personally think it's quite a good "subjective" question.

    I am a developer in charge (in most part) of maintaining and evolving a CMS system and its associated front-end functionality. Over the past half year I have developed a multi-level shop system, so that an infinite depth of categories may exist down to the product level, and it all works fine. However, over the last week or so I have questioned my own methods in front-end development and the best way to present the multi-level data structure.

    I currently use a SQL Server database (2000), pull out all the shop levels, and then process them into an enumerable typed list with child enumerable typed lists, so that all levels are sorted. This seems quite process-heavy in my head, but we're not talking about thousands of rows; generally only 1-500 rows, maybe.

    I have been toying with the idea recently of storing the structure in an XML document (as well as the database), sending last-modified headers when serving/requesting the document, and then processing it as/when necessary with an XSL(T) document server side. This is quite a handy, reusable method of storing the data, but does it have more overhead given that I'm opening and closing files? The XML will also require a bit of processing to pull out blocks, if for instance I wanted to show two levels midway through the tree for a side menu. I use the above method for sitemap purposes, so there is code I have already built which does what I require, but I'm unsure what the best process is. Maybe a hybrid method which pulls out the data, sorts it and then makes an XML document/stream (XDocument/XmlDocument) for XSL processing is a good way? That is the way I currently make the CMS work for the shop.

    So really (and thanks for sticking with me on this), I am just wondering which methods other people use or recommend as being the best/most logical way of doing things. Thanks, Dan
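    On the database side, if an upgrade beyond SQL Server 2000 ever becomes an option, a recursive CTE (available from SQL Server 2005) fetches an arbitrary-depth category tree in one round trip, ready to be shaped into nested lists or XML on the server. A sketch with a hypothetical Categories table:

        WITH CategoryTree AS
        (
            -- Anchor: top-level categories
            SELECT CategoryID, ParentID, Name, 0 AS Depth
            FROM dbo.Categories
            WHERE ParentID IS NULL
            UNION ALL
            -- Recurse: children of the rows found so far
            SELECT c.CategoryID, c.ParentID, c.Name, ct.Depth + 1
            FROM dbo.Categories AS c
            INNER JOIN CategoryTree AS ct ON c.ParentID = ct.CategoryID
        )
        SELECT CategoryID, ParentID, Name, Depth
        FROM CategoryTree
        ORDER BY Depth, Name;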


  • XML problem in the basic menu example

    - by arakn0
    Hi there, I am trying to create an app with some menus, and I am following the basic example available on the official Android site: http://developer.android.com/guide/topics/ui/menus.html

    My problems appear when I define the menu in XML. After creating the folder res/menu and creating the options_menu.xml file from Eclipse, the project (in general) gives an error that can be read from the Problems tab:

        Unparsed aapt error(s)! Check the console for output
        Android Packaging Problem

    So, changing to the Console tab to get more information about the problem, this can be read:

        [2010-06-02 11:35:54 - TestAudio] Error in an XML file: aborting build.
        [2010-06-02 11:35:54 - TestAudio] W/ResourceType(11566): Bad XML block: header size 63327 or total size -144759824 is larger than data size 0
        [2010-06-02 11:35:54 - TestAudio] /home/User/workspace/TestAudio/res/menu/options_menu.xml:1: error: Error parsing XML: no element found

    The strange thing is that Eclipse recognizes the menu items that I've defined in the XML: I can reference them in the code with no problems, and my main activity builds (and so do the rest of the files). Could it be that when Eclipse creates a file, for some reason the Android SDK has problems reading it, or something similar?

    The XML code is exactly the same as the one in the example, so I don't really know what is happening. The code in options_menu.xml is this:

        <menu xmlns:android="http://schemas.android.com/apk/res/android">
            <item android:id="@+id/new_game"
                  android:title="New Game" />
            <item android:id="@+id/quit"
                  android:title="Quit" />
        </menu>

    Thanks in advance for your help!


  • how to use generated dbml classes to deserialize xml via linq?

    - by Eelco Meuter
    Hi, I have a complex data structure, which I boiled down into a dbml file with one class and six one-to-many relations. This data must also be read via XML. The XML structure is something like:

        <table id=1>
            <column 1></column 1>
            <column n></column n>
            <m-n table x>
                <column 1></column 1>
            </m-n table x>
        </table>

    where the tag <m-n table x> is one of the six related tables. The idea is to generate an XSD based upon the dbml, which I can use to create and validate an XML document. This XML can then hopefully be deserialized into the dbml classes.

    The question is: can this be done? If so, how do I generate the XSD? I use SQL Server Express 2008 R2 as a backend. Thanks in advance for your time!


  • jQuery XML loading and then innerfade effect

    - by Ryan Max
    Hello, I think I can explain myself without code, so for brevity's sake here we go: I am using jQuery to pull data from an XML file and put it into a ul on the page, with each XML entry as an li. This is working great! However, what I am trying to do afterwards is use the innerfade plugin to make a simple animation between each of the li's. It's not working though, as it still just loads the static list with each item visible (whereas if innerfade were working it would only display the first, then fade into the second, etc.). It's not an innerfade problem however, because if I add the list to the page manually (not injecting it with jQuery) then innerfade works fine.

    I'm relatively new to DOM scripting, so I think I am missing something here. I'm not quite sure how jQuery sequences everything, and I'm having trouble phrasing my question in a search-engine-friendly manner, so here I am. Is it possible to have jQuery pull the data from XML, then inject it into the page, then have innerfade work its magic? Or am I thinking about this the wrong way?

    XML code:

        $.ajax({
            type: "GET",
            url: "xml/playlist.xml",
            dataType: "xml",
            success: function(xml) {
                $(xml).find('song').each(function(){
                    var name = $(this).attr('title');
                    var date = $(this).attr('artist');
                    var message = $(this).attr('path');
                    $('<li></li>').html('<span id="an_name">'+name+'</span><span id="an_date">'+date+'</span><span id="an_message">'+message+'</span>').appendTo('#anniversary');
                });
            }
        });

    Innerfade code:

        <script type="text/javascript">
            jQuery.noConflict();
            jQuery(document).ready( function(){
                jQuery('#anniversary').innerfade({
                    speed: 1000,
                    timeout: 5000,
                    type: 'sequence',
                });
            });
        </script>


  • General SQL Server query performance

    - by Kiril
    Hey guys, this might be stupid, but databases are not my thing :)

    Imagine the following scenario: a user can create a post, and other users can reply to his post, thus forming a thread. Everything goes in a single table called Posts. All the posts that form a thread are connected with each other through a generated key called ThreadID. This means that when user #1 creates a new post, a ThreadID is generated, and every reply that follows has a ThreadID pointing to the initial post (created by user #1).

    What I am trying to do is limit the number of replies to, let's say, 20 per thread. I'm wondering which of the two approaches below is faster:

    1. I add a new integer column (e.g. Counter) to Posts. After a user replies to the initial post, I update the initial post's Counter field. If it reaches 20, I lock the thread.

    2. After a user replies to the initial post, I select all the posts that have the same ThreadID. If this collection has more than 20 items, I lock the thread.

    For further information: I am using a SQL Server database and the LINQ to SQL entity model. I'd be glad if you could tell me your opinions on the two approaches or share another, faster approach. Best regards, Kiril
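    For approach 2, counting rows is much cheaper than fetching them; with an index on ThreadID this is a very small query. A sketch (the IsLocked and ParentPostID columns are hypothetical stand-ins for however the schema marks and locks the thread's initial post):

        DECLARE @ThreadID INT;
        SET @ThreadID = 42;  -- example value

        IF (SELECT COUNT(*) FROM dbo.Posts WHERE ThreadID = @ThreadID) >= 20
        BEGIN
            UPDATE dbo.Posts
            SET IsLocked = 1                -- hypothetical lock flag
            WHERE ThreadID = @ThreadID
              AND ParentPostID IS NULL;     -- hypothetical marker for the initial post
        END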


  • SQL Server database change workflow best practices

    - by kubi
    The Background: My group has four SQL Server databases: Production, UAT, Test and Dev. I work in the Dev environment. When the time comes to promote the objects I've been working on (tables, views, functions, stored procs), I make a request of my manager, who promotes to Test. After testing, she submits a request to an admin, who promotes to UAT. After successful user testing, the same admin promotes to Production.

    The Problem: The entire process is awkward, for a few reasons. First, each person must manually track their changes: if I update, add or remove any objects, I need to track them so that my promotion request contains everything I've done. In theory, if I miss something, testing or UAT should catch it, but this isn't certain, and it's a waste of the testers' time anyway. Second, lots of changes I make are iterative and done in a GUI, which means there's no record of what changes I made, only the end result (at least as far as I know). We're in the fairly early stages of building out a data mart, so the majority of the changes made, at least count-wise, are minor things: changing the data type for a column, altering the names of tables as we crystallize what they'll be used for, tweaking functions and stored procs, etc.

    The Question: People have been doing this kind of work for decades, so I imagine there has got to be a much better way to manage the process. What I would love is if I could run a diff between two databases to see how the structure differed, use that diff to generate a change script, and use that change script as my promotion request. Is this possible? If not, are there any other ways to organize this process? For the record, we're a 100% Microsoft shop, just now updating everything to SQL Server 2008, so any tools available in that package would be fair game.


  • Optimizing TSQL code

    - by adopilot
    My job is to maintain one application which makes heavy use of SQL Server (MSSQL 2005). Until now the middle server has stored T-SQL code in XML and sent dynamic T-SQL queries without using stored procs. As I am able to change those XML queries, I want to migrate most of them to stored procs.

    The question is the following: most of my queries have the same WHERE conditions against one table. Sample:

        Select ..... from ....
        where ....
        and (a.vrsta_id = @vrsta_id or @vrsta_id = 0)
        and (a.podvrsta_id = @podvrsta_id or @podvrsta_id = 0)
        and (a.podgrupa_2 = @podgrupa2_id or @podgrupa2_id = 0)
        and ( (a.id in (select art_id from osobina_veze
                        where podosobina_id in (select ado from dbo.fn_ado_param_int(@podosobina))
                        group by art_id
                        having count(art_id) = @podosobina_count))
              or ('0' = @podosobina) )

    They also have the same WHERE conditions on another table. How should I organize my code? What is the proper way? Should I make a table-valued function that I will use in all queries, or #temp tables that each query inner joins when the proc executes? Or a #temp table filled by a table-valued function? Or should I leave all queries with this large WHERE clause and hope the indexes do their job? Or use a WITH (common table expression) statement?
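    A sketch of the #temp-table option (the base table name artikli is a guess from the alias a; the predicates are copied from the sample above): resolve the shared filter once per call, then join the small keyed result from every query in the proc:

        CREATE TABLE #filtered_art (art_id INT PRIMARY KEY);

        INSERT INTO #filtered_art (art_id)
        SELECT a.id
        FROM artikli AS a  -- hypothetical base table behind alias "a"
        WHERE (a.vrsta_id = @vrsta_id OR @vrsta_id = 0)
          AND (a.podvrsta_id = @podvrsta_id OR @podvrsta_id = 0)
          AND (a.podgrupa_2 = @podgrupa2_id OR @podgrupa2_id = 0)
          AND ( @podosobina = '0'
                OR a.id IN (SELECT art_id
                            FROM osobina_veze
                            WHERE podosobina_id IN (SELECT ado FROM dbo.fn_ado_param_int(@podosobina))
                            GROUP BY art_id
                            HAVING COUNT(art_id) = @podosobina_count) );

        -- Every query in the proc then joins the pre-filtered keys:
        -- SELECT ... FROM artikli AS a
        -- INNER JOIN #filtered_art AS f ON f.art_id = a.id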

