Search Results

Search found 41565 results on 1663 pages for 'sql xml'.

Page 333 of 1663

  • Count rows against a SQL Server (2005) table?

    - by David.Chu.ca
    I have a simple question about two options for getting a row count from a SQL Server (2005) table. I am using VS 2005. The first option is:

        SELECT id FROM Table1 WHERE dt >= startDt AND dt < endDt;

    I cache the list of ids returned by that call and then get the count from List.Count. The other option is:

        SELECT COUNT(*) FROM Table1 WHERE dt >= startDt AND dt < endDt;

    This call returns the count directly. The issue is that I have had several timeout exceptions with the second method. What I found is that Table1 is very big, with millions of rows. When I use the first option, it seems OK. I am confused by the fact that COUNT() takes more time than fetching all the rows (is that true?). Would the aggregation with COUNT() cause SQL Server to create a temporary table or cache on the server side, resulting in slow performance when the table is too big? What is the best way to get the count?
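
    A covering index on the filter column usually makes the COUNT(*) approach the cheaper of the two, because the engine can count index rows instead of scanning the whole table and nothing has to be shipped to the client. A minimal sketch, reusing the table and column names from the question:

        -- Nonclustered index on the date column so the range count
        -- can be answered from the index alone (seek + partial scan).
        CREATE NONCLUSTERED INDEX IX_Table1_dt ON Table1 (dt);

        -- The count is then computed entirely server-side.
        SELECT COUNT(*) FROM Table1 WHERE dt >= @startDt AND dt < @endDt;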

    Read the article

  • XML: what processing rules apply for values intertwined with tags?

    - by iCE-9
    I've started working on a simple XML pull-parser, and having just cleared up my understanding of what counts as correct XML syntax with regard to certain characters/sequences, ignorable whitespace and such (thank you, http://www.w3schools.com/xml/xml_elements.asp), I realized that I still don't know squat about the case sketched below (which Validome happily finds well-formed; note that I only want to use XML files for data storage, so no entities, DTDs or Schemas needed):

        <bookstore>
          <book id="1">
            <author>Kurt Vonnegut Jr.</author>
            <title>Slapstick</title>
          </book>
          We drop a pie here.
          <book id="2">Who cares anyway?
            <author>Stephen King</author>
            <title>The Green Mile</title>
          </book>
          And another one here.
          <book id="3">
            <author>Next one</author>
            <title>This time with its own title</title>
          </book>
        </bookstore>

    "We drop a pie here." and "And another one here." are values of the 'bookstore' element. "Who cares anyway?" is a value related to the second 'book' element. How are these processed, if at all? Will "We drop a pie here." and "Another one here." be concatenated to form one value for the 'bookstore' element, or are they treated separately, stored somewhere, affecting the outcome of the parsing of the element they belong to, or...?

    Read the article

  • Selecting a good SQL Server 2008 spatial index with large polygons

    - by andynormancx
    I'm having some fun trying to pick a decent SQL Server 2008 spatial index setup for a data set I am dealing with. The dataset is polygons representing contours over the whole globe. There are 106,000 rows in the table, and the polygons are stored in a geometry field. The issue I have is that many of the polygons cover a large portion of the globe. This seems to make it very hard to get a spatial index that will eliminate many rows in the primary filter. For example, look at the following query:

        SELECT "ID", "CODE", "geom".STAsBinary() as "geom"
        FROM "dbo"."ContA"
        WHERE "geom".Filter(
            geometry::STGeomFromText('POLYGON ((-142.03193662573682 59.53396984952896,
                -142.03193662573682 59.88928136451884, -141.32743833481925 59.88928136451884,
                -141.32743833481925 59.53396984952896, -142.03193662573682 59.53396984952896))', 4326)
        ) = 1

    This queries an area which intersects with only two of the polygons in the table. No matter what combination of spatial index settings I choose, that Filter() always returns around 60,000 rows. Replacing Filter() with STIntersects() of course returns just the two polygons I want, but of course takes much longer (Filter() is 6 seconds, STIntersects() is 12 seconds). Can anyone give me any hints on whether there is a spatial index setup that is likely to improve on 60,000 rows, or is my dataset just not a good match for SQL Server's spatial indexing?
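
    For reference, a geometry grid index for this kind of data might be declared along these lines. This is only a sketch (the index name and grid densities are assumptions, not from the question), and with polygons that each cover much of the globe the primary filter may still return many candidates regardless of the settings:

        CREATE SPATIAL INDEX SIdx_ContA_geom
        ON dbo.ContA (geom)
        USING GEOMETRY_GRID
        WITH (
            BOUNDING_BOX = (-180, -90, 180, 90),   -- whole-globe extent, since the data is lat/long
            GRIDS = (LOW, LOW, MEDIUM, MEDIUM),
            CELLS_PER_OBJECT = 16
        );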

    Read the article

  • How to store a list in a column of a database table.

    - by John Berryman
    Howdy! So, per Mehrdad's answer to a related question, I get it that a "proper" database table column doesn't store a list. Rather, you should create another table that effectively holds the elements of said list and then link to it directly or through a junction table.

    However, the type of list I want to create will be composed of unique items (unlike the linked question's fruit example). Furthermore, the items in my list are explicitly sorted - which means that if I stored the elements in another table, I'd have to sort them every time I accessed them. Finally, the list is basically atomic in that any time I wish to access the list, I will want to access the entire list rather than just a piece of it - so it seems silly to have to issue a database query to gather together pieces of the list.

    AKX's solution (linked above) is to serialize the list and store it in a binary column. But this also seems inconvenient because it means that I have to worry about serialization and deserialization. Is there any better solution? If there is no better solution, then why? It seems that this problem should come up from time to time.

    ... just a little more info to let you know where I'm coming from. As soon as I had just begun understanding SQL and databases in general, I was turned on to LINQ to SQL, and so now I'm a little spoiled because I expect to deal with my programming object model without having to think about how the objects are queried or stored in the database. Thanks All! John
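
    The usual relational answer to "a sorted list per parent row" is a child table with an explicit position column, so the order is stored rather than recomputed. A minimal sketch with hypothetical table and column names:

        -- One row per list element; (list_id, position) keeps one element per slot
        CREATE TABLE list_item (
            list_id  INT          NOT NULL,
            position INT          NOT NULL,
            item     VARCHAR(100) NOT NULL,
            PRIMARY KEY (list_id, position),
            UNIQUE (list_id, item)             -- items are unique within a list
        );

        -- The whole list comes back in one query, already ordered
        SELECT item FROM list_item WHERE list_id = 42 ORDER BY position;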

    Read the article

  • DataGridView not displaying a row after it is created

    - by joslinm
    Hi, I'm using Visual Studio 10 and I just created a database using SQL Server CE. Within it, I made a table CSLDataTable, and that automatically created a CSLDataSet & CSLDataTableTableAdapter. Three variables were automatically created in my MainWindow.cs class: cSLDataSet, cSLDataTableTableAdapter, cSLDataTableBindingSource. I have added a DataGridView to my form called dataGridView with data source cSLDataTableBindingSource. In my MainWindow(), I tried adding a row as a test:

        public MainWindow()
        {
            InitializeComponent();
            CSLDataSet.CSLDataTableRow row = cSLDataSet.CSLDataTable.NewCSLDataTableRow();
            row.File_ = "file";
            row.Artist = "artist11";
            row.Album = "album";
            row.Save_Structure = "save";
            row.Sent = false;
            row.Error = true;
            row.Release_Format = "release";
            row.Bit_Rate = "bitrate..";
            row.Year = "year";
            row.Physical_Format = "format";
            row.Bit_Format = "bitformat";
            row.File_Path = "File!!path";
            row.Site_Origin = "what";
            cSLDataSet.CSLDataTable.Rows.Add(row);
            cSLDataSet.AcceptChanges();
            cSLDataTableTableAdapter.Fill(cSLDataSet.CSLDataTable);
            cSLDataTableTableAdapter.Update(cSLDataSet);
            dataGridView.Refresh();
            dataGridView.Update();
        }

    In regards to the DataSet methods I tried calling, I had been trying to find a "correct" way to interact with the adapter, dataset, and datatable to successfully show the row, but to no avail. I'm rather new to using a SQL Server CE database, and I read a lot of the MSDN sites & thought I was on the right track, but I've had no luck. The DataGridView shows the headers correctly, but that new row does not show up.

    Read the article

  • Optimal two variable linear regression SQL statement (censoring outliers)

    - by Dave Jarvis
    Problem: I am looking to apply the y = mx + b equation (where m is SLOPE, b is INTERCEPT) to a data set, which is retrieved as shown in the SQL code. The values from the (MySQL) query are: SLOPE = 0.0276653965651912, INTERCEPT = -57.2338357550468.

    SQL Code:

        SELECT
          ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) /
            (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE,
          ((sum( t.YEAR ) * sum( t.YEAR * t.AMOUNT )) - (sum( t.AMOUNT ) * sum(power(t.YEAR, 2)))) /
            (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT
        FROM (
          SELECT D.AMOUNT, Y.YEAR
          FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
          WHERE
            -- For a specific city ... --
            C.ID = 8590 AND
            -- Find all the stations within a 15 unit radius ... --
            SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < 15 AND
            -- Gather all known years for that station ... --
            S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
            -- The data before 1900 is shaky; insufficient after 2009. --
            Y.YEAR BETWEEN 1900 AND 2009 AND
            -- Filtered by all known months ... --
            M.YEAR_REF_ID = Y.ID AND
            -- Whittled down by category ... --
            M.CATEGORY_ID = '001' AND
            -- Into the valid daily climate data. --
            M.ID = D.MONTH_REF_ID AND
            D.DAILY_FLAG_ID <> 'M'
          GROUP BY Y.YEAR
          ORDER BY Y.YEAR
        ) t

    Data: The data is visualized here (with five outliers highlighted):

    Questions:

    1. How do I return the y value against all rows without repeating the same query to collect and collate the data? That is, how do I "reuse" the list of t values?
    2. How would you change the query to eliminate outliers (at an 85% confidence interval)?
    3. The following results (to calculate the start and end points of the line) appear incorrect. Why are the results off by ~10 degrees (e.g., outliers skewing the data)?

        (1900 * 0.0276653965651912) + (-57.2338357550468) = -4.66958228
        (2009 * 0.0276653965651912) + (-57.2338357550468) = -1.65405406

    I would have expected the 1900 result to be around 10 (not -4.67) and the 2009 result to be around 11.50 (not -1.65). Thank you!
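
    One common way to "reuse" the t rows without re-running the aggregation per row is to compute the slope and intercept once in a derived table and cross join it back to the per-year values. A sketch, assuming a pre-aggregated source named yearly holding YEAR and AMOUNT (that name is an assumption, not part of the original query):

        SELECT
          y.YEAR,
          y.AMOUNT,
          r.SLOPE * y.YEAR + r.INTERCEPT AS PREDICTED   -- y = mx + b per row
        FROM yearly y
        CROSS JOIN (
          SELECT
            ((sum(YEAR) * sum(AMOUNT)) - (count(1) * sum(YEAR * AMOUNT))) /
              (power(sum(YEAR), 2) - count(1) * sum(power(YEAR, 2))) AS SLOPE,
            ((sum(YEAR) * sum(YEAR * AMOUNT)) - (sum(AMOUNT) * sum(power(YEAR, 2)))) /
              (power(sum(YEAR), 2) - count(1) * sum(power(YEAR, 2))) AS INTERCEPT
          FROM yearly
        ) r;

    In older MySQL versions without CTEs, yearly could be the original inner SELECT repeated, or materialized once into a temporary table so the heavy join only runs a single time.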

    Read the article

  • SQL command to get field of a maximum value, without making two selects

    - by António Capelo
    I'm starting to learn SQL and I'm working on this exercise: I have a "books" table which holds the info on every book (including price and genre ID). I need to get the name of the genre which has the highest average price. I suppose that I first need to group the prices by genre and then retrieve the name of the highest. I know that I can get the GENRE vs COST results with the following:

        select b.genre, round(avg(b.price),2) as cost
        from books b
        group by b.genre;

    My question is: to get the genre with the highest average price from that result, do I have to write

        select aux.genre
        from (
          select b.genre, round(avg(b.price),2) as cost
          from books b
          group by b.genre
        ) aux
        where aux.cost = (
          select max(aux.cost)
          from (
            select b.genre, round(avg(b.price),2) as cost
            from books b
            group by b.genre
          ) aux
        );

    Is that bad practice, or isn't there another way? I get the correct result but I'm not comfortable with creating the same selection twice. I'm not using PL/SQL so I can't use variables or anything like that. Any help will be appreciated. Thanks in advance!
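
    A shorter pattern is to order the aggregated rows by average price and keep only the first one; a sketch (the row-limiting clause depends on the database: LIMIT 1 in MySQL/PostgreSQL, TOP 1 in SQL Server, FETCH FIRST 1 ROW ONLY in standard SQL and Oracle 12c+), noting that ties for the highest average are not handled here:

        select b.genre
        from books b
        group by b.genre
        order by avg(b.price) desc
        limit 1;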

    Read the article

  • Sql Exception: Error converting data type numeric to numeric

    - by Lucifer
    Hello. We have a very strange issue with a database that has been moved from staging to production. The first time the database was moved it was by detaching, copying and reattaching; the second time we tried restoring from a backup of the staging database. Both SQL Servers are the same version of MS SQL 2008, running on 64-bit hardware. The code accessing the database is the same build, built using the .NET 2.0 framework. Here is the error message and some of the stack trace:

        Exception Details: System.Data.SqlClient.SqlException: Error converting data type numeric to numeric.

        Stack Trace:
        [SqlException (0x80131904): Error converting data type numeric to numeric.]
          System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) +1953274
          System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) +4849707
          System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) +194
          System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) +2392
          System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) +204
          System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) +954
          System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) +162
          System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe) +175
          System.Data.SqlClient.SqlCommand.ExecuteNonQuery() +137

        Version Information: Microsoft .NET Framework Version:2.0.50727.4200; ASP.NET Version:2.0.50727.4016
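
    This error usually points at a numeric(precision, scale) mismatch between what the client or a stored-procedure parameter expects and what the target column actually holds, so a first diagnostic step (a sketch, not from the original post) is to compare the declared precision and scale on both servers:

        -- Run on both staging and production and diff the output.
        SELECT TABLE_NAME, COLUMN_NAME, NUMERIC_PRECISION, NUMERIC_SCALE
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE DATA_TYPE IN ('numeric', 'decimal')
        ORDER BY TABLE_NAME, COLUMN_NAME;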

    Read the article

  • Need advice on comparing the performance of 2 equivalent linq to sql queries

    - by uvita
    I am working on a tool to optimize LINQ to SQL queries. Basically it intercepts the LINQ execution pipeline and makes some optimizations, for example removing a redundant join from a query. Of course, there is an overhead in the execution time before the query gets executed in the DBMS, but the query should then be processed faster. I don't want to use a SQL profiler because I know that the generated query will perform better in the DBMS than the original one; I am looking for a correct way of measuring the global time between the creation of the query in LINQ and the end of its execution. Currently, I am using the Stopwatch class and my code looks something like this:

        var sw = new Stopwatch();
        sw.Start();
        const int amount = 100;
        for (var i = 0; i < amount; i++)
        {
            ExecuteNonOptimizedQuery();
        }
        sw.Stop();
        Console.WriteLine("Executing the query {2} times took: {0}ms. On average, each query took: {1}ms",
            sw.ElapsedMilliseconds, sw.ElapsedMilliseconds / amount, amount);

    Basically the ExecuteNonOptimizedQuery() method creates a new DataContext, creates a query and then iterates over the results. I did this for both versions of the query, the normal one and the optimized one. I took the idea from this post from Frans Bouma. Is there any other approach or consideration I should take? Thanks in advance!

    Read the article

  • Excel and SQL, order by help

    - by perlnoob
    I'm stuck in Excel 2007, running a query. It worked until I wanted to add a 2nd row containing "field 2".

        Select "Site Updates"."Posted By", "Site Uploaded"."Site Upload Date"
        From site_info.dbo."Site Updates"
        Where ("Site Updates"."Posted By") AND "Site Uploaded"."Site Upload Date">={ts '2010-05-01 00:00:00'}), ("Site Location"='Chicago')
        Union all
        Select "Site Updates"."Posted By", "Site Uploaded"."Site Upload Date"
        From site_info.dbo."Site Updates"
        Where ("Site Updates"."Posted By") AND "Site Uploaded"."Site Upload Date">={ts '2010-05-01 00:00:00'}), ("Site Location"='Denver')
        Order By "Site Location" ASC;

    Basically I want 2 different cells for the locations, for example:

        name  - Chicago - Denver
        user1 - 100     - 20
        user2 - 34      - 1002

    Right now, for some odd reason, it's combining them like:

        name  - Chicago
        user1 - 120
        user2 - 1036

    Please note updating to the 2010 beta is not a viable option for me at this point. Any and all input that will help me is greatly appreciated. I have read over http://www.techonthenet.com/sql/order_by.php, however it hasn't gotten me very far with this question. If you have another SQL resource you recommend for people trying to get their feet wet, I'd greatly appreciate it. If it helps, all the info is in the same table.
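
    If the goal is one column per location, a conditional-aggregation query is usually simpler than a UNION. This is only a sketch under assumed column names ("Posted By", "Site Location", "Site Upload Date" on a single table, which the question does not fully confirm), counting one row per upload:

        Select "Posted By",
               Sum(Case When "Site Location" = 'Chicago' Then 1 Else 0 End) As Chicago,
               Sum(Case When "Site Location" = 'Denver'  Then 1 Else 0 End) As Denver
        From site_info.dbo."Site Updates"
        Where "Site Upload Date" >= {ts '2010-05-01 00:00:00'}
        Group By "Posted By";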

    Read the article

  • Optimal two variable linear regression SQL statement

    - by Dave Jarvis
    Problem: I am looking to apply the y = mx + b equation (where m is SLOPE, b is INTERCEPT) to a data set, which is retrieved as shown in the SQL code. The values from the (MySQL) query are: SLOPE = 0.0276653965651912, INTERCEPT = -57.2338357550468.

    SQL Code:

        SELECT
          ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) /
            (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE,
          ((sum( t.YEAR ) * sum( t.YEAR * t.AMOUNT )) - (sum( t.AMOUNT ) * sum(power(t.YEAR, 2)))) /
            (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT
        FROM (
          SELECT D.AMOUNT, Y.YEAR
          FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
          WHERE
            -- For a specific city ... --
            C.ID = 8590 AND
            -- Find all the stations within a 5 unit radius ... --
            SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < 15 AND
            -- Gather all known years for that station ... --
            S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
            -- The data before 1900 is shaky; and insufficient after 2009. --
            Y.YEAR BETWEEN 1900 AND 2009 AND
            -- Filtered by all known months ... --
            M.YEAR_REF_ID = Y.ID AND
            -- Whittled down by category ... --
            M.CATEGORY_ID = '001' AND
            -- Into the valid daily climate data. --
            M.ID = D.MONTH_REF_ID AND
            D.DAILY_FLAG_ID <> 'M'
          GROUP BY Y.YEAR
          ORDER BY Y.YEAR
        ) t

    Data: The data is visualized here:

    Questions:

    1. How do I return the y value against all rows without repeating the same query to collect and collate the data? That is, how do I "reuse" the list of t values?
    2. How would you change the query to eliminate outliers (at an 85% confidence interval)?
    3. The following results (to calculate the start and end points of the line) appear incorrect. Why are the results off by ~10 degrees (e.g., outliers skewing the data)?

        (1900 * 0.0276653965651912) + (-57.2338357550468) = -4.66958228
        (2009 * 0.0276653965651912) + (-57.2338357550468) = -1.65405406

    I would have expected the 1900 result to be around 10 (not -4.67) and the 2009 result to be around 11.50 (not -1.65). Thank you!

    Read the article

  • How to convert source code to an XML-based representation of the AST?

    - by autobiographer
    I want to get an XML representation of the AST of Java and C code. I asked this question 3 months ago, but the solutions weren't comfortable for me. srcML seems to be a good solution for this problem, but it does not support line numbers and columns, and I need that feature. About Elsa: "There is ongoing effort to export the Elsa AST as an XML document; we expect to be able to advertise this in the next public release." DMS... I didn't understand that one. Especially for Java, there is JavaML, which supports line numbers, but the SourceForge page doesn't list any files. Question: is there software available which supports conversion of an AST into XML with line numbers (and columns), especially for Java and C/C++? Is there an alternative to JavaML and srcML? PS: I don't want parser generators. I hope to find a tool which can be used on the console by typing ./my-xml-generator Test.java (or something like that)... or a Java implementation would be great too.

    Read the article

  • Should I write more SQL to be more efficient, or less SQL to be less buggy?

    - by RenderIn
    I've been writing a lot of one-off SQL queries to return exactly what a certain page needs and no more. I could instead reuse existing queries and issue a number of SQL requests proportional to the number of records on the page. As an example, I have a query to return People and a query to return Job Details for a person. To return a list of people with their job details I could query once for people and then once for each person to retrieve their job details. I've found that in most cases that solution returns things in a reasonable amount of time, but I don't know how well it will scale in my environment. Instead I've been writing queries to join people + job details, or people + salary history, etc. I'm looking at my models and I see how I could shave off maybe 30% of my code if I were to re-use existing queries. This is a big temptation. Is it a bad thing to go for reuse over efficiency in general, or does it all come down to the specific situation? Should I first do it the easy way and then optimize later, or is it best to get the code knocked out while everything is fresh in my mind? Thoughts, experiences?
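
    The trade-off described here is the classic N+1 query problem: one query per person for job details versus a single joined query. A sketch of the joined form, with hypothetical table and column names since the schema isn't given:

        -- One round trip instead of 1 + N: each person row is repeated
        -- once per matching job_detail row, and the client regroups them.
        SELECT p.person_id, p.name, j.job_title, j.start_date, j.end_date
        FROM person p
        LEFT JOIN job_detail j ON j.person_id = p.person_id
        ORDER BY p.person_id;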

    Read the article

  • Calculate differences between rows while grouping with SQL

    - by Guido
    I have a PostgreSQL table containing movements of different items (models) between warehouses. For example, the following record means that 5 units of model 1 have been sent from warehouse 1 to 2:

        source  target  model  units
        ------  ------  -----  -----
        1       2       1      5

    I am trying to build a SQL query to obtain the difference between units sent and received, grouped by models. Again with an example:

        source  target  model  units
        ------  ------  -----  -----
        1       2       1      5      -- 5 sent from 1 to 2
        1       2       2      1
        2       1       1      2      -- 2 sent from 2 to 1
        2       1       1      1      -- 1 more sent from 2 to 1

    The result should be:

        source  target  model  diff
        ------  ------  -----  ----
        1       2       1      2     -- 5 sent minus 3 received
        1       2       2      1

    I wonder if this is possible with a single SQL query. Here is the table creation script and some data, just in case anyone wants to try it:

        CREATE TEMP TABLE movements (
          source INTEGER,
          target INTEGER,
          model  INTEGER,
          units  INTEGER
        );

        insert into movements values (1,2,1,5);
        insert into movements values (1,2,2,1);
        insert into movements values (2,1,1,2);
        insert into movements values (2,1,1,1);
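
    One way to do it in a single query (a sketch; it reports each warehouse pair from the perspective of the lower-numbered warehouse, which matches the expected output above) is to normalise the direction with LEAST/GREATEST and sum signed units:

        SELECT LEAST(source, target)    AS source,
               GREATEST(source, target) AS target,
               model,
               SUM(CASE WHEN source < target THEN units ELSE -units END) AS diff
        FROM movements
        GROUP BY LEAST(source, target), GREATEST(source, target), model
        ORDER BY 1, 2, 3;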

    Read the article

  • How do I setup Linq to SQL and WCF

    - by Jisaak
    So I'm venturing out into the world of LINQ and WCF web services and I can't seem to make the magic happen. I have a VERY basic WCF web service going and I can get my old SqlConnection calls to work and return a DataSet. But I can't/don't know how to get the LINQ to SQL queries to work. I'm guessing it might be a permissions problem, since I need to connect to the SQL database with a specific set of credentials, but I don't know how I can test if that is the issue. I've tried using both of these connection strings and neither seems to give me a different result:

        <add name="GeoDataConnectionString"
             connectionString="Data Source=SQLSERVER;Initial Catalog=GeoData;Integrated Security=True"
             providerName="System.Data.SqlClient" />

        <add name="GeoDataConnectionString"
             connectionString="Data Source=SQLSERVER;Initial Catalog=GeoData;User ID=domain\userName; Password=blahblah; Trusted_Connection=true"
             providerName="System.Data.SqlClient" />

    Here is the function in my service that does the query (the interface adds the [OperationContract]):

        public string GetCity(int cityId)
        {
            GeoDataContext db = new GeoDataContext();

            var city = from c in db.Cities
                       where c.CITY_ID == 30429
                       select c.DESCRIPTION;

            return city.ToString();
        }

    The GeoData.dbml only has one simple table in it with a list of city ids and city names. I have also changed the "Serialization Mode" on the DataContext to "Unidirectional", which from what I've read needs to be done for WCF. When I run the service I get this as the return:

        SELECT [t0].[DESCRIPTION]
        FROM [dbo].[Cities] AS [t0]
        WHERE [t0].[CITY_ID] = @p0

    Dang, so as I'm writing this I realize that maybe my query is all messed up?

    Read the article

  • injection attack (I thought I was protected!) <?php /**/eval(base64_decode( everywhere

    - by Cyprus106
    I've got a fully custom PHP site with a lot of database calls. I just got injection hacked. This little chunk of code below showed up in dozens of my PHP pages:

        <?php /**/ eval(base64_decode(big string of code....

    I've been pretty careful about my SQL calls and such; they're all in this format:

        $query = sprintf("UPDATE Sales SET `Shipped`='1', `Tracking_Number`='%s' WHERE ID='%s' LIMIT 1 ;",
            mysql_real_escape_string($trackNo),
            mysql_real_escape_string($id));
        $result = mysql_query($query);
        mysql_close();

    For the record, I rarely use mysql_close() at the end though; that just happened to be the code I grabbed. I can't think of any places where I don't use mysql_real_escape_string() (although I'm sure there's probably a couple; I'll be grepping soon to find out). There are also no places where users can put in custom HTML or anything. In fact, most of the user-accessible pages, if they use SQL calls at all, are almost inevitably "SELECT * FROM" pages that use a GET or POST, depending. Obviously I need to beef up my security, but I've never had an attack like this and I'm not positive what I should do. I've decided to put limits on all my inputs and go through looking to see if I missed a mysql_real_escape_string somewhere... Anybody else have any suggestions? Also... what does this type of code do? Why is it there?

    Read the article

  • Linq to sql Repository pattern, Some doubts

    - by MindlessProgrammer
    I am using the repository pattern with LINQ to SQL, with one repository class per table. I want to know whether I am doing this in a good/standard way.

        ContactRepository
            Contact GetByID()
            Contact GetAll()

        ContactTagRepository
            List<ContactTag> Get(long contactID)
            List<ContactTag> GetAll()
            List<ContactTagDetail> GetAllDetails()

        class ContactTagDetail
        {
            public Contact Contact { get; set; }
            public ContactTag ContactTag { get; set; }
        }

    When I need a contact I call a method in ContactRepository, and the same for ContactTag. But when I need a contact and its tags together, I call GetAllDetails() in the ContactTag repository. It does not return the ContactTag entity generated by the ORM; instead it returns a ContactTagDetail entity containing both the Contact and the ContactTag generated by the ORM. I know I could simply call GetAll() in the ContactTag repository and access Contact.ContactTag, but as this is LINQ to SQL there is no option to defer loading at the query level, so whenever I need an entity with a related entity I create a projection class. Another doubt is where I really need to write the method. I could do it in both the Contact and ContactTag repositories, like GetAllWithTags() or something in ContactRepository, but I am doing it in the ContactTag repository. What are your suggestions?

    Read the article

  • SQL to CodeIgniter Array Missing Data Issue

    - by SamD
    $query = $this->db->query("
        SELECT t1.numberofbets, t1.profit, t2.seven_profit, t3.28profit,
               user.user_id, username, password, email, balance, user.date_added,
               activation_code, activated
        FROM user
        LEFT JOIN (SELECT user_id, SUM(amount_won) AS profit, count(tip_id) AS numberofbets
                   FROM tip GROUP BY user_id) as t1
               ON user.user_id = t1.user_id
        LEFT JOIN (SELECT user_id, SUM(amount_won) AS seven_profit
                   FROM tip WHERE date_settled > '$seven_daystime' GROUP BY user_id) as t2
               ON user.user_id = t2.user_id
        LEFT JOIN (SELECT user_id, SUM(amount_won) AS 28profit
                   FROM tip WHERE date_settled > '$twoeight_daystime' GROUP BY user_id) as t3
               ON user.user_id = t3.user_id
        where activated = 1
        GROUP BY user.user_id
        ORDER BY user.date_added DESC");

    return $query->result_array();

    The query works fine running it in phpMyAdmin and returns complete results (in the attached image). However, when printing the array in CodeIgniter, it has no value for one field, seven_profit, where it is there in the SQL query run in phpMyAdmin. It is just the discrepancy in this one field, from SQL to PHP array... I just can't see why, when printing the array, that one field, which should have a value of 26, contains nothing. Any ideas? I changed the field name so it doesn't start with a number in an attempt to fix it, but it made no difference. I know this is complex and looks horrible; any help, or just hearing from people who have come across something similar, would be great. Thanks, Sam

    Read the article

  • Is there a way to parse XML via SAX/DOM with line numbers available per node.

    - by Chris
    I already have written a DOM parser for a large XML document format that contains a number of items that can be used to automatically generate Java code. This is limited to small expressions that are then merged into a dynamically generated Java source file. So far - so good. Everything works. BUT - I wish to be able to embed the line number of the XML node where the Java code was included from (so that if the configuration contains uncompilable code, each method will have a pointer to the source XML document and the line number for ease of debugging). I don't require the line number at parse-time and I don't need to validate the XML Source Document and throw an error at a particular line number. I need to be able to access the line number for each node and attribute in my DOM or per SAX event. Any suggestions on how I might be able to achieve this? P.S. Also, I read the StAX has a method to obtain line number whilst parsing, but ideally I would like to achieve the same result with regular SAX/DOM processing in Java 4/5 rather than become a Java 6+ application or take on extra .jar files.

    Read the article

  • How to use Hibernate SchemaUpdate class with a JPA persistence.xml?

    - by John Rizzo
    I've a main method using SchemaUpdate to display at the console what tables to alter/create, and it works fine in my Hibernate project:

        public static void main(String[] args) throws IOException {
            // first we prepare the configuration
            Properties hibProps = new Properties();
            hibProps.load(Thread.currentThread().getContextClassLoader()
                    .getResourceAsStream("jbbconfigs.properties"));

            Configuration cfg = new AnnotationConfiguration();
            cfg.configure("/hibernate.cfg.xml").addProperties(hibProps);

            // We create the SchemaUpdate thanks to the configs
            SchemaUpdate schemaUpdate = new SchemaUpdate(cfg);

            // The update is executed in script mode only
            schemaUpdate.execute(true, false);
            ...

    I'd like to reuse this code in a JPA project, having no hibernate.cfg.xml file (and no .properties file), but a persistence.xml file (autodetected in the META-INF directory as specified by the JPA spec). I tried this too simple adaptation:

        Configuration cfg = new AnnotationConfiguration();
        cfg.configure();

    but it failed with that exception:

        Exception in thread "main" org.hibernate.HibernateException: /hibernate.cfg.xml not found

    Has anybody done that? Thanks.

    Read the article

  • Advice Please: SQL Server Identity vs Unique Identifier keys when using Entity Framework

    - by c.batt
    I'm in the process of designing a fairly complex system. One of our primary concerns is supporting SQL Server peer-to-peer replication. The idea is to support several geographically separated nodes. A secondary concern has been using a modern ORM in the middle tier. Our first choice has always been Entity Framework, mainly because the developers like to work with it. (They love the LINQ support.)

    So here's the problem: With peer-to-peer replication in mind, I settled on using uniqueidentifier with a default value of newsequentialid() for the primary key of every table. This seemed to provide a good balance between avoiding key collisions and reducing index fragmentation. However, it turns out that the current version of Entity Framework has a very strange limitation: if an entity's key column is a uniqueidentifier (GUID) then it cannot be configured to use the default value (newsequentialid()) provided by the database. The application layer must generate the GUID and populate the key value.

    So here's the debate:

    1. abandon Entity Framework and use another ORM:
       - use NHibernate and give up LINQ support
       - use linq2sql and give up future support (not to mention get bound to SQL Server on DB)
    2. abandon GUIDs and go with another PK strategy
    3. devise a method to generate sequential GUIDs (COMBs?) at the application layer

    I'm leaning towards option 1 with linq2sql (my developers really like linq2[stuff]) and 3. That's mainly because I'm somewhat ignorant of alternate key strategies that support the replication scheme we're aiming for while also keeping things sane from a developer's perspective. Any insight or opinion would be greatly appreciated.
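
    For context, the database-side default being discussed looks like this in T-SQL; a sketch with a hypothetical table name, showing newsequentialid() as the column default so inserts that omit the key still receive sequential GUIDs:

        CREATE TABLE dbo.Customer (
            CustomerId uniqueidentifier NOT NULL
                CONSTRAINT DF_Customer_CustomerId DEFAULT NEWSEQUENTIALID(),
            Name nvarchar(100) NOT NULL,
            CONSTRAINT PK_Customer PRIMARY KEY CLUSTERED (CustomerId)
        );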

    Read the article

  • What arguments to use to explain why SQL Server is far better than a flat file

    - by jamone
    The higher-ups in my company were told by good friends that flat files are the way to go, and we should switch from SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with, we have 10 billion records in quite a few of them, with upwards of 100k new records a day and who knows how many updates... Me and a couple of others need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP. We thought of making a simple console app that tests/times the same interactions between a flat file (stored on the network) and SQL over the network, doing large inserts, searches, updates etc., along with things like random network disconnects. This would show them how bad flat files can be, especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far:

    - Security
    - Concurrent access
    - Performance with large amounts of data
    - Amount of time to do such a massive rewrite/switch
    - Lack of transactions
    - PITA to map relational data to flat files
    - NTFS doesn't support tons of files in a directory well

    I fear that this will be a great post on The Daily WTF someday if I can't stop it now.

    Read the article

  • sql statement supposed to have 2 distinct rows, but only 1 is returned

    - by jello
    I have an SQL statement that is supposed to return 2 rows: the first with psychological_id = 1, and the second with psychological_id = 2. Here is the SQL statement:

        select * from psychological where patient_id = 12 and symptom = 'delire';

    But with this code, which populates an array list with what is supposed to be 2 different rows, two rows exist, but with the same values: the second row.

        OneSymptomClass oneSymp = new OneSymptomClass();
        ArrayList oneSympAll = new ArrayList();

        string connStrArrayList = "Data Source=.\\SQLEXPRESS;AttachDbFilename=|DataDirectory|\\PatientMonitoringDatabase.mdf; " +
                                  "Initial Catalog=PatientMonitoringDatabase; " +
                                  "Integrated Security=True";

        string queryStrArrayList = "select * from psychological where patient_id = " + patientID.patient_id +
                                   " and symptom = '" + SymptomComboBoxes[tag].SelectedItem + "';";

        using (var conn = new SqlConnection(connStrArrayList))
        using (var cmd = new SqlCommand(queryStrArrayList, conn))
        {
            conn.Open();
            using (SqlDataReader rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                {
                    oneSymp.psychological_id = Convert.ToInt32(rdr["psychological_id"]);
                    oneSymp.patient_history_date_psy = (DateTime)rdr["patient_history_date_psy"];
                    oneSymp.strength = Convert.ToInt32(rdr["strength"]);
                    oneSymp.psy_start_date = (DateTime)rdr["psy_start_date"];
                    oneSymp.psy_end_date = (DateTime)rdr["psy_end_date"];
                    oneSympAll.Add(oneSymp);
                }
            }
            conn.Close();
        }

        OneSymptomClass testSymp = oneSympAll[0] as OneSymptomClass;
        MessageBox.Show(testSymp.psychological_id.ToString());

    The message box outputs "2", while it's supposed to output "1". Anyone got an idea what's going on?

    Read the article

  • How to generate XML from an Excel VBA macro?

    - by SuperNES
    So, I've got a bunch of content that was delivered to us in the form of Excel spreadsheets. I need to take that content and push it into another system. The other system takes its input from an XML file. I could do all of this by hand (and trust me, management has no problem making me do that!), but I'm hoping there's an easy way to write an Excel macro that would generate the XML I need instead. This seems like a better solution to me, as this is a job that will need to be repeated regularly (we'll be getting a LOT of content in Excel sheets) and it just makes sense to have a batch tool that does it for us. However, I've never experimented with generating XML from Excel spreadsheets before. I have a little VBA knowledge but I'm a newbie to XML. I guess my problem in Googling this is that I don't even know what to Google for. Can anyone give me a little direction to get me started? Does my idea sound like the right way to approach this problem, or am I overlooking something obvious? Thanks StackOverflow!

    Read the article

  • SQL Server 2008 spatial index and CPU utilization with MapGuide Open Source 2.1

    - by Antonio de la Peña
    I have a SQL Server table with hundreds of thousands of geometry-type parcels. I have made indexes on them, trying different combinations of density and objects-per-cell settings. So far I'm settling for LOW, LOW, MEDIUM, MEDIUM and 16 objects per cell, and I made a stored procedure that sets the bounding box according to the extents of the entities in the table. There is an incredible performance boost, from queries taking almost minutes without the index to less than seconds, and it gets faster when the zoom is closer and thus fewer objects are displayed. Yet CPU utilization goes to 100% when querying for features, even when the queries themselves are fast. I'm worried this will not fly in a production environment. I am using MapGuide Open Source 2.1 for this project, but I am positive the CPU load is caused by SQL Server. I wonder if my indexes are set up properly. I haven't found any clear documentation on how to set them up; every article I've read basically says "it depends..." but nothing specific. Do you have any recommendations for me, including books or articles? Thank you.
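
    One way to check whether the grid settings are actually doing their job is SQL Server 2008's built-in diagnostic procedure for geometry indexes; a sketch with hypothetical table, index name and query window:

        DECLARE @window geometry;
        SET @window = geometry::STGeomFromText('POLYGON ((0 0, 0 10, 10 10, 10 0, 0 0))', 0);  -- representative query area

        EXEC sp_help_spatial_geometry_index
             @tabname = 'dbo.Parcels',           -- hypothetical table name
             @indexname = 'SIdx_Parcels_geom',   -- hypothetical index name
             @verbose_output = 1,
             @query_sample = @window;

    The output includes primary-filter efficiency figures, which show how many candidate rows the index hands to the exact filter for that sample window and can guide the grid/cells-per-object choice.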

    Read the article
