Search Results

Search found 31588 results on 1264 pages for 'linq to sql designer'.


  • delete row from result set in web sql with javascript

    - by Kaijin
    I understand that the result set from Web SQL isn't quite an array, more of an object. I'm cycling through a result set and, to speed things up, I'd like to remove a row once it's been found. I've tried "delete" and "splice"; the former does nothing and the latter throws an error. Here's a piece of what I'm trying to do; notice the delete call in processSelectFromReverse:

        function selectFromReverse(reverseRay, suggRay) {
            var reverseString = reverseRay.toString();
            db.transaction(function (tx) {
                tx.executeSql('SELECT votecount, comboid FROM counterCombos WHERE comboid IN (' + reverseString + ') AND votecount>0', [],
                    function (tx, results) {
                        processSelectFromReverse(results, suggRay);
                    });
            }, function () { onError });
        }

        function processSelectFromReverse(results, suggRay) {
            var i = suggRay.length;
            while (i--) {
                var j = results.rows.length;
                var found = 0;
                while (j--) {
                    console.log('searching');
                    if (suggRay[i].reverse == results.rows.item(j).comboid) {
                        delete results.rows.item(j);
                        console.log('found');
                        found++;
                        break;
                    }
                }
                if (found == 0) {
                    console.log('lost');
                }
            }
        }

    Read the article

  • Getting the first of a GROUP BY clause in SQL

    - by Michael Bleigh
    I'm trying to implement single-column regionalization for a Rails application and I'm running into some major headaches with a complex SQL need. For this system, a region can be represented by a country code (e.g. us), an uppercase continent code (e.g. NA), or NULL, indicating the "default" information. I need to group these items by some relevant information such as a foreign key (we'll call it external_id). Given a country and its continent, I need to be able to select only the most specific region available. So if records exist with the country code, I select them. If not, I want records with the continent code. Failing that, I want records with a NULL code so I can receive the default values. So far I've figured that I may be able to use a generated CASE statement to get an arbitrary sort order. Something like this:

        SELECT *,
               CASE region WHEN 'us' THEN 1 WHEN 'NA' THEN 2 ELSE 3 END AS region_sort
        FROM my_table
        WHERE region IN ('us','NA') OR region IS NULL
        GROUP BY external_id
        ORDER BY region_sort

    The problem is that, without an aggregate function, the actual data returned by the GROUP BY for a given row seems to be untameable. How can I massage this query to make it return only the first record of the region_sort ordered groups?
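
    One approach, sketched below on the assumption that the database supports window functions (MySQL 8+, PostgreSQL): rank the rows per external_id by specificity and keep only the top-ranked row, sidestepping GROUP BY entirely.

        SELECT *
        FROM (
            SELECT t.*,
                   ROW_NUMBER() OVER (
                       PARTITION BY external_id
                       ORDER BY CASE region WHEN 'us' THEN 1 WHEN 'NA' THEN 2 ELSE 3 END
                   ) AS rn
            FROM my_table t
            WHERE region IN ('us', 'NA') OR region IS NULL
        ) ranked
        WHERE rn = 1;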

    Read the article

  • Saving multiple items per single database cell...

    - by eugeneK
    Hi, I have a countries list. Each user can check multiple countries. Once saved, this "user country list" will be used to decide whether other users fit into the countries a certain user chose. The question is what the most efficient approach to this problem would be. One option I have is to save the user selection as a delimited list, like Canada,USA,France..., in a single varchar(max) field. The problem with that is what happens when, say, a user from Germany enters a page I perform this check on: to search for Germany I would need to fetch all rows and split each field to check against the value, or use SQL LIKE, which again is pretty damn slow. If you have a better solution or some tips I would be glad to hear them. Just to be clear: many users will have their own selections of countries from which (and only which) they want users to land on their page, while millions of users will reach those pages. So the faster the approach, the better. Technology: MSSQL and ASP.NET. Thanks
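
    The usual alternative to a delimited varchar is a normalized junction table, so the lookup becomes an indexed seek instead of string parsing. A minimal sketch (table and column names are illustrative, not from the question):

        CREATE TABLE UserCountries (
            UserId    INT NOT NULL,
            CountryId INT NOT NULL,
            PRIMARY KEY (UserId, CountryId)
        );

        -- find the page owners who accept visitors from a given country
        SELECT UserId
        FROM UserCountries
        WHERE CountryId = @visitorCountryId;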

    Read the article

  • SQL Server CE rollback does not undo delete

    - by INTPnerd
    I am using SQL Server CE 3.5 and C# with the .NET Compact Framework 3.5. In my code I insert a row, then start a transaction, then delete that row from a table, and then roll back that transaction. But this does not undo the deletion. Why not? Here is my code:

        SqlCeConnection conn = ConnectionSingleton.Instance;
        conn.Open();
        UsersTable table = new UsersTable();
        table.DeleteAll();
        MessageBox.Show("user count in beginning after delete: " + table.CountAll());
        table.Insert(new User() { Id = 0, IsManager = true, Pwd = "1234", Username = "Me" });
        MessageBox.Show("user count after insert: " + table.CountAll());
        SqlCeTransaction transaction = conn.BeginTransaction();
        table.DeleteAll();
        transaction.Rollback();
        transaction.Dispose();
        MessageBox.Show("user count after rollback delete all: " + table.CountAll());

    The messages indicate that everything works as expected until the very end, where the table has a count of 0, indicating the rollback did not undo the deletion.

    Read the article

  • Returning data from SQL Server reporting web service call

    - by user79339
    Hi, I am generating a report that contains a version number. The version number is stored in the DB and retrieved/incremented as part of the report generation. The only problem is, I am calling SSRS via a web service call, which returns the generated report as a byte array. Is there any way to get the version number out of this generated report? For example, to display a dialog that says "You generated Status Report, Version number 3". I tried passing in an output parameter and setting it inside the stored proc. It is modified when I execute the proc in SQL Management Studio, but not in the reporting studio. Or at least I can't seem to bind to the modified, post-execution value (using the expression "=Parameters!ReportVersion.Value"). Of course, I could get/increment the version number from the database myself before calling the SSRS web service and pass it along as a parameter to the report, but that might cause concurrency problems. On the whole, it just seems neater for the stored proc to access/generate a version number and return it to the reporting engine, which would generate the report with the version number and return the updated version number to the web service client. Any thoughts/ideas?
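
    On the concurrency worry: the get-and-increment can be made atomic on the SQL Server side, so fetching the version before calling SSRS is safe. A hedged sketch, assuming a versions table that the question does not show (names are invented):

        UPDATE ReportVersions
        SET    Version = Version + 1
        OUTPUT inserted.Version          -- returns the new value atomically
        WHERE  ReportName = @reportName;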

    Read the article

  • SQL Databases and table design/organization

    - by John McMullen
    (NOOB disclaimer) I'm working on a system (a type of map) that is accessed mostly via 3 fields: ID (auto-incremented), X coordinate, and Y coordinate. As it is right now, I have all data on the map stored in one table. Whenever the map display is loaded, it simply queries the database for contents at x and y, and the DB returns the data (the other fields in the same entry). If an item on the map is doing something, it has a flag saying so, plus an ID pointing into another table that holds that type of 'actions'. Essentially, all map data is stored in one table, and all actions of a certain type are stored in their own table. I'm a noob, and I'm wondering what the most effective/efficient structure for such a design is (a map that has items, and each item has stats/actions). I'm using PHP at the moment, with standard SQL queries to get my data. Should I split up the tables so that there are only x number of entries in a table (coordinate range limits)? Or should it just keep growing and growing? There are a lot of queries against the table, so I'm just trying to see what is best.
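
    For reference, a single table usually scales fine for this access pattern as long as the two coordinates are indexed together; splitting tables by coordinate range is rarely needed. A minimal sketch (names and bounds are illustrative):

        CREATE INDEX idx_map_items_xy ON map_items (x, y);

        -- fetch everything in the currently visible window
        SELECT *
        FROM map_items
        WHERE x BETWEEN 100 AND 200
          AND y BETWEEN 50 AND 150;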

    Read the article

  • Self referencing update SQL statement for Informix

    - by CheeseConQueso
    Need some Informix SQL... Courses get a regular grade, but their associated labs get a grade of 'LAB'. I need to update the table so that the lab grade matches the course grade. Also, if there is no corresponding course for a lab, it means the course was canceled; in that case, I want to place a flag value of 'X' for its grade. Example data before update:

        id  yr   sess crs_no   hrs            grd
        725 2009 FA   COLL101  3.000000000000 C
        725 2009 FA   ENGL021  3.000000000000 FI
        725 2009 FA   ENGL021L 1.000000000000 LAB
        725 2009 FA   ENGL031  3.000000000000 FNI
        725 2009 FA   ENGL031L 1.000000000000 LAB
        725 2009 FA   MATH010  3.000000000000 FNI
        725 2010 SP   AOTE101  3.000000000000 C
        725 2010 SP   ENGL021L 1.000000000000 LAB
        725 2010 SP   ENGL031  3.000000000000 FI
        725 2010 SP   ENGL031L 1.000000000000 LAB
        725 2010 SP   MATH010  3.000000000000 FNI
        726 2010 SP   SPAN101  3.000000000000 FN

    Example data after update:

        id  yr   sess crs_no   hrs            grd
        725 2009 FA   COLL101  3.000000000000 C
        725 2009 FA   ENGL021  3.000000000000 FI
        725 2009 FA   ENGL021L 1.000000000000 FI
        725 2009 FA   ENGL031  3.000000000000 FNI
        725 2009 FA   ENGL031L 1.000000000000 FNI
        725 2009 FA   MATH010  3.000000000000 FNI
        725 2010 SP   AOTE101  3.000000000000 C
        725 2010 SP   ENGL021L 1.000000000000 X
        725 2010 SP   ENGL031  3.000000000000 FI
        725 2010 SP   ENGL031L 1.000000000000 FI
        725 2010 SP   MATH010  3.000000000000 FNI
        726 2010 SP   SPAN101  3.000000000000 FN

    I worked out a solution for this, but it required a lot of on-the-fly composite foreign keys built from concatenating the id, yr, sess, and substring'd crs_no. My solution is not only overkill, but it has gaps in it and it takes too long to process. I know there is an easier way to do this, but I've gone so far down one road that I am having trouble thinking of a different approach.
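
    One possible shape for this as a single correlated update. This is an untested sketch: the table name grades is invented, and while NVL, SUBSTR, and LENGTH exist in Informix, the exact correlation syntax may need adjusting.

        UPDATE grades
        SET grd = (SELECT NVL(MAX(c.grd), 'X')
                   FROM grades c
                   WHERE c.id = grades.id
                     AND c.yr = grades.yr
                     AND c.sess = grades.sess
                     AND c.crs_no = SUBSTR(grades.crs_no, 1, LENGTH(grades.crs_no) - 1))
        WHERE grades.grd = 'LAB';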

    Read the article

  • SQL indexes for "not equal" searches

    - by bortzmeyer
    An SQL index lets me quickly find a string which matches my query. Now, I have to search a big table for the strings which do not match. Of course, the normal index does not help and I have to do a slow sequential scan:

        essais=> \d phone_idx
        Index "public.phone_idx"
         Column | Type
        --------+------
         phone  | text
        btree, for table "public.phonespersons"

        essais=> EXPLAIN SELECT person FROM PhonesPersons WHERE phone = '+33 1234567';
                                          QUERY PLAN
        -------------------------------------------------------------------------------
         Index Scan using phone_idx on phonespersons  (cost=0.00..8.41 rows=1 width=4)
           Index Cond: (phone = '+33 1234567'::text)
        (2 rows)

        essais=> EXPLAIN SELECT person FROM PhonesPersons WHERE phone != '+33 1234567';
                                      QUERY PLAN
        ----------------------------------------------------------------------
         Seq Scan on phonespersons  (cost=0.00..18621.00 rows=999999 width=4)
           Filter: (phone <> '+33 1234567'::text)
        (2 rows)

    I understand (see Mark Byers' very good explanations) that PostgreSQL can decide not to use an index when it sees that a sequential scan would be faster (for instance if almost all the tuples match). But here, "not equal" searches really are slower. Is there any way to make these "is not equal to" searches faster? Here is another example, to address Mark Byers' excellent remarks; the index is used for the '=' query (which returns the vast majority of tuples) but not for the '!=' query:

        essais=> EXPLAIN ANALYZE SELECT person FROM EmailsPersons WHERE tld(email) = 'fr';
                                                                  QUERY PLAN
        ------------------------------------------------------------------------------------------------------------------------------------
         Index Scan using tld_idx on emailspersons  (cost=0.25..4010.79 rows=97033 width=4) (actual time=0.137..261.123 rows=97110 loops=1)
           Index Cond: (tld(email) = 'fr'::text)
         Total runtime: 444.800 ms
        (3 rows)

        essais=> EXPLAIN ANALYZE SELECT person FROM EmailsPersons WHERE tld(email) != 'fr';
                                                              QUERY PLAN
        --------------------------------------------------------------------------------------------------------------------
         Seq Scan on emailspersons  (cost=0.00..27129.00 rows=2967 width=4) (actual time=1.004..1031.224 rows=2890 loops=1)
           Filter: (tld(email) <> 'fr'::text)
         Total runtime: 1037.278 ms
        (3 rows)

    DBMS is PostgreSQL 8.3 (but I can upgrade to 8.4).
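
    If the mismatching value is known in advance, one PostgreSQL option is a partial index that covers exactly the rows the '!=' query wants (this assumes tld() is IMMUTABLE, which it must already be to appear in tld_idx):

        CREATE INDEX emailspersons_not_fr_idx
            ON emailspersons (person)
            WHERE tld(email) <> 'fr';

        -- the planner can now satisfy the '!=' query from the partial index
        EXPLAIN ANALYZE SELECT person FROM EmailsPersons WHERE tld(email) <> 'fr';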

    Read the article

  • collation conflict SQL/SERVER 2008

    - by vikitor
    Hello, I've been going around this but I haven't found a solution to my problem. My SQL query is:

        SELECT dbo.Country.CtyRecID, dbo.Country.CtyShort, dbo.Notification.NotRecID,
               dbo.Notification.NotName, dbo.TemporalSuspension.TCtsCode,
               dbo.TemporalSuspension.TCtsCodeRecID, dbo.TaxPhylum.PhyName AS Taxon,
               dbo.TemporalSuspension.TCtsNotes, dbo.TemporalSuspension.TCtsRecID,
               dbo.TemporalSuspension.TCtsKgmRecID,
               CASE dbo.TemporalSuspension.TCtsKgmRecID
                   WHEN 1 THEN 'Animals' WHEN 2 THEN 'Plants' ELSE 'All'
               END AS Kingdom
        FROM dbo.TemporalSuspension
        INNER JOIN dbo.Notification
            ON dbo.TemporalSuspension.TCtsStartNotRecID = dbo.Notification.NotRecID
        INNER JOIN dbo.Country
            ON dbo.TemporalSuspension.TCtsCtyRecID = dbo.Country.CtyRecID
        INNER JOIN dbo.TaxPhylum
            ON dbo.TemporalSuspension.TCtsCodeRecID = dbo.TaxPhylum.PhyRecID
            AND dbo.TemporalSuspension.TCtsCode LIKE 'PHY'
        UNION ALL
        SELECT dbo.Country.CtyRecID, dbo.Country.CtyShort, dbo.Notification.NotRecID,
               dbo.Notification.NotName, dbo.TemporalSuspension.TCtsCode,
               dbo.TemporalSuspension.TCtsCodeRecID, dbo.TaxClass.ClaName AS Taxon,
               dbo.TemporalSuspension.TCtsNotes, dbo.TemporalSuspension.TCtsRecID,
               dbo.TemporalSuspension.TCtsKgmRecID,
               CASE dbo.TemporalSuspension.TCtsKgmRecID
                   WHEN 1 THEN 'Animals' WHEN 2 THEN 'Plants' ELSE 'All'
               END AS Kingdom
        FROM dbo.TemporalSuspension
        INNER JOIN dbo.Notification
            ON dbo.TemporalSuspension.TCtsStartNotRecID = dbo.Notification.NotRecID
        INNER JOIN dbo.Country
            ON dbo.TemporalSuspension.TCtsCtyRecID = dbo.Country.CtyRecID
        INNER JOIN dbo.TaxClass
            ON dbo.TemporalSuspension.TCtsCodeRecID = dbo.TaxClass.ClaRecID
            AND dbo.TemporalSuspension.TCtsCode LIKE 'CLA'

    But I don't understand why it doesn't work; I keep getting this error:

        Cannot resolve collation conflict for column 7 in SELECT statement.

    What's wrong? I've done this other times and I never got this problem. Thanks
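
    Column 7 is the Taxon column, so the usual diagnosis is that PhyName and ClaName carry different collations and UNION ALL cannot reconcile them. A hedged sketch of the common fix (DATABASE_DEFAULT is an assumption; any single explicit collation works):

        SELECT dbo.TaxPhylum.PhyName COLLATE DATABASE_DEFAULT AS Taxon
        FROM dbo.TaxPhylum
        UNION ALL
        SELECT dbo.TaxClass.ClaName COLLATE DATABASE_DEFAULT AS Taxon
        FROM dbo.TaxClass;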

    Read the article

  • SQL connection to database repeating

    - by user175084
    OK, now I am using the SQL database to get values from different tables... so I make the connection and get the values like this:

        DataTable dt = new DataTable();
        SqlConnection connection = new SqlConnection();
        connection.ConnectionString = ConfigurationManager.ConnectionStrings["XYZConnectionString"].ConnectionString;
        connection.Open();
        SqlCommand sqlCmd = new SqlCommand("SELECT * FROM Machines", connection);
        SqlDataAdapter sqlDa = new SqlDataAdapter(sqlCmd);
        sqlCmd.Parameters.AddWithValue("@node", node);
        sqlDa.Fill(dt);
        connection.Close();

    This is one query on the page, and I am calling many other queries on the page. So do I need to open and close the connection every time? Also, if not, this portion is common to all of them:

        DataTable dt = new DataTable();
        SqlConnection connection = new SqlConnection();
        connection.ConnectionString = ConfigurationManager.ConnectionStrings["XYZConnectionString"].ConnectionString;
        connection.Open();

    Can I put it in one function and call that instead? The code would look cleaner... I tried doing that but I get errors like:

        Connection does not exist in the current context.

    Any suggestions? Thanks

    Read the article

  • Combining two-part SQL query into one query

    - by user332523
    Hello, I have a SQL query that I'm currently solving by doing two queries. I am wondering if there is a way to do it in a single query that makes it more efficient. Consider two tables, Transactions and Transaction_Entries, defined below:

        Transactions
        - id
        - reference_number (varchar)

        Transaction_Entries
        - id
        - account_id
        - transaction_id (references Transactions table)

    Notes: there are multiple transaction entries per transaction, and some transactions are related and will have the same reference_number string. To get all transaction entries for account X, I would do:

        SELECT E.*, T.reference_number
        FROM Transaction_Entries E
        JOIN Transactions T ON (E.transaction_id = T.id)
        WHERE E.account_id = X

    The next part is the hard part. I want to find all related transactions, regardless of the account id. First I make a list of all the unique reference numbers I found in the previous result set. Then for each one, I query all the transactions that have that reference number. Assume that I hold all the rows from the previous query in PreviousResultSet:

        UniqueReferenceNumbers = GetUniqueReferenceNumbers(PreviousResultSet) // in Java
        foreach R in UniqueReferenceNumbers // in Java
            SELECT * FROM Transaction_Entries
            WHERE transaction_id IN (SELECT id FROM Transactions WHERE reference_number = R)

    Any suggestions how I can put this into a single efficient query?
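
    A single-statement version, as a sketch: use the account-X entries only to collect reference numbers, then join back out to every entry sharing one of them.

        SELECT e2.*, t2.reference_number
        FROM Transaction_Entries e2
        JOIN Transactions t2 ON t2.id = e2.transaction_id
        WHERE t2.reference_number IN (
            SELECT t.reference_number
            FROM Transaction_Entries e
            JOIN Transactions t ON t.id = e.transaction_id
            WHERE e.account_id = @X   -- account X, parameterized
        );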

    Read the article

  • Problem with SQL Server "EXECUTE AS"

    - by Vilx-
    I've got the following setup: there is a SQL Server DB with several tables that have triggers set on them (that collect history data). These triggers are CLR stored procedures with EXECUTE AS 'HistoryUser'. The HistoryUser user is a simple user in the database without a login. It has enough permissions to read from all tables and write to the history table. When I back up the DB and then restore it to another machine (a virtual machine in this case, but that does not matter), the triggers don't work anymore. In fact, no impersonation of the user works anymore. Even a simple statement such as this:

        exec ('select 3') as user='HistoryUser'

    produces an error:

        Cannot execute as the database principal because the principal "HistoryUser" does not exist, this type of principal cannot be impersonated, or you do not have permission.

    I read in MSDN that this can occur if the DB owner is a domain user, but it isn't. And even if I change it to anything else (their recommended solution), the problem remains. If I create another user without a login, I can use it for impersonation just fine. That is, this works:

        create user TestUser without login
        go
        exec ('select 3') as user='TestUser'

    I do not want to recreate all those triggers, so is there any way I can make the existing HistoryUser work? Bump: Sorry, but this is kinda urgent...
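
    For what it's worth, problems like this usually trace back to SID bookkeeping that a restore carries over from the old server. A hedged sketch of the standard post-restore checks (the database name is invented, and the question notes that changing the owner alone did not help, so this may only be part of the picture):

        -- re-point the restored database at a login that exists on this server
        ALTER AUTHORIZATION ON DATABASE::MyRestoredDb TO sa;

        -- CLR and impersonation contexts often also need the database marked
        -- trustworthy (or module signing) after a restore
        ALTER DATABASE MyRestoredDb SET TRUSTWORTHY ON;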

    Read the article

  • SQL query to calculate running group counts on time-phased data

    - by spong
    I have some data, like this:

        BUG  DATE                   STATUS
        ---- ---------------------- --------
        9012 18/03/2008 9:08:44 AM  OPEN
        9012 18/03/2008 9:10:03 AM  OPEN
        9012 28/03/2008 4:55:03 PM  RESOLVED
        9012 28/03/2008 5:25:00 PM  CLOSED
        9013 18/03/2008 9:12:59 AM  OPEN
        9013 18/03/2008 9:15:06 AM  RESOLVED
        9013 18/03/2008 9:16:44 AM  CLOSED
        9014 18/03/2008 9:17:54 AM  OPEN
        9014 18/03/2008 9:18:31 AM  RESOLVED
        9014 18/03/2008 9:19:30 AM  CLOSED
        9015 18/03/2008 9:22:40 AM  OPEN
        9015 18/03/2008 9:23:03 AM  RESOLVED
        9015 19/03/2008 12:27:08 PM CLOSED
        9016 18/03/2008 9:24:20 AM  OPEN
        9016 18/03/2008 9:24:35 AM  RESOLVED
        9016 19/03/2008 12:28:14 PM CLOSED
        9017 18/03/2008 9:25:47 AM  OPEN
        9017 18/03/2008 9:26:02 AM  RESOLVED
        9017 19/03/2008 12:30:30 PM CLOSED

    Which I would like to transform into something like this:

        DATE                   OPEN RESOLVED CLOSED
        ---------------------- ---- -------- ------
        18/03/2008 9:08:44 AM  1    0        0
        18/03/2008 9:12:59 AM  2    0        0
        18/03/2008 9:15:06 AM  1    1        0
        18/03/2008 9:16:44 AM  1    0        1
        18/03/2008 9:17:54 AM  2    0        1
        18/03/2008 9:18:31 AM  1    1        0
        18/03/2008 9:19:30 AM  1    0        2
        18/03/2008 9:22:40 AM  2    0        2
        18/03/2008 9:23:03 AM  1    1        2
        18/03/2008 9:24:20 AM  2    1        2
        18/03/2008 9:24:35 AM  1    2        2
        18/03/2008 9:25:47 AM  2    2        2
        18/03/2008 9:26:02 AM  1    3        2
        19/03/2008 12:27:08 PM 1    2        3
        19/03/2008 12:28:14 PM 1    1        4
        19/03/2008 12:30:30 PM 1    0        5
        28/03/2008 4:55:03 PM  0    1        5
        28/03/2008 5:25:00 PM  0    0        6

    i.e. keeping running counts of bugs in each status. This is easy enough to code up using cursors, but I'm wondering if any of you SQL gurus out there can help with a query to achieve this? Ideally for MySQL, but I'm curious to see anything that will work.
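
    A windowed sketch, assuming MySQL 8+ (which postdates the question) and a table named bug_history(bug, ts, status): treat each row as +1 on its new status and -1 on the bug's previous status, then take running sums.

        WITH changes AS (
            SELECT ts, status AS new_status,
                   LAG(status) OVER (PARTITION BY bug ORDER BY ts) AS old_status
            FROM bug_history
        )
        SELECT ts,
               SUM((new_status = 'OPEN')     - COALESCE(old_status = 'OPEN', 0))
                   OVER (ORDER BY ts) AS open_count,
               SUM((new_status = 'RESOLVED') - COALESCE(old_status = 'RESOLVED', 0))
                   OVER (ORDER BY ts) AS resolved_count,
               SUM((new_status = 'CLOSED')   - COALESCE(old_status = 'CLOSED', 0))
                   OVER (ORDER BY ts) AS closed_count
        FROM changes
        ORDER BY ts;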

    Read the article

  • Saving ntext data from SQL Server to file directory using asp

    - by April
    A variety of files (pdf, images, etc.) are stored in an ntext field on a MS SQL Server. I am not sure what type of data is in this field, other than it shows question marks and undefined characters; I am assuming it is binary. The script is supposed to iterate through the rows and extract and save these files to a temp directory. "filename" and "contenttype" are given, and "data" is whatever is in the ntext field. I have tried several solutions:

    1)

        data.SaveToFile "/temp/" & filename, 2

    Error: Object required: '????????????????????'

    2)

        File.WriteAllBytes "/temp/" & filename, data

    Error: Object required: 'File'. I have no idea how to import this, or the Server for MapPath. (Cue: what a noob!)

    3)

        Const adTypeBinary = 1
        Const adSaveCreateOverWrite = 2
        Dim BinaryStream
        Set BinaryStream = CreateObject("ADODB.Stream")
        BinaryStream.Type = adTypeBinary
        BinaryStream.Open
        BinaryStream.Write data
        BinaryStream.SaveToFile "C:\temp\" & filename, adSaveCreateOverWrite

    Error: Arguments are of the wrong type, are out of acceptable range, or are in conflict with one another.

    4)

        Response.ContentType = contenttype
        Response.AddHeader "content-disposition", "attachment;" & filename
        Response.BinaryWrite data
        Response.End

    This works, but the file should be saved to the server instead of popping up a save-as dialog. I am not sure if there is a way to save the response to a file. Thanks for shedding light on any of these problems!

    Read the article

  • Insert or Update using Oracle and PL/SQL

    - by Shane
    I have a PL/SQL function that performs an update/insert on an Oracle database that maintains a target total, and returns the difference between the existing value and the new value. Here is the code I have so far:

        FUNCTION calcTargetTotal(accountId varchar2, newTotal numeric) RETURN number is
            oldTotal   numeric(20,6);
            difference numeric(20,6);
        begin
            difference := 0;
            begin
                select value into oldTotal
                from target_total
                WHERE account_id = accountId
                for update of value;

                if (oldTotal != newTotal) then
                    update target_total set value = newTotal
                    WHERE account_id = accountId;
                    difference := newTotal - oldTotal;
                end if;
            exception
                when NO_DATA_FOUND then
                    begin
                        difference := newTotal;
                        insert into target_total (account_id, value)
                        values (accountId, newTotal);
                        -- sometimes a race condition occurs and this stmt fails
                        -- in those cases try to update again
                    exception
                        when DUP_VAL_ON_INDEX then
                            begin
                                difference := 0;
                                select value into oldTotal
                                from target_total
                                WHERE account_id = accountId
                                for update of value;

                                if (oldTotal != newTotal) then
                                    update target_total set value = newTotal
                                    WHERE account_id = accountId;
                                    difference := newTotal - oldTotal;
                                end if;
                            end;
                    end;
            end;
            return difference;
        end calcTargetTotal;

    This works as expected in unit tests with multiple threads, never failing. However, when loaded on a live system we have seen it fail with a stack trace looking like this:

        ORA-01403: no data found
        ORA-00001: unique constraint () violated
        ORA-01403: no data found

    The line numbers (which I have removed since they are meaningless out of context) verify that the first update fails due to no data, the insert fails due to uniqueness, and the second update fails with no data, which should be impossible. From what I have read in other threads, a MERGE statement is also not atomic and could suffer similar problems. Does anyone have any ideas how to prevent this from occurring?
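
    One hedged way to close the race window is the classic update-first upsert loop: keep retrying until one of the two statements sticks, so a row inserted or deleted between statements just sends the loop around again. A sketch (it omits the difference bookkeeping for brevity):

        LOOP
            UPDATE target_total SET value = newTotal
            WHERE account_id = accountId;
            EXIT WHEN SQL%ROWCOUNT > 0;
            BEGIN
                INSERT INTO target_total (account_id, value)
                VALUES (accountId, newTotal);
                EXIT;
            EXCEPTION
                WHEN DUP_VAL_ON_INDEX THEN
                    NULL;  -- lost the race; loop back and update again
            END;
        END LOOP;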

    Read the article

  • Sql Server Replication: Snapshot vs Merge

    - by Zyphrax
    Background information: let's say I have two database servers, both SQL Server 2008. One is in my LAN (ServerLocal); the other one is in a remote hosting environment (ServerRemote). I have created a database on ServerLocal and have an exact copy of that database on ServerRemote. The database on ServerRemote is part of a web application and I would like to keep its data up to date with the data in the database on ServerLocal. ServerLocal is able to communicate with ServerRemote; this is one-way traffic. Communication from ServerRemote to ServerLocal isn't available.

    Current solution: I thought it would be a nice solution to use replication, so I've made ServerLocal a publisher and subscriptions are pushed to ServerRemote. This works fine: when a snapshot is transferred to ServerRemote, the existing data is purged and the ServerRemote database is once again an exact replica of the database on ServerLocal.

    The problem: records that exist on ServerRemote but don't exist on ServerLocal are removed. This doesn't matter for most of my tables, but in some of my tables I'd like to keep the existing data (aspnet_users, for instance) and update the records only if necessary. What kind of replication fits my problem?

    Read the article

  • Sql Server 2005 multiple insert with c#

    - by bottlenecked
    Hello. I have a class named Entry declared like this:

        class Entry {
            string Id { get; set; }
            string Name { get; set; }
        }

    and then a method that will accept multiple such Entry objects for insertion into the database using ADO.NET:

        static void InsertEntries(IEnumerable<Entry> entries) {
            //build a SqlCommand object
            using (SqlCommand cmd = new SqlCommand()) {
                ...
                const string refcmdText = "INSERT INTO Entries (id, name) VALUES (@id{0},@name{0});";
                int count = 0;
                string query = string.Empty;
                //build a large query
                foreach (var entry in entries) {
                    query += string.Format(refcmdText, count);
                    cmd.Parameters.AddWithValue(string.Format("@id{0}", count), entry.Id);
                    cmd.Parameters.AddWithValue(string.Format("@name{0}", count), entry.Name);
                    count++;
                }
                cmd.CommandText = query;
                //and then execute the command
                ...
            }
        }

    And my question is this: should I keep using the above way of sending multiple insert statements (build a giant string of insert statements and their parameters and send it over the network), or should I keep an open connection and send a single insert statement for each Entry, like this:

        using (SqlCommand cmd = new SqlCommand()) {
            using (SqlConnection conn = new SqlConnection()) {
                //assign connection string and open connection
                ...
                cmd.Connection = conn;
                foreach (var entry in entries) {
                    cmd.Parameters.Clear(); //parameters must be cleared between iterations
                    cmd.CommandText = "INSERT INTO Entries (id, name) VALUES (@id,@name);";
                    cmd.Parameters.AddWithValue("@id", entry.Id);
                    cmd.Parameters.AddWithValue("@name", entry.Name);
                    cmd.ExecuteNonQuery();
                }
            }
        }

    What do you think? Will there be a performance difference in SQL Server between the two? Are there any other consequences I should be aware of? Thank you for your time!

    Read the article

  • Converting rows to Columns in SQL

    - by Ram
    I have a table (actually a view, but I've simplified my example to a table) which gives me some data like this:

        id CompanyName website
        1  Google      google.com
        2  Google      google.net
        3  Google      google.org
        4  Google      google.in
        5  Google      google.de
        6  Microsoft   Microsoft.com
        7  Microsoft   live.com
        8  Microsoft   bing.com
        9  Microsoft   hotmail.com

    I am looking to convert it to get a result like this:

        CompanyName website1      website2   website3   website4    website5  website6
        ----------- ------------- ---------- ---------- ----------- --------- --------
        Google      google.com    google.net google.org google.in   google.de NULL
        Microsoft   Microsoft.com live.com   bing.com   hotmail.com NULL      NULL

    I have looked into PIVOT, but it looks like the record (row) values cannot be dynamic (they can only be certain predefined values). Also, if there are more than 6 websites, I want to limit it to the first 6. A dynamic pivot makes sense, but would I have to incorporate it into my view? Is there a simpler solution for this? Here are the SQL scripts:

        CREATE TABLE [dbo].[Company](
            [id] [int] NULL,
            [CompanyName] [varchar](50) NULL,
            [website] [varchar](50) NULL
        ) ON [PRIMARY]
        GO

        insert into company values (1,'Google','google.com')
        insert into company values (2,'Google','google.net')
        insert into company values (3,'Google','google.org')
        insert into company values (4,'Google','google.in')
        insert into company values (5,'Google','google.de')
        insert into company values (6,'Microsoft','Microsoft.com')
        insert into company values (7,'Microsoft','live.com')
        insert into company values (8,'Microsoft','bing.com')
        insert into company values (9,'Microsoft','hotmail.com')
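
    Because the "first 6" cap fixes the column count, the pivot values can stay static. A sketch using ROW_NUMBER so each company's websites land in numbered slots:

        SELECT CompanyName,
               [1] AS website1, [2] AS website2, [3] AS website3,
               [4] AS website4, [5] AS website5, [6] AS website6
        FROM (
            SELECT CompanyName, website,
                   ROW_NUMBER() OVER (PARTITION BY CompanyName ORDER BY id) AS rn
            FROM dbo.Company
        ) src
        PIVOT (MAX(website) FOR rn IN ([1],[2],[3],[4],[5],[6])) p;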

    Read the article

  • sql server - framework 4 - IIS 7 weird sort from db to page

    - by ila
    I am experiencing strange behavior when reading a result set from the database in a calling method: the sort order of the rows is different from what the database should return. My farm:

    - database server: SQL Server 2008 on Windows Server 2008 64-bit
    - web server: a couple of load-balanced Windows Server 2008 64-bit machines running IIS 7

    The application runs in a v4.0 app pool, set to enable 32-bit applications. Here's a description of the problem:

    - a stored procedure is called that returns a result set sorted on a particular column
    - I can see the call to the SP through Profiler; if I run the statement myself I see correct sorting
    - the calling page gets the results and, before any further elaboration, logs the rows immediately after the SP execution
    - the results are in a completely different order (I cannot even tell if they are sorted in any way)

    Some details on the stored procedure:

    - it is called by code using a SqlDataAdapter
    - it has an output value (a count of the rows) that is read correctly
    - which sort field to use is passed as a parameter
    - it makes use of temp tables to collect data and perform the desired sort

    Any idea what I could check? The same code and same database work correctly in a test environment that is 32-bit and not load balanced.
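
    One thing worth ruling out, sketched below: if the procedure's final SELECT from the temp table has no ORDER BY of its own, the order in which the temp table was filled is never guaranteed to survive, even when it happens to on another server. (The table, column, and parameter names here are illustrative.)

        -- inside the stored procedure, the last statement should sort explicitly
        SELECT *
        FROM #results
        ORDER BY CASE WHEN @sortField = 'name' THEN name END,
                 CASE WHEN @sortField = 'code' THEN code END;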

    Read the article

  • How to write automated tests for SQL queries?

    - by James
    The current system we are adopting at work is to write some extremely complex queries which perform multiple calculations and have multiple joins/sub-queries. I don't think I am experienced enough to say whether this is correct or not, so I am going along with it and attempting to work within this system, as it has clear benefits. The problem we are having at the moment is that the person writing the queries makes a lot of mistakes and assumes everything is correct. We have now assigned a tester to analyse all of the queries, but this still proves extremely time-consuming and stressful. I would like to know how we could create an automated procedure (without specifically writing it with code, if possible, as I can work out how to do that the long way) to run a set of 10+ different inputs, verify the output data, and say whether the calculations are correct. I know I could seed the database with specific data, write a script in C# (the DB is SQL Server) and verify all the values coming out, but I would like to know what the official "standard" is, as my experience is lacking in this area and I would like to improve. I am happy to add more information if required; add a comment if necessary. Thank you. Edit: I am using C#.
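
    For a flavor of what such a test can look like entirely in T-SQL: load known inputs, run the query under test into a results table, and diff both ways against hand-computed expectations. The table and column names below are illustrative only.

        -- expected_results is seeded by hand from a worked example;
        -- actual_results is filled by the query under test
        SELECT CASE WHEN EXISTS (
                   SELECT id, total FROM expected_results
                   EXCEPT
                   SELECT id, total FROM actual_results
               ) OR EXISTS (
                   SELECT id, total FROM actual_results
                   EXCEPT
                   SELECT id, total FROM expected_results
               )
               THEN 'FAIL' ELSE 'PASS' END AS outcome;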

    Read the article

  • Get dragged / saved items state back from Sql Server

    - by user571507
    OK, I saw many posts on how to serialize the order of dragged items to get a hash, and they tell how to save it. Now the question is: how do I restore the dragged items to that order the next time the user logs in, using the hash value that I saved? E.g.:

        <ul class="list">
            <li id="id_1">
                <div class="item ui-corner-all ui-widget ui-widget-content"></div>
            </li>
            <li id="id_2">
                <div class="item ui-corner-all ui-widget ui-widget-content"></div>
            </li>
            <li id="id_3">
                <div class="item ui-corner-all ui-widget ui-widget-content"></div>
            </li>
            <li id="id_4">
                <div class="item ui-corner-all ui-widget"></div>
            </li>
        </ul>

    which on serialize will give "id[]=1&id[]=2&id[]=3&id[]=4". Now suppose I saved that to a SQL Server database in a single field called SortOrder. How do I get the items back into that order again? The code that makes these sortable is below, without which people wouldn't know which library I used to sort and serialize:

        <script type="text/javascript">
            $(document).ready(function() {
                $(".list li").css("cursor", "move");
                $(".list").sortable();
            });
        </script>

    Read the article

  • Best Method For High Data Availability for SQL Server

    - by omatase
    I have a web service that runs 24/7. Periodically it needs to refresh its database with data from another web service. There is a lot of data; it's tens of thousands of rows. (No, I don't mean this is a lot of data for SQL Server; I'm just trying to point out that I expect it to take some time to come down the pipe from the other web service.) The data refresh can take between 5 and 10 minutes. The actual data-update portion of that is between 1 and 2 minutes. This means the service would be down, for all intents and purposes, whenever consumers requested this type of data. I would like to implement a system where data is always available. The only thing that comes to mind is some type of system where I maintain two separate databases: I populate the inactive one, swapping it to active before populating the other one. I'm not sure I know the best way to do this. My current ideas revolve around two sets of the schema in a single database (using views to access the active set) or two databases, each with the same schema, with the application rotating between the two. Any suggestions from someone who has done something like this before?
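
    A variation on the view idea that keeps the swap to a single step is a synonym per table, re-pointed after each refresh. A hedged sketch (the object names are invented):

        -- readers always query dbo.Orders_Active; after refreshing the "blue"
        -- copy, swap the synonym over to it
        DROP SYNONYM dbo.Orders_Active;
        CREATE SYNONYM dbo.Orders_Active FOR dbo.Orders_Blue;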

    Read the article

  • SQL Server 2008 Stored Proc suddenly returns -1

    - by aaginor
    I use the following stored procedure from my SQL Server 2008 database to return a value to my C# program:

        ALTER PROCEDURE [dbo].[getArticleBelongsToCatsCount]
            @id int
        AS
        BEGIN
            SET NOCOUNT ON;
            DECLARE @result int;
            set @result = (SELECT COUNT(*) FROM art_in_cat WHERE child_id = @id);
            return @result;
        END

    I use a SqlCommand object to call this stored procedure:

        public int ExecuteNonQuery()
        {
            try
            {
                return _command.ExecuteNonQuery();
            }
            catch (Exception e)
            {
                Logger.instance.ErrorRoutine(e, "Text: " + _command.CommandText);
                return -1;
            }
        }

    Until recently, everything worked fine. All of a sudden, the stored procedure returned -1. At first I suspected that the ExecuteNonQuery call had caused an exception, but stepping through the function shows that no exception is thrown and the return value comes directly from:

        return _command.ExecuteNonQuery();

    I checked the following and they were as expected: the connection object was set to the correct database with correct access values, and the parameter for the SP was there with the right type, direction and value. Then I checked the SP via SQL Management Studio. I used the same parameter value as one for which my C# brings back -1, but in the manager the SP returns the correct value (by the way, I checked some more parameter values in my C# program and they ALL returned -1). It looks like the call from my C# program is somehow bugged, but as I don't get any error (it's just the -1 from the SP), I have no idea where to look for a solution.
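
    A note that may explain the symptom: ExecuteNonQuery's return value is the number of rows affected, which is -1 whenever no DML row count is reported (SET NOCOUNT ON guarantees that); it is not the procedure's RETURN value, which has to be read through a parameter with Direction = ReturnValue. A SQL-side alternative, as a sketch, is to hand the count back explicitly:

        ALTER PROCEDURE [dbo].[getArticleBelongsToCatsCount]
            @id int,
            @result int OUTPUT
        AS
        BEGIN
            SET NOCOUNT ON;
            SELECT @result = COUNT(*) FROM art_in_cat WHERE child_id = @id;
        END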

    Read the article

  • SQL with HAVING and temp table not working in Rails

    - by chrisrbailey
    I can't get the following SQL query to work quite right in Rails. It runs, but it fails to do the "HAVING row_number = 1" part, so I'm getting all the records instead of just the first record from each group. A quick description of the query: it finds hotel deals matching various criteria, in particular prioritizing paid deals, then picking the one with the highest dealrank. So if there are paid deals, it takes the highest of those (by dealrank) first; if there are no paid deals, it takes the highest-dealrank unpaid deal for each hotel. Using MAX(dealrank) or something similar does not work as a way to pick off the first row of each hotel group, which is why I have the enclosing temp table and the creation of the row_number column. Here's the query (the comparison in the CASE was eaten by the page's HTML; it is presumably cpc > 0):

        SELECT *,
               @num := if(@hid = hotel_id, @num + 1, 1) as row_number,
               @hid := hotel_id as dummy
        FROM (
            SELECT hotel_deals.*, affiliates.cpc,
                   (CASE when affiliates.cpc > 0 then 1 else 0 end) AS paid
            FROM hotel_deals
            INNER JOIN hotels ON hotels.id = hotel_deals.hotel_id
            LEFT OUTER JOIN affiliates ON affiliates.id = hotel_deals.affiliate_id
            WHERE ((hotel_deals.percent_savings = 0)
                   AND (hotel_deals.booking_deadline = ?))
            GROUP BY hotel_deals.hotel_id, paid DESC, hotel_deals.dealrank ASC
        ) temptable
        HAVING row_number = 1

    I'm currently using Rails' find_by_sql to do this, although I've also tried putting it into a regular find using the :select, :from, and :having parts (but :having won't get used unless you have a :group as well). If there is a different way to write this query, that'd be good to know too. I am using Rails 2.3.5 and MySQL 5.0.x.
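
    A hedged rearrangement that tends to behave more predictably on MySQL 5.0: initialize the user variables explicitly and filter in an outer query rather than with HAVING. The simple ordered subquery below stands in for the question's full join; the point is the init row and the outer WHERE.

        SELECT *
        FROM (
            SELECT d.*,
                   @num := IF(@hid = d.hotel_id, @num + 1, 1) AS rn,
                   @hid := d.hotel_id AS hid_dummy
            FROM (SELECT * FROM hotel_deals
                  ORDER BY hotel_id, dealrank) d,
                 (SELECT @num := 0, @hid := NULL) init
        ) ranked
        WHERE rn = 1;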

    Read the article

  • convert SQL Server stored procedure to MySQL

    - by karthik
    I need to convert the following SP from SQL Server to MySQL. I am new to MySQL... help needed.

        CREATE PROC InsertGenerator (@tableName varchar(100)) as
        --Declare a cursor to retrieve column specific information
        --for the specified table
        DECLARE cursCol CURSOR FAST_FORWARD FOR
        SELECT column_name, data_type FROM information_schema.columns
        WHERE table_name = @tableName
        OPEN cursCol
        DECLARE @string nvarchar(3000)     --for storing the first half
                                           --of INSERT statement
        DECLARE @stringData nvarchar(3000) --for storing the data
                                           --(VALUES) related statement
        DECLARE @dataType nvarchar(1000)   --data types returned
                                           --for respective columns
        SET @string='INSERT '+@tableName+'('
        SET @stringData=''
        DECLARE @colName nvarchar(50)
        FETCH NEXT FROM cursCol INTO @colName,@dataType
        IF @@fetch_status<>0
        begin
            print 'Table '+@tableName+' not found, processing skipped.'
            close curscol
            deallocate curscol
            return
        END
        WHILE @@FETCH_STATUS=0
        BEGIN
            IF @dataType in ('varchar','char','nchar','nvarchar')
            BEGIN
                SET @stringData=@stringData+'''''''''+ isnull('+@colName+','''')+'''''',''+'
            END
            ELSE if @dataType in ('text','ntext') --if the datatype
                                                  --is text or something else
            BEGIN
                SET @stringData=@stringData+'''''''''+ isnull(cast('+@colName+' as varchar(2000)),'''')+'''''',''+'
            END
            ELSE IF @dataType = 'money' --because money doesn't get converted
                                        --from varchar implicitly
            BEGIN
                SET @stringData=@stringData+'''convert(money,''''''+ isnull(cast('+@colName+' as varchar(200)),''0.0000'')+''''''),''+'
            END
            ELSE IF @dataType='datetime'
            BEGIN
                SET @stringData=@stringData+'''convert(datetime,''''''+ isnull(cast('+@colName+' as varchar(200)),''0'')+''''''),''+'
            END
            ELSE IF @dataType='image'
            BEGIN
                SET @stringData=@stringData+'''''''''+ isnull(cast(convert(varbinary,'+@colName+') as varchar(6)),''0'')+'''''',''+'
            END
            ELSE --presuming the data type is int,bit,numeric,decimal
            BEGIN
                SET @stringData=@stringData+'''''''''+ isnull(cast('+@colName+' as varchar(200)),''0'')+'''''',''+'
            END
            SET @string=@string+@colName+','
            FETCH NEXT FROM cursCol INTO @colName,@dataType
        END
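
    As a starting point for the port, not a finished translation: the cursor scaffolding in MySQL 5 looks like the sketch below, and the per-datatype string-building branches from the T-SQL body would go inside the loop.

        DELIMITER $$
        CREATE PROCEDURE InsertGenerator(IN in_tableName VARCHAR(100))
        BEGIN
            DECLARE done INT DEFAULT 0;
            DECLARE colName VARCHAR(64);
            DECLARE dataType VARCHAR(64);
            DECLARE cursCol CURSOR FOR
                SELECT column_name, data_type
                FROM information_schema.columns
                WHERE table_name = in_tableName
                  AND table_schema = DATABASE();
            DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

            OPEN cursCol;
            read_loop: LOOP
                FETCH cursCol INTO colName, dataType;
                IF done = 1 THEN
                    LEAVE read_loop;
                END IF;
                -- translate the per-datatype string building here
            END LOOP;
            CLOSE cursCol;
        END$$
        DELIMITER ;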

    Read the article
