Search Results

Search found 36186 results on 1448 pages for 'sql 11'.


  • What is a good automated data import method for SQL Server?

    - by Joel Potter
    I'm in the process of porting some SQL Server 2005 databases to SQL Server 2008. One of these databases has an associated import application (a Windows task) which uses SSIS with a DTS package to import a large dataset from an MS Access database nightly. In upgrading to SQL Server 2008, I discovered that I can't run the same console application that has been performing the imports, because the managed DTS DLL is missing in SQL Server 2008. The application is several years old and in need of a rewrite for various reasons; plus, I've been fairly unhappy with DTS in general. The original reason DTS was chosen was speed (a 5-minute import time compared to 30+ minutes for ADO.NET). The format of the data to import is out of my control (the client likes Access). I would also like to be able to run the import from a machine completely separate from the server hosting SQL Server, preferably with minimal SQL features installed. Options I've considered:
    1. Creating an Access application to connect to both databases (SQL Server and Access) and perform the import (Ugh!)
    2. Revisiting ADO.NET to see if the original implementation was poorly written.
    3. Updated SSIS packages.
    What other technologies should I be considering for this job?
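
    For what it's worth, one server-side option to weigh against rewritten SSIS packages is letting SQL Server pull straight from the .mdb with OPENROWSET. This is only a sketch: it assumes the ACE (or Jet) OLE DB provider is installed on the server, 'Ad Hoc Distributed Queries' is enabled, and the file path, table, and column names stand in for the real ones. It also runs on the server itself, so it only fits if the separate machine merely needs to trigger the job.

        -- Sketch only: requires the ACE (or Jet) OLE DB provider on the server
        -- and sp_configure 'Ad Hoc Distributed Queries', 1.
        INSERT INTO dbo.ImportTarget (Col1, Col2, Col3)
        SELECT Col1, Col2, Col3
        FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                        'C:\imports\nightly.mdb';'Admin';'',
                        'SELECT Col1, Col2, Col3 FROM SourceTable');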

    Read the article

  • Entity Framework 4 and SQL Compact 4: How to generate database?

    - by David Veeneman
    I am developing an app with Entity Framework 4 and SQL Compact 4, using a Model First approach. I have created my EDM, and now I want to generate a SQL Compact 4.0 database to act as a data store for the model. I bring up the Generate Database Wizard and click the New Connection button to create a connection for the generated file. The Choose Data Source dialog appears, but SQL Compact 4.0 does not appear in the list of available data sources. I am running VS 2010 SP1 (beta) and I have installed the VS 2010 Tools for SQL Compact 4.0. I can create a SQL Compact 4.0 data connection from the Server Explorer; it is only in the Generate Database Wizard that the 4.0 option doesn't appear. By the way, my SQL Compact 4.0 installation does include System.Data.SqlServerCe.Entity.dll, so I should have the SQL Compact components I need. Am I doing something incorrectly, or is this a bug? Does anyone have a fix or a workaround? Thanks for your help.

    Read the article

  • How to Store and Retrieve Images Using MsSQL (Server Management Studio)

    - by Joe Majewski
    I am having difficulties when trying to insert files into an MsSQL database. I'll try to break this down as best as I can:
    1. What data type should I be using to store image files (jpeg/png/gif/etc)? Right now my table is using the image data type, but I am curious if varbinary would be a better option.
    2. How would I go about inserting the image into the database? Does Microsoft SQL Server Management Studio have any built-in functions that allow insertions of files into tables? If so, how is that done? Also, how could this be done through the use of an HTML form, with PHP handling the input data and placing it into the table?
    3. How would I fetch the image from the table and display it on the page? I understand how to SELECT the cell's contents, but how would I go about translating that into a picture? Would I have to have a header (Content type: image/jpeg)?
    I have no problem doing any of these things with MySQL, but the MsSQL environment is still new to me, and I am working on a project for my job that requires the use of stored procedures to grab various data. Any and all help is appreciated. Thank you very much for your responses!
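
    This is only a sketch, not an authoritative recipe: varbinary(max) is generally suggested over the deprecated image type, and the table, column names, and file path below are invented for illustration (the OPENROWSET path is read on the SQL Server machine, not the client).

        CREATE TABLE dbo.Pictures (
            PictureID int IDENTITY(1,1) PRIMARY KEY,
            FileName  nvarchar(260)  NOT NULL,
            MimeType  varchar(50)    NOT NULL,   -- e.g. 'image/jpeg', handy for the Content-Type header
            Data      varbinary(max) NOT NULL
        );

        -- Loading a file from Management Studio:
        INSERT INTO dbo.Pictures (FileName, MimeType, Data)
        SELECT 'logo.jpg', 'image/jpeg', BulkColumn
        FROM OPENROWSET(BULK N'C:\images\logo.jpg', SINGLE_BLOB) AS img;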

    Read the article

  • Delaying LINQ to SQL Select Query Execution

    - by Maxim Z.
    I'm building an ASP.NET MVC site that uses LINQ to SQL. In my search method that has some required and some optional parameters, I want to build a LINQ query while testing for the existence of those optional parameters. Here's what I'm currently thinking:

        using (var db = new DBDataContext())
        {
            IQueryable<Listing> query = null;

            // Handle required parameter
            query = db.Listings.Where(l => l.Lat >= form.bounds.extent1.latitude
                                        && l.Lat <= form.bounds.extent2.latitude);

            // Handle optional parameter
            if (numStars != null)
                query = query.Where(l => l.Stars == (int)numStars);

            // Other parameters...

            // Execute query (does this happen here?)
            var result = query.ToList();

            // Process query...
        }

    Will this implementation "bundle" the where clauses and then execute the bundled query? If not, how should I implement this feature? Also, is there anything else that I can improve? Thanks in advance.

    Read the article

  • Two radically different queries against 4 mil records execute in the same time - one uses brute force.

    - by IanC
    I'm using SQL Server 2008. I have a table with over 3 million records, which is related to another table with a million records. I have spent a few days experimenting with different ways of querying these tables. I have it down to two radically different queries, both of which take 6s to execute on my laptop. The first query uses a brute-force method of evaluating possibly likely matches, and removes incorrect matches via aggregate summation calculations. The second gets all possibly likely matches, then removes incorrect matches via an EXCEPT query that uses two dedicated indexes to find the low and high mismatches. Logically, one would expect the brute force to be slow and the indexed one to be fast. Not so. And I have experimented heavily with indexes until I got the best speed. Further, the brute-force query doesn't require as many indexes, which means that technically it would yield better overall system performance. Below are the two execution plans. If you can't see them, please let me know and I'll re-post them in landscape orientation / mail them to you.
    Brute-force query: (execution plan image)
    Index-based exception query: (execution plan image)
    My question is: based on the execution plans, which one looks more efficient? I realize that things may change as my data grows.
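
    Graphical plans aside, one quick way to put the two candidates on equal footing is to capture logical reads and CPU for each; this is a generic sketch, with the actual queries left as placeholders.

        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;

        -- Run candidate 1 (brute-force query) here
        -- Run candidate 2 (index-based EXCEPT query) here

        SET STATISTICS IO OFF;
        SET STATISTICS TIME OFF;
        -- Compare the 'logical reads' and 'CPU time' figures in the Messages tab;
        -- the query with fewer logical reads usually scales better as the data grows.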

    Read the article

  • Should I advocate migrating from Access to (My)SQL?

    - by HotOil
    Hi: We have a Windows MFC app that is written against an Access database on a company server. The db is not that big: 19 MB. There are at most 2-3 users accessing it at any one time. It is used in a factory environment where access speed (or lack thereof) over the intranet becomes noticeable, as it is part of the manufacturing time for our widgets. The scenario is this: as each widget is completed, it gets a record in the db. By the end of the year, the db is larger and searching for a record takes longer and longer. The solution so far has been to manually move older records to an archival table about once a year. We are reworking other portions of this app right now, and it would be a good time to move to another db if we are going to do it. It is my understanding that if we were using SQL, the search time would not go up as the table gets bigger, because the entire .mdb does not have to be sent over the network each time. Is this correct? Does anyone have any insight about whether it could be worth the trouble (time and money) of migrating to a new db, or should I just add more functionality to the application we have now, maybe automatically purge the older records from time to time, and add additional facilities to the app to get at the older records when needed? Thanks for any wisdom you can share.

    Read the article

  • What are the advantages of a query using a derived table(s) over a query not using them?

    - by AspOnMyNet
    I know how derived tables are used, but I still can't really see any real advantages of using them. For example, in the following article, http://techahead.wordpress.com/2007/10/01/sql-derived-tables/, the author tried to show the benefits of a query using a derived table over a query without one. In his example we want to generate a report that shows the total number of orders each customer placed in 1996, and we want this result set to include all customers, including those that didn't place any orders that year and those that have never placed any orders at all (he's using the Northwind database). But when I compare the two queries, I fail to see any advantages of the query using a derived table (if nothing else, use of a derived table doesn't appear to simplify our code, at least not in this example).

    Regular query:

        SELECT C.CustomerID, C.CompanyName, COUNT(O.OrderID) AS TotalOrders
        FROM Customers C
        LEFT OUTER JOIN Orders O
            ON C.CustomerID = O.CustomerID AND YEAR(O.OrderDate) = 1996
        GROUP BY C.CustomerID, C.CompanyName

    Query using a derived table:

        SELECT C.CustomerID, C.CompanyName, COUNT(dOrders.OrderID) AS TotalOrders
        FROM Customers C
        LEFT OUTER JOIN (SELECT * FROM Orders WHERE YEAR(Orders.OrderDate) = 1996) AS dOrders
            ON C.CustomerID = dOrders.CustomerID
        GROUP BY C.CustomerID, C.CompanyName

    Perhaps this just wasn't a good example, so could you show me an example where the benefits of a derived table are more obvious? Thanks.
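
    One place the benefit tends to show up is when the derived table aggregates before the join, so the outer query no longer needs a GROUP BY over every customer column; a hedged sketch against the same Northwind-style tables:

        SELECT C.CustomerID, C.CompanyName,
               ISNULL(O.TotalOrders, 0) AS TotalOrders
        FROM Customers C
        LEFT OUTER JOIN (
            SELECT CustomerID, COUNT(*) AS TotalOrders
            FROM Orders
            WHERE YEAR(OrderDate) = 1996
            GROUP BY CustomerID
        ) AS O ON C.CustomerID = O.CustomerID;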

    Read the article

  • Convert SQL to LINQ in MVC3 with Ninject

    - by Jeff
    I'm using MVC3 and still learning LINQ. I'm having some trouble trying to convert a query to LINQ to Entities. I want to return an employee object.

        SELECT E.EmployeeID, E.FirstName, E.LastName,
               MAX(EO.EmployeeOperationDate) AS "Last Operation"
        FROM Employees E
        INNER JOIN EmployeeStatus ES ON E.EmployeeID = ES.EmployeeID
        INNER JOIN EmployeeOperations EO ON ES.EmployeeStatusID = EO.EmployeeStatusID
        INNER JOIN Teams T ON T.TeamID = ES.TeamID
        WHERE T.TeamName = 'MyTeam'
        GROUP BY E.EmployeeID, E.FirstName, E.LastName
        ORDER BY E.FirstName, E.LastName

    What I have is a few tables, but I need to get only the newest status based on the EmployeeOperationDate. This seems to work fine in SQL. I'm also using Ninject and set my query to return IEnumerable. I played around with the group by option, but it then returns IGroupable. Any guidance on converting and returning the proper object type would be appreciated.

    Edit: I started writing this out in LINQ, but I'm not sure how to properly return the correct type or cast this.

        public IQueryable<Employee> GetEmployeesByTeam(int teamID)
        {
            var q = from E in context.Employees
                    join ES in context.EmployeeStatuses on E.EmployeeID equals ES.EmployeeID
                    join EO in context.EmployeeOperations on ES.EmployeeStatusID equals EO.EmployeeStatusID
                    join T in context.Teams on ES.TeamID equals T.TeamID
                    where T.TeamName == "MyTeam"
                    group E by E.EmployeeID into G
                    select G;
            return q;
        }

    Edit 2: This seems to work for me.

        public IQueryable<Employee> GetEmployeesByTeam(int teamID)
        {
            var q = from E in context.Employees
                    join ES in context.EmployeeStatuses on E.EmployeeID equals ES.EmployeeID
                    join EO in context.EmployeeOperations.OrderByDescending(eo => eo.EmployeeOperationDate)
                        on ES.EmployeeStatusID equals EO.EmployeeStatusID
                    join T in context.Teams on ES.TeamID equals T.TeamID
                    where T.TeamID == teamID
                    group E by E.EmployeeID into G
                    select G.FirstOrDefault();
            return q;
        }
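
    For comparison only, and not the LINQ answer the question is after: the "newest operation per employee" part can also be expressed on the T-SQL side with ROW_NUMBER instead of GROUP BY. This is a sketch against the schema shown above.

        WITH Ranked AS (
            SELECT E.EmployeeID, E.FirstName, E.LastName, EO.EmployeeOperationDate,
                   ROW_NUMBER() OVER (PARTITION BY E.EmployeeID
                                      ORDER BY EO.EmployeeOperationDate DESC) AS rn
            FROM Employees E
            INNER JOIN EmployeeStatus ES ON E.EmployeeID = ES.EmployeeID
            INNER JOIN EmployeeOperations EO ON ES.EmployeeStatusID = EO.EmployeeStatusID
            INNER JOIN Teams T ON T.TeamID = ES.TeamID
            WHERE T.TeamName = 'MyTeam'
        )
        SELECT EmployeeID, FirstName, LastName,
               EmployeeOperationDate AS LastOperation
        FROM Ranked
        WHERE rn = 1
        ORDER BY FirstName, LastName;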

    Read the article

  • What could possibly be different between the table in a DataContext and an IQueryable<Table> when do

    - by Nate Bross
    I have a table where I need to do a case-insensitive search on a text field. If I run this query in LINQPad directly on my database, it works as expected:

        Table.Where(tbl => tbl.Title.Contains("StringWithAnyCase"))

    In my application, I've got a repository which exposes IQueryable objects and does some initial filtering. It looks like this:

        var dc = new MyDataContext();

        public IQueryable<Table> GetAllTables()
        {
            var ret = dc.Tables.Where(t => t.IsActive == true);
            return ret;
        }

    In the controller (it's an MVC app) I use code like this in an attempt to mimic the LINQPad query:

        var rpo = new RepositoryOfTable();
        var tables = rpo.GetAllTables();

        // For some reason, this does a CASE SENSITIVE search, which is NOT what I want.
        tables = tables.Where(tbl => tbl.Title.Contains("StringWithAnyCase"));

        return View(tables);

    The column is defined as an nvarchar(50) in SQL Server 2008. Any help or guidance is greatly appreciated!
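
    The difference usually comes down to the collation the comparison runs under. As a point of reference, this hedged T-SQL sketch forces a case-insensitive match on a column whose collation may be case-sensitive; the table and column names are placeholders.

        SELECT *
        FROM dbo.Tables
        WHERE Title COLLATE SQL_Latin1_General_CP1_CI_AS
              LIKE '%StringWithAnyCase%';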

    Read the article

  • delete row from result set in web sql with javascript

    - by Kaijin
    I understand that the result set from Web SQL isn't quite an array, more of an object? I'm cycling through a result set, and to speed things up I'd like to remove a row once it's been found. I've tried "delete" and "splice"; the former does nothing and the latter throws an error. Here's a piece of what I'm trying to do; notice the delete on line 18:

        function selectFromReverse(reverseRay, suggRay) {
            var reverseString = reverseRay.toString();
            db.transaction(function (tx) {
                tx.executeSql('SELECT votecount, comboid FROM counterCombos WHERE comboid IN (' + reverseString + ') AND votecount>0', [],
                    function (tx, results) {
                        processSelectFromReverse(results, suggRay);
                    });
            }, function () { onError });
        }

        function processSelectFromReverse(results, suggRay) {
            var i = suggRay.length;
            while (i--) {
                var j = results.rows.length;
                while (j--) {
                    console.log('searching');
                    var found = 0;
                    if (suggRay[i].reverse == results.rows.item(j).comboid) {
                        delete results.rows.item(j);
                        console.log('found');
                        found++;
                        break;
                    }
                }
                if (found == 0) {
                    console.log('lost');
                }
            }
        }

    Read the article

  • Getting the first of a GROUP BY clause in SQL

    - by Michael Bleigh
    I'm trying to implement single-column regionalization for a Rails application and I'm running into some major headaches with a complex SQL need. For this system, a region can be represented by a country code (e.g. us), a continent code that is uppercase (e.g. NA), or it can be NULL, indicating the "default" information. I need to group these items by some relevant information such as a foreign key (we'll call it external_id). Given a country and its continent, I need to be able to select only the most specific region available. So if records exist with the country code, I select them. If not, I want records with the continent code. If not that, I want records with a NULL code so I can receive the default values. So far I've figured that I may be able to use a generated CASE statement to get an arbitrary sort order. Something like this:

        SELECT *, CASE region WHEN 'us' THEN 1 WHEN 'NA' THEN 2 ELSE 3 END AS region_sort
        FROM my_table
        WHERE region IN ('us','NA') OR region IS NULL
        GROUP BY external_id
        ORDER BY region_sort

    The problem is that without an aggregate function, the actual data returned by the GROUP BY for a given row seems to be untameable. How can I massage this query to make it return only the first record of the region_sort ordered groups?
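
    If the database offers window functions (PostgreSQL, SQL Server, MySQL 8+), one common shape for "first row per group by a custom order" is ROW_NUMBER over a partition; this is just a sketch against the columns named above.

        SELECT *
        FROM (
            SELECT t.*,
                   ROW_NUMBER() OVER (
                       PARTITION BY external_id
                       ORDER BY CASE region WHEN 'us' THEN 1 WHEN 'NA' THEN 2 ELSE 3 END
                   ) AS region_rank
            FROM my_table t
            WHERE region IN ('us', 'NA') OR region IS NULL
        ) ranked
        WHERE region_rank = 1;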

    Read the article

  • Saving multiple items per single database cell...

    - by eugeneK
    Hi, I have a countries list. Each user can check multiple countries. Once saved, this "user country list" will be used to determine whether other users fit into the countries a certain user chose. The question is what would be the most efficient approach to this problem. One idea is to save the user's selection as a delimited list, like Canada,USA,France..., in a single varchar(max) field. The problem with that is, once a user from Germany enters a page I perform this check on, searching for Germany would mean fetching all the items and splitting each field to check against the value, or using SQL LIKE, which again is pretty damn slow. If you have a better solution or some tips, I would be glad to hear them. Just to make sure: many users will have their own selections of countries from which (and only from which) they want users to land on their page, while millions of users will reach those pages. So the faster the approach, the better. Technology: MSSQL and ASP.NET. Thanks.
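
    A hedged alternative to the delimited varchar column is a junction table keyed on both IDs, which keeps the "does this visitor's country match?" check to a single indexed seek; the table, column, and parameter names below are invented for the sketch.

        CREATE TABLE dbo.UserCountries (
            UserID    int NOT NULL,
            CountryID int NOT NULL,
            CONSTRAINT PK_UserCountries PRIMARY KEY (UserID, CountryID)
        );

        -- Does the visitor's country appear in the page owner's selection?
        SELECT CASE WHEN EXISTS (
                   SELECT 1
                   FROM dbo.UserCountries
                   WHERE UserID = @PageOwnerID
                     AND CountryID = @VisitorCountryID
               ) THEN 1 ELSE 0 END AS IsAllowed;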

    Read the article

  • ASP.net MVC Linq-To-SQL Many-To-Many Field Binding

    - by user336858
    Hi there, The short version of this question is: "Is there a way to gracefully handle database insertion for an object that has a many-to-many field that has been set up in a partial class?" Apologies if it's been asked before.

    Example: suppose I have a typical MVC setup with the tables:

        Posts {PostID, ...}
        Categories {CategoryID, ...}

    A post can have more than one category, and a category can identify more than one post. Thus suppose further that I need an extra table:

        PostCategories {PostID, CategoryID, ...}

    This handles the many-to-many relationship between posts and categories. As far as I know, there's no way to do this in LINQ to SQL right now, so I have to shoehorn it in by adding a partial Post class to the project to add that functionality. Something like:

        public partial class Post
        {
            public IEnumerable<Category> Categories
            {
                get { ... }
                set { ... }
            }
        }

    So I can now create a "Create" view that automatically populates a "Categories" UI item. This is where the trouble starts. So here's my question: how do you get automatic object model binding to work cleanly with an object that has a many-to-many relationship to control? The workaround that makes many-to-many relationships possible relies on the Post object having a PostID in order to be associated with CategoryID(s), which is only issued after the Post object has been submitted for validation and insertion. Bit of a Catch-22 here. Any terminology, links, or tips you can provide would be tremendously helpful!

    Read the article

  • SQL Server CE rollback does not undo delete.

    - by INTPnerd
    I am using SQL Server CE 3.5 and C# with the .NET Compact Framework 3.5. In my code I am inserting a row, then starting a transaction, then deleting that row from a table, and then doing a rollback on that transaction. But this does not undo the deletion. Why not? Here is my code:

        SqlCeConnection conn = ConnectionSingleton.Instance;
        conn.Open();
        UsersTable table = new UsersTable();
        table.DeleteAll();
        MessageBox.Show("user count in beginning after delete: " + table.CountAll());
        table.Insert(new User() { Id = 0, IsManager = true, Pwd = "1234", Username = "Me" });
        MessageBox.Show("user count after insert: " + table.CountAll());
        SqlCeTransaction transaction = conn.BeginTransaction();
        table.DeleteAll();
        transaction.Rollback();
        transaction.Dispose();
        MessageBox.Show("user count after rollback delete all: " + table.CountAll());

    The messages indicate that everything works as expected until the very end, where the table has a count of 0, indicating the rollback did not undo the deletion.

    Read the article

  • Returning data from SQL Server reporting web service call

    - by user79339
    Hi, I am generating a report that contains a version number. The version number is stored in the DB and retrieved/incremented as part of the report generation. The only problem is, I am calling SSRS via a web service call, which returns the generated report as a byte array. Is there any way to get the version number out of this generated report? For example, to display a dialog that says "You generated Status Report, Version number 3". I tried passing in an output parameter and setting it inside the stored proc. It's modified when I execute it in SQL Management Studio, but not in the reporting studio. Or at least I can't seem to bind to the modified, post-execution value (using the expression "=Parameters!ReportVersion.Value"). Of course, I could get/increment the version number from the database myself before calling the SSRS web service and pass it along as a parameter to the report, but that might cause concurrency problems. On the whole, it just seems neater for the stored proc to access/generate a version number and return it to the reporting engine, which will generate the report with the version number and return the updated version number to the web service client. Any thoughts/ideas?

    Read the article

  • Which fieldtype is best for storing PRICE values?

    - by BerggreenDK
    Hi there, I am wondering what the best "price field" is in MSSQL for a shop-like structure. Looking at this overview, http://www.teratrax.com/sql_guide/data_types/sql_server_data_types.html, we have data types called money and smallmoney, then we have decimal/numeric, and lastly float and real. Name, memory/disk usage, and value ranges:

        Money:      8 bytes (values: -922,337,203,685,477.5808 to +922,337,203,685,477.5807)
        Smallmoney: 4 bytes (values: -214,748.3648 to +214,748.3647)
        Decimal:    9 [default, min. 5] bytes (values: -10^38 +1 to 10^38 -1)
        Float:      8 bytes (values: -1.79E+308 to 1.79E+308)
        Real:       4 bytes (values: -3.40E+38 to 3.40E+38)

    My question is: is it really wise to store price values in those types? What about, e.g., INT?

        Int: 4 bytes (values: -2,147,483,648 to 2,147,483,647)

    Let's say a shop uses dollars. They have cents, but I don't see prices like $49.2142342, so using a lot of decimals to show cents seems a waste of SQL bandwidth. Secondly, most shops wouldn't show any prices near 200,000,000 (not in normal webshops at least, unless someone is trying to sell me a famous tower in Paris). So why not go for an INT? An int is fast, it's only 4 bytes, and you can easily handle decimals by saving values in cents instead of dollars and then dividing when you present the values. The other approach would be to use smallmoney, which is 4 bytes too, but this requires the math part of the CPU to do the calculation, whereas int stays in integer arithmetic; on the downside, you will need to divide every single outcome. Are there any currency-related problems with regional settings when using smallmoney/money fields? What will these be translated to in C#/.NET? Any pros/cons? Go for integer prices, or smallmoney, or something else? What does your experience tell you?
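
    For what it's worth, a common middle ground is an exact decimal type, which avoids both floating-point rounding and the divide-by-100 bookkeeping of storing cents in an int. A small sketch; the table and the chosen precision are illustrative, not a recommendation for any particular shop.

        CREATE TABLE dbo.Products (
            ProductID int IDENTITY(1,1) PRIMARY KEY,
            Name      nvarchar(100)  NOT NULL,
            Price     decimal(19, 4) NOT NULL   -- exact; surfaces as System.Decimal in .NET
        );

        -- Storing cents in an int instead would look like:
        --   PriceCents int NOT NULL   -- 4999 = $49.99, divided by 100 at display time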

    Read the article

  • Self referencing update SQL statement for Informix

    - by CheeseConQueso
    Need some Informix SQL... Courses get a regular grade, but their associated labs get a grade of 'LAB'. I need to update the table so that the lab grade matches the course grade. Also, if there is no corresponding course for a lab, it means the course was canceled. In that case, I want to place a flag value of 'X' for its grade.

    Example data before update:

        id  yr   sess crs_no   hrs            grd
        725 2009 FA   COLL101  3.000000000000 C
        725 2009 FA   ENGL021  3.000000000000 FI
        725 2009 FA   ENGL021L 1.000000000000 LAB
        725 2009 FA   ENGL031  3.000000000000 FNI
        725 2009 FA   ENGL031L 1.000000000000 LAB
        725 2009 FA   MATH010  3.000000000000 FNI
        725 2010 SP   AOTE101  3.000000000000 C
        725 2010 SP   ENGL021L 1.000000000000 LAB
        725 2010 SP   ENGL031  3.000000000000 FI
        725 2010 SP   ENGL031L 1.000000000000 LAB
        725 2010 SP   MATH010  3.000000000000 FNI
        726 2010 SP   SPAN101  3.000000000000 FN

    Example data after update:

        id  yr   sess crs_no   hrs            grd
        725 2009 FA   COLL101  3.000000000000 C
        725 2009 FA   ENGL021  3.000000000000 FI
        725 2009 FA   ENGL021L 1.000000000000 FI
        725 2009 FA   ENGL031  3.000000000000 FNI
        725 2009 FA   ENGL031L 1.000000000000 FNI
        725 2009 FA   MATH010  3.000000000000 FNI
        725 2010 SP   AOTE101  3.000000000000 C
        725 2010 SP   ENGL021L 1.000000000000 X
        725 2010 SP   ENGL031  3.000000000000 FI
        725 2010 SP   ENGL031L 1.000000000000 FI
        725 2010 SP   MATH010  3.000000000000 FNI
        726 2010 SP   SPAN101  3.000000000000 FN

    I worked out a solution for this, but it required a lot of on-the-fly composite foreign keys built from concatenating the id, yr, sess, and substring'd crs_no. My solution is not only overkill, but it has gaps in it and it takes too long to process. I know there is an easier way to do this, but I've gone so far down one road that I am having trouble thinking of a different approach.
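
    As a sketch of the logic only, written in the T-SQL flavor used elsewhere on this page (so it would still need translating into Informix syntax), with the table name grades made up since the question doesn't give one: each lab row looks up the course row that shares its id/yr/sess and whose crs_no is the lab's crs_no without the trailing 'L', falling back to 'X' when no such course exists.

        UPDATE lab
        SET lab.grd = COALESCE(crs.grd, 'X')
        FROM grades AS lab
        LEFT JOIN grades AS crs
               ON  crs.id     = lab.id
               AND crs.yr     = lab.yr
               AND crs.sess   = lab.sess
               AND crs.crs_no = LEFT(lab.crs_no, LEN(lab.crs_no) - 1)
        WHERE lab.grd = 'LAB';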

    Read the article

  • SQL indexes for "not equal" searches

    - by bortzmeyer
    A SQL index lets me quickly find the strings which match my query. Now, I have to search a big table for the strings which do not match. Of course, the normal index does not help and I have to do a slow sequential scan:

        essais=> \d phone_idx
        Index "public.phone_idx"
         Column | Type
        --------+------
         phone  | text
        btree, for table "public.phonespersons"

        essais=> EXPLAIN SELECT person FROM PhonesPersons WHERE phone = '+33 1234567';
                                          QUERY PLAN
        -------------------------------------------------------------------------------
         Index Scan using phone_idx on phonespersons  (cost=0.00..8.41 rows=1 width=4)
           Index Cond: (phone = '+33 1234567'::text)
        (2 rows)

        essais=> EXPLAIN SELECT person FROM PhonesPersons WHERE phone != '+33 1234567';
                                      QUERY PLAN
        ----------------------------------------------------------------------
         Seq Scan on phonespersons  (cost=0.00..18621.00 rows=999999 width=4)
           Filter: (phone <> '+33 1234567'::text)
        (2 rows)

    I understand (see Mark Byers' very good explanations) that PostgreSQL can decide not to use an index when it sees that a sequential scan would be faster (for instance if almost all the tuples match). But, here, "not equal" searches are really slower. Any way to make these "is not equal to" searches faster? Here is another example, to address Mark Byers' excellent remarks. The index is used for the '=' query (which returns the vast majority of tuples) but not for the '!=' query:

        essais=> EXPLAIN ANALYZE SELECT person FROM EmailsPersons WHERE tld(email) = 'fr';
                                                               QUERY PLAN
        ------------------------------------------------------------------------------------------------------------------------------------
         Index Scan using tld_idx on emailspersons  (cost=0.25..4010.79 rows=97033 width=4) (actual time=0.137..261.123 rows=97110 loops=1)
           Index Cond: (tld(email) = 'fr'::text)
         Total runtime: 444.800 ms
        (3 rows)

        essais=> EXPLAIN ANALYZE SELECT person FROM EmailsPersons WHERE tld(email) != 'fr';
                                                            QUERY PLAN
        --------------------------------------------------------------------------------------------------------------------
         Seq Scan on emailspersons  (cost=0.00..27129.00 rows=2967 width=4) (actual time=1.004..1031.224 rows=2890 loops=1)
           Filter: (tld(email) <> 'fr'::text)
         Total runtime: 1037.278 ms
        (3 rows)

    The DBMS is PostgreSQL 8.3 (but I can upgrade to 8.4).
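
    One avenue, offered only as a sketch: when the excluded value covers most rows (as in the tld example, where non-'fr' rows are the small minority), a partial index that stores just the rows the '!=' query wants can turn the sequential scan into an index scan. This assumes tld() is declared IMMUTABLE so it can appear in the index predicate.

        -- Index only the minority of rows where tld(email) <> 'fr';
        -- the planner can then answer the "not French" query from this index.
        CREATE INDEX emailspersons_not_fr_idx
            ON EmailsPersons (person)
            WHERE tld(email) <> 'fr';

        EXPLAIN ANALYZE
        SELECT person FROM EmailsPersons WHERE tld(email) <> 'fr';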

    Read the article

  • collation conflict SQL/SERVER 2008

    - by vikitor
    Hello, I've been going around this but I haven't found a solution for my problem. My SQL query is:

        SELECT dbo.Country.CtyRecID, dbo.Country.CtyShort, dbo.Notification.NotRecID, dbo.Notification.NotName,
               dbo.TemporalSuspension.TCtsCode, dbo.TemporalSuspension.TCtsCodeRecID,
               dbo.TaxPhylum.PhyName AS Taxon, dbo.TemporalSuspension.TCtsNotes,
               dbo.TemporalSuspension.TCtsRecID, dbo.TemporalSuspension.TCtsKgmRecID,
               CASE dbo.TemporalSuspension.TCtsKgmRecID WHEN 1 THEN 'Animals' WHEN 2 THEN 'Plants' ELSE 'All' END AS Kingdom
        FROM dbo.TemporalSuspension
        INNER JOIN dbo.Notification ON dbo.TemporalSuspension.TCtsStartNotRecID = dbo.Notification.NotRecID
        INNER JOIN dbo.Country ON dbo.TemporalSuspension.TCtsCtyRecID = dbo.Country.CtyRecID
        INNER JOIN dbo.TaxPhylum ON dbo.TemporalSuspension.TCtsCodeRecID = dbo.TaxPhylum.PhyRecID
            AND dbo.TemporalSuspension.TCtsCode LIKE 'PHY'
        UNION ALL
        SELECT dbo.Country.CtyRecID, dbo.Country.CtyShort, dbo.Notification.NotRecID, dbo.Notification.NotName,
               dbo.TemporalSuspension.TCtsCode, dbo.TemporalSuspension.TCtsCodeRecID,
               dbo.TaxClass.ClaName AS Taxon, dbo.TemporalSuspension.TCtsNotes,
               dbo.TemporalSuspension.TCtsRecID, dbo.TemporalSuspension.TCtsKgmRecID,
               CASE dbo.TemporalSuspension.TCtsKgmRecID WHEN 1 THEN 'Animals' WHEN 2 THEN 'Plants' ELSE 'All' END AS Kingdom
        FROM dbo.TemporalSuspension
        INNER JOIN dbo.Notification ON dbo.TemporalSuspension.TCtsStartNotRecID = dbo.Notification.NotRecID
        INNER JOIN dbo.Country ON dbo.TemporalSuspension.TCtsCtyRecID = dbo.Country.CtyRecID
        INNER JOIN dbo.TaxClass ON dbo.TemporalSuspension.TCtsCodeRecID = dbo.TaxClass.ClaRecID
            AND dbo.TemporalSuspension.TCtsCode LIKE 'CLA'

    But I don't understand why it doesn't work; I keep getting this error: "Cannot resolve collation conflict for column 7 in SELECT statement." What's wrong? I've used this other times and I never got this problem. Thanks.
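
    Column 7 in both branches is Taxon (PhyName in one, ClaName in the other); if those two columns carry different collations, UNION ALL can't reconcile them. A hedged, stripped-down sketch of the usual fix, forcing both sides to the database default collation, which would then be applied to the Taxon columns in the full query:

        SELECT dbo.TaxPhylum.PhyName COLLATE DATABASE_DEFAULT AS Taxon
        FROM dbo.TaxPhylum
        UNION ALL
        SELECT dbo.TaxClass.ClaName COLLATE DATABASE_DEFAULT AS Taxon
        FROM dbo.TaxClass;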

    Read the article

  • Problem with SQL Server "EXECUTE AS"

    - by Vilx-
    I've got the following setup: there is a SQL Server DB with several tables that have triggers set on them (that collect history data). These triggers are CLR stored procedures with EXECUTE AS 'HistoryUser'. The HistoryUser user is a simple user in the database without a login. It has enough permissions to read from all tables and write to the history table. When I back up the DB and then restore it to another machine (a virtual machine in this case, but it does not matter), the triggers don't work anymore. In fact, no impersonation for the user works anymore. Even a simple statement such as this:

        EXEC ('SELECT 3') AS USER = 'HistoryUser'

    produces an error:

        Cannot execute as the database principal because the principal "HistoryUser" does not exist,
        this type of principal cannot be impersonated, or you do not have permission.

    I read in MSDN that this can occur if the DB owner is a domain user, but it isn't. And even if I change it to anything else (their recommended solution), this problem remains. If I create another user without a login, I can use it for impersonation just fine. That is, this works just fine:

        CREATE USER TestUser WITHOUT LOGIN
        GO
        EXEC ('SELECT 3') AS USER = 'TestUser'

    I do not want to recreate all those triggers, so is there any way I can make the existing HistoryUser work?

    Bump: Sorry, but this is kinda urgent...
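
    One thing that often bites after a restore, offered as a hedged guess rather than a confirmed diagnosis: the database owner recorded in the restored database is a SID from the old server, and an unresolvable owner can break impersonation inside the database. Re-pointing ownership at a login that exists on the new server is a cheap first check; the database name below is a placeholder.

        -- Re-assign the restored database to a known login on the new server
        ALTER AUTHORIZATION ON DATABASE::MyRestoredDb TO [sa];

        -- Then re-test the impersonation
        USE MyRestoredDb;
        EXEC ('SELECT 3') AS USER = 'HistoryUser';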

    Read the article

  • SQL Databases and table design/organization

    - by John McMullen
    (NOOB disclaimer) I'm working on a system (a type of map) that is accessed mostly via three fields: ID (auto-incremented), X coordinate, and Y coordinate. As it is right now, I have all the data on the map stored in one table. Whenever the map display is loaded, it simply queries the database for contents in X and Y, and the DB returns the data (the other fields in the same entry). If an item on the map is doing something, it has a flag saying it's doing something, and then an ID of the action in another table holding that type of 'actions'. Essentially, all map data is stored in one table, and all actions of a certain type are stored in their own table. I'm a noob, and I'm wondering what the most effective/efficient structure is for such a design (a map that has items, and each item has stats/actions). I'm using PHP at the moment, with standard SQL queries to get my data. Should I split up the tables so that there are only X number of entries in a table (coordinate range limits)? Should it just keep growing and growing? There are a lot of queries against the table... so I'm just trying to see what is best. :/
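
    Rather than splitting the table by coordinate range, a single table with a composite index on the coordinates usually keeps viewport lookups fast as it grows. A small, hedged sketch with invented table and column names, in portable SQL:

        -- Composite index so "everything in this viewport" becomes an index range seek
        CREATE INDEX ix_map_items_xy ON map_items (x, y);

        SELECT id, x, y, action_flag, action_id
        FROM map_items
        WHERE x BETWEEN 100 AND 160    -- viewport bounds
          AND y BETWEEN 240 AND 300;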

    Read the article

  • SQL connection to database repeating

    - by user175084
    OK, now I am using the SQL database to get the values from different tables. So I make the connection and get the values like this:

        DataTable dt = new DataTable();
        SqlConnection connection = new SqlConnection();
        connection.ConnectionString = ConfigurationManager.ConnectionStrings["XYZConnectionString"].ConnectionString;
        connection.Open();
        SqlCommand sqlCmd = new SqlCommand("SELECT * FROM Machines", connection);
        SqlDataAdapter sqlDa = new SqlDataAdapter(sqlCmd);
        sqlCmd.Parameters.AddWithValue("@node", node);
        sqlDa.Fill(dt);
        connection.Close();

    This is one query on the page, and I am calling many other queries on the page. So do I need to open and close the connection every time? Also, if not, this portion is common to all of them:

        DataTable dt = new DataTable();
        SqlConnection connection = new SqlConnection();
        connection.ConnectionString = ConfigurationManager.ConnectionStrings["XYZConnectionString"].ConnectionString;
        connection.Open();

    Can I put it in one function and call it instead? The code would look cleaner... I tried doing that, but I get errors like: "Connection does not exist in the current context." Any suggestions? Thanks.

    Read the article

  • Combining two-part SQL query into one query

    - by user332523
    Hello, I have a SQL query that I'm currently solving by doing two queries. I am wondering if there is a way to do it in a single query that makes it more efficient. Consider two tables, Transaction_Entries and Transactions, each one defined below:

        Transactions
          - id
          - reference_number (varchar)

        Transaction_Entries
          - id
          - account_id
          - transaction_id (references Transactions table)

    Notes: there are multiple transaction entries per transaction. Some transactions are related, and will have the same reference_number string. To get all transaction entries for account X, I would do:

        SELECT E.*, T.reference_number
        FROM Transaction_Entries E
        JOIN Transactions T ON (E.transaction_id = T.id)
        WHERE E.account_id = X

    The next part is the hard part. I want to find all related transactions, regardless of the account id. First I make a list of all the unique reference numbers I found in the previous result set. Then for each one, I can query all the transactions that have that reference number. Assume that I hold all the rows from the previous query in PreviousResultSet:

        UniqueReferenceNumbers = GetUniqueReferenceNumbers(PreviousResultSet)  // in Java
        foreach R in UniqueReferenceNumbers                                    // in Java
            SELECT * FROM Transaction_Entries
            WHERE transaction_id IN (SELECT id FROM Transactions WHERE reference_number = R)

    Any suggestions how I can put this into a single efficient query?
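
    One hedged way to fold the two steps into a single statement: join back through Transactions on reference_number, so the inner pass finds account X's reference numbers and the outer pass pulls every entry that shares them (DISTINCT guards against duplicates when an account has several entries under the same reference number).

        SELECT DISTINCT E2.*, T2.reference_number
        FROM Transaction_Entries E1
        JOIN Transactions T1 ON T1.id = E1.transaction_id
        JOIN Transactions T2 ON T2.reference_number = T1.reference_number
        JOIN Transaction_Entries E2 ON E2.transaction_id = T2.id
        WHERE E1.account_id = X;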

    Read the article

  • How to extract the Sql Command from a Complied Linq Query

    - by Harry
    In normal (not compiled) LINQ to SQL queries you can extract the SqlCommand from the IQueryable via the following code:

        SqlCommand cmd = (SqlCommand)table.Context.GetCommand(query);

    Is it possible to do the same for a compiled query? The following code provides me with a delegate to a compiled query:

        private static readonly Func<Data.DAL.Context, string, IQueryable<Word>> Query_Get =
            CompiledQuery.Compile<Data.DAL.Context, string, IQueryable<Word>>(
                (context, name) => from r in context.GetTable<Word>()
                                   where r.Name == name
                                   select r);

    When I use this to generate the IQueryable and attempt to extract the SqlCommand, it doesn't seem to work. When debugging the code I can see that the SqlCommand returned has the 'very' useful CommandText of 'SELECT NULL AS [EMPTY]'.

        using (var db = new Data.DAL.Context())
        {
            IQueryable<Word> query = Query_Get(db, "word");
            SqlCommand cmd = (SqlCommand)db.GetCommand(query);
            Console.WriteLine(cmd != null ? cmd.CommandText : "Command Not Found");
        }

    I can't find anything on Google about this particular scenario, as no doubt it is not a common thing to attempt... So... any thoughts?

    Read the article

  • LINQ to SQL Where Clause Optional Criteria

    - by RSolberg
    I am working with a LINQ to SQL query and have run into an issue where I have four optional fields to filter the data result on. By optional, I mean the user has the choice to enter a value or not. Specifically, a few text boxes that could have a value or an empty string, and a few drop-down lists that could have had a value selected or maybe not... For example:

        using (TagsModelDataContext db = new TagsModelDataContext())
        {
            var query = from tags in db.TagsHeaders
                        where tags.CST.Equals(this.SelectedCust.CustCode.ToUpper())
                           && Utility.GetDate(DateTime.Parse(this.txtOrderDateFrom.Text)) <= tags.ORDDTE
                           && Utility.GetDate(DateTime.Parse(this.txtOrderDateTo.Text)) >= tags.ORDDTE
                        select tags;
            this.Results = query.ToADOTable(rec => new object[] { query });
        }

    Now I need to add the following fields/filters, but only if they are supplied by the user:
    1. Product number - comes from another table that can be joined to TagsHeaders.
    2. PO number - a field within the TagsHeaders table.
    3. Order number - similar to PO #, just a different column.
    4. Product status - if the user selected this from a drop-down, apply the selected value here.
    The query I already have is working great, but to complete the function I need to be able to add these four other items in the where clause; I just don't know how!
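
    For reference, the optional-filter pattern the question is reaching for looks like this in plain T-SQL, where each "(@p IS NULL OR ...)" term drops out when the user leaves that field empty. The parameter and column names for PO number, order number, and product status are invented, since the question doesn't name the actual columns.

        SELECT th.*
        FROM TagsHeaders th
        WHERE th.CST = @CustCode
          AND th.ORDDTE BETWEEN @OrderDateFrom AND @OrderDateTo
          AND (@PONumber      IS NULL OR th.PONumber      = @PONumber)
          AND (@OrderNumber   IS NULL OR th.OrderNumber   = @OrderNumber)
          AND (@ProductStatus IS NULL OR th.ProductStatus = @ProductStatus);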

    Read the article
