Search Results

Search found 29121 results on 1165 pages for 'sql 2000'.


  • Selecting data in clustered index order without ORDER BY

    - by kcrumley
    I know there is no guarantee without an ORDER BY clause, but are there any techniques to tune SQL Server tables so they're more likely to return rows in clustered index order, without having to specify ORDER BY every single time I want to run a super-quick ad hoc query? For example, would rebuilding my clustered index or updating statistics help? I'm aware that I can't count on a query like: select * from AuditLog where UserId = 992 to return records in the order of the clustered index, so I would never build code into an application based on this assumption. But for simple ad hoc queries, on almost all of my tables, the data consistently comes out in clustered index order, and I've gotten used to being able to expect the most recent results to be at the bottom. Out of all the many tables we use, I've only noticed two ever giving me results in an unpredicted order. This is really just an annoyance, but it would be nice to be able to minimize it. In case this is relevant because of page boundary issues or something like that, I should mention that one of the tables that has inconsistent ordering, the AuditLog table, is the longest table we have that has a clustered index on an identity column. Also, this database has recently been moved from SQL 2005 to SQL 2008, and we've seen no noticeable change in this behavior.
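    A minimal sketch of the only reliable approach: put an ORDER BY on the clustered key, which is essentially free when the index is already being used. The column name AuditLogId is an assumption for the identity column the clustered index sits on; it is not given in the post.

        -- Sketch: ordering on the clustered key (assumed here to be an identity column named AuditLogId)
        SELECT *
        FROM dbo.AuditLog
        WHERE UserId = 992
        ORDER BY AuditLogId;

    Rebuilding the index or updating statistics does not change this: once the ORDER BY is omitted, features such as allocation-order scans and parallelism are free to return rows in any order.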

    Read the article

  • Performance impact when using XML columns in a table with MS SQL 2008

    - by Sam Dahan
    I am using a simple table with 6 columns, 3 of which are of XML type, not schema-constrained. When the table reaches a size around 120,000 or 150,000 rows, I see a dramatic performance cost in doing any query in the table. For comparison, I have another table, which grows in size at about the same rate, but only contains scalar types (int, datetime, a few float columns). That table performs perfectly fine even after 200,000 rows. And by the way, I am not using XQuery on the xml columns, I am only using regular SQL query statements. Some specifics: both tables contain a DateTime field called SampleTime. A statement like (it's in a stored procedure but I show you the actual statement) SELECT MAX(sampleTime) SampleTime FROM dbo.MyRecords WHERE PlacementID=@somenumber takes 0 seconds on the table without xml columns, and anything from 13 to 20 seconds on the table with XML columns. That depends on which drive I set my database on. At the moment it sits on a different spindle (not C:) and it takes 13 seconds. Has anyone seen this behavior before, or have any hint at what I am doing wrong? I tried this with SQL 2008 EXPRESS and the full-blown SQL Server 2008, and that made no difference. Oh, one last detail: I am doing this from a C# application, .NET 3.5, using SqlConnection, SqlReader, etc. I'd appreciate some insight into that, thanks! Sam
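    One thing worth checking before blaming the XML type itself: the MAX query has to scan the whole table if there is no index on (PlacementID, SampleTime), and with three XML columns every scanned row is wide. A covering index lets the aggregate be answered without touching the XML at all. This is only a sketch; the index name is made up, and @somenumber is declared here just so the snippet runs standalone.

        -- Sketch: covering index so MAX(SampleTime) per PlacementID never reads the XML columns
        CREATE NONCLUSTERED INDEX IX_MyRecords_PlacementID_SampleTime
            ON dbo.MyRecords (PlacementID, SampleTime);

        DECLARE @somenumber int = 42;   -- placeholder for the stored procedure parameter

        SELECT MAX(SampleTime) AS SampleTime
        FROM dbo.MyRecords
        WHERE PlacementID = @somenumber;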

    Read the article

  • Using Parameter Values In SQL Statement

    - by Dangerous
    I am trying to write a database script (SQL SERVER 2008) which will copy information from database tables on one server to corresponding tables in another database on a different server. I have read that the correct way to do this is to use an SQL statement in a format similar to the following: INSERT INTO <linked_server>.<database>.<owner>.<table_name> SELECT * FROM <linked_server>.<database>.<owner>.<table_name> As there will be several tables being copied, I would like to declare variables at the top of the script to allow the user to specify the names of each server and database that are to be used. These could then be used throughout the script. However, I am not sure how to use the variable values in the actual SQL statements. What I want to achieve is something like the following: DECLARE @SERVER_FROM AS NVARCHAR(50) = 'ServerFrom' DECLARE @DATABASE_FROM AS NVARCHAR(50) = 'DatabaseFrom' DECLARE @SERVER_TO AS NVARCHAR(50) = 'ServerTo' DECLARE @DATABASE_TO AS NVARCHAR(50) = 'DatabaseTo' INSERT INTO @SERVER_TO.@DATABASE_TO.dbo.TableName SELECT * FROM @SERVER_FROM.@DATABASE_FROM.dbo.TableName ... How should I use the @ variables in this code in order for it to work correctly? Additionally, do you think my method above is correct for what I am trying to achieve and should I be using NVARCHAR(50) as my variable type or something else? Thanks
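    Four-part names cannot be built from variables directly, so the usual workaround is dynamic SQL: assemble the statement as a string and run it with sp_executesql. A hedged sketch along those lines, keeping the variable names from the question (TableName stands in for each table being copied):

        DECLARE @SERVER_FROM   sysname = N'ServerFrom';
        DECLARE @DATABASE_FROM sysname = N'DatabaseFrom';
        DECLARE @SERVER_TO     sysname = N'ServerTo';
        DECLARE @DATABASE_TO   sysname = N'DatabaseTo';
        DECLARE @sql nvarchar(max);

        -- Object names cannot be parameterized, so build the statement text and execute it.
        SET @sql = N'INSERT INTO ' + QUOTENAME(@SERVER_TO) + N'.' + QUOTENAME(@DATABASE_TO) + N'.dbo.TableName '
                 + N'SELECT * FROM ' + QUOTENAME(@SERVER_FROM) + N'.' + QUOTENAME(@DATABASE_FROM) + N'.dbo.TableName;';

        EXEC sys.sp_executesql @sql;

    sysname (rather than NVARCHAR(50)) matches what SQL Server itself uses for object names, and QUOTENAME guards against odd characters in the names.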

    Read the article

  • Choose a XML node in SQL Server based on max value of a child element

    - by Jay
    I am trying to select from SQL Server 2005 XML datatype some values based on the max data that is located in a child node. I have multiple rows with XML similar to the following stored in a field in SQL Server: <user> <name>Joe</name> <token> <id>ABC123</id> <endDate>2013-06-16 18:48:50.111</endDate> </token> <token> <id>XYZ456</id> <endDate>2014-01-01 18:48:50.111</endDate> </token> </user> I want to perform a select from this XML column where it determines the max date within the token element and would return the datarows similar to the result below for each record: Joe XYZ456 2014-01-01 18:48:50.111 I have tried to find a max function for xpath that would allow me to select the correct token element but I couldn't find one that would work. I also tried to use the SQL MAX function but I wasn't able to get it working with that method either. If I only have a single token it of course works fine but when I have more than one I get a NULL, most likely because the query doesn't know which date to pull. I was hoping there would be a way to specify a where clause [max(endDate)] on the token element but haven't found a way to do that. Here is an example of the one that works when I only have a single token: SELECT XMLCOL.query('user/name').value('.','NVARCHAR(20)') as name, XMLCOL.query('user/token/id').value('.','NVARCHAR(20)') as id, XMLCOL.query('user/token/endDate').value('.','DATETIME') as endDate FROM MYTABLE
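    A sketch of one way to do it on SQL Server 2005: shred the <token> elements with nodes(), convert endDate on the SQL side (the sample dates use a space instead of T, so a plain string-to-datetime conversion with style 121 is safer than xs:dateTime), and keep the newest token per row with ROW_NUMBER. The key column Id used in PARTITION BY is an assumption; substitute the table's real primary key.

        SELECT name, id, endDate
        FROM (
            SELECT
                M.XMLCOL.value('(/user/name)[1]', 'NVARCHAR(20)')                   AS name,
                tok.t.value('(id)[1]', 'NVARCHAR(20)')                              AS id,
                CONVERT(DATETIME, tok.t.value('(endDate)[1]', 'VARCHAR(30)'), 121)  AS endDate,
                ROW_NUMBER() OVER (PARTITION BY M.Id
                                   ORDER BY CONVERT(DATETIME, tok.t.value('(endDate)[1]', 'VARCHAR(30)'), 121) DESC) AS rn
            FROM MYTABLE AS M
            CROSS APPLY M.XMLCOL.nodes('/user/token') AS tok(t)
        ) AS x
        WHERE x.rn = 1;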

    Read the article

  • Linq to SQL code generator features

    - by Anders Abel
    I'm very fond of Linq to SQL and the programming model it encourages. I think that in many cases when you are in control of both the database schema and the code it is not worth the effort to have different relational and object models for the data. Working with Linq to SQL makes it simple to have type safe data access from .NET, using the partial extension methods to implement business rules. Unfortunately I do not like the dbml designer due to the lack of a schema refresh function. So far I have used SqlMetal, but that lacks the customization options of the dbml designer. Because of that I've started working on a tool which regenerates the whole code file like SqlMetal, but has the ability to do the customizations that are available in the dbml designer (and maybe more in the future). The customizations will be described in an xml file which only contains those parts that shouldn't have default values. This should keep the xml file size down as well as the maintenance burden of it. To help me focus on the right features, I would like to know: What would be your favourite feature in a linq to sql code generator?

    Read the article

  • SQL string manipulation to return multiple rows

    - by Andy Jacobs
    I'm an experienced programmer, but relatively new to SQL. We're using Oracle 10 and 11. I have a system in place using SQL that combines actual rows with virtual rows (e.g. "SELECT 1 from DUAL") doing unions and intersects as needed, which all seems to work. My problem is that I need to combine this system which is expecting rows of data, with new data that will have the data in (let's say for simplification) comma delimited strings. So I think what I need is a way to convert a string like: "5,6,7,8" into 4 rows with one column each, with "5" in the first row, "6" in the second, etc. In other languages, I'd do a "Split" with comma as the delimiter. Of course, the data won't always have 4 entries. There's a second question, but I'll ask it separately. But I suspect it will simplify things, if possible, if the solution to the above could be used as a table in another SQL statement (i.e., to work with my existing system). Thanks for any help.
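    A common Oracle 10g/11g idiom for this, sketched with the literal '5,6,7,8' standing in for the real string: REGEXP_SUBSTR pulls out each delimited piece and CONNECT BY LEVEL generates one row per piece. The whole thing can sit inside an inline view or a WITH clause so it behaves like a table in a larger statement, which addresses the follow-up about plugging it into the existing system.

        SELECT REGEXP_SUBSTR('5,6,7,8', '[^,]+', 1, LEVEL) AS val
        FROM dual
        CONNECT BY REGEXP_SUBSTR('5,6,7,8', '[^,]+', 1, LEVEL) IS NOT NULL;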

    Read the article

  • Performance of VIEW vs. SQL statement

    - by Matt W.
    I have a query that goes something like the following: select <field list> from <table list> where <join conditions> and <condition list> and PrimaryKey in (select PrimaryKey from <table list> where <join list> and <condition list>) and PrimaryKey not in (select PrimaryKey from <table list> where <join list> and <condition list>) The sub-select queries both have multiple sub-select queries of their own that I'm not showing so as not to clutter the statement. One of the developers on my team thinks a view would be better. I disagree in that the SQL statement uses variables passed in by the program (based on the user's login Id). Are there any hard and fast rules on when a view should be used vs. using a SQL statement? What kind of performance gain issues are there in running SQL statements on their own against regular tables vs. against views. (Note that all the joins / where conditions are against indexed columns, so that shouldn't be an issue.) EDIT for clarification... Here's the query I'm working with: select obj_id from object where obj_id in( (select distinct(sec_id) from security where sec_type_id = 494 and ( (sec_usergroup_id = 3278 and sec_usergroup_type_id = 230) or (sec_usergroup_id in (select ug_gi_id from user_group where ug_ui_id = 3278) and sec_usergroup_type_id = 231) ) and sec_obj_id in ( select obj_id from object where obj_ot_id in (select of_ot_id from obj_form left outer join obj_type on ot_id = of_ot_id where ot_app_id = 87 and of_id in (select sec_obj_id from security where sec_type_id = 493 and ( (sec_usergroup_id = 3278 and sec_usergroup_type_id = 230) or (sec_usergroup_id in (select ug_gi_id from user_group where ug_ui_id = 3278) and sec_usergroup_type_id = 231) ) ) and of_usage_type_id = 131 ) ) ) ) or (obj_ot_id in (select of_ot_id from obj_form left outer join obj_type on ot_id = of_ot_id where ot_app_id = 87 and of_id in (select sec_obj_id from security where sec_type_id = 493 and ( (sec_usergroup_id = 3278 and sec_usergroup_type_id = 230) or (sec_usergroup_id in (select ug_gi_id from user_group where ug_ui_id = 3278) and sec_usergroup_type_id = 231) ) ) and of_usage_type_id = 131 ) and obj_id not in (select sec_obj_id from security where sec_type_id = 494) )

    Read the article

  • Which SQL statements to execute with intersection / junction tables

    - by user1455103
    Here is a simplified database layout One condo can hold multiple properties (flats, garage boxes, etc) - 1->n relationship One owner can have multiple properties in the same condo and properties can have more than one owner (m->n changed to 1->n with the junction table) One condo can have multiple owners - 1->n Some additional clarification: An owner is a member of a condo. A condo is made of properties belonging to owners BUT an owner is not linked to a property directly (there can be no relation between a property and an owner for a certain time BUT there will ALWAYS be a relation between an owner and a condo). Reason for this: the agent managing the condo will first create a list of owners and a list of properties. It is only later that he will "link" each property to one or multiple owners (or inversely) I'm quite new to SQL. What SQL statements should I execute to: SELECT, for a specific condo (WHERE condition), the properties and their respective owners (all properties should be listed even if owners are null) SELECT, for a specific condo (WHERE condition), the owners along with their properties (all owners should be listed even if properties are null) UPDATE / DELETE existing owners (I'm uncertain about how to handle the operation for the junction tables. Should I first check if there is an entry in the junction table or not ?) UPDATE / DELETE existing properties (same concern) INSERT new owners (should I use two different SQL statements depending on whether the owner should be linked to a property or NOT - IF condition ?) INSERT new properties (same question as above) Could you be as clear and generic as possible so that it can be reused ? :-)
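    A hedged sketch of the two SELECTs and a delete, since the post does not give the actual schema: the table and column names below (property, owner, property_owner, condo_id, and so on) are assumptions, but the shape is the usual one for a junction table, with LEFT JOINs so rows without a counterpart still appear.

        -- 1) Properties of one condo with their owners (owner columns are NULL when unlinked)
        SELECT p.property_id, p.description, o.owner_id, o.name
        FROM property p
        LEFT JOIN property_owner po ON po.property_id = p.property_id
        LEFT JOIN owner o           ON o.owner_id     = po.owner_id
        WHERE p.condo_id = 42;

        -- 2) Owners of one condo with their properties (property columns are NULL when unlinked)
        SELECT o.owner_id, o.name, p.property_id, p.description
        FROM owner o
        LEFT JOIN property_owner po ON po.owner_id   = o.owner_id
        LEFT JOIN property p        ON p.property_id = po.property_id
        WHERE o.condo_id = 42;

        -- 3) Deleting an owner: clear the junction rows first, then the owner itself.
        --    No existence check is needed; deleting zero junction rows is harmless.
        DELETE FROM property_owner WHERE owner_id = 7;
        DELETE FROM owner          WHERE owner_id = 7;

    Inserting a new owner works the same way in reverse: insert the owner row first, then add a property_owner row only when there is actually a property to link.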

    Read the article

  • Is it possible to definitively identify whether a DML command was issued from a stored procedure?

    - by Ed Harper
    I have inherited a SQL Server 2008 database to which calling applications have access through stored procedures. Each table in the database has a shadow audit table into which Insert/Update/Delete operations are logged. Performance testing on populating the audit tables showed that inserting the audit records using OUTPUT clauses was 20% or so faster than using triggers, so this has been implemented in the stored procedures. However, because this design cannot track changes made directly to the tables through DML statements issued directly against the tables, triggers have also been implemented which use the value of @@NESTLEVEL to determine whether or not to run the trigger (the assumption being that all DML run through stored procedures will have @@NESTLEVEL > 1 inside the trigger). i.e. the body of the trigger code looks something like: IF @@NESTLEVEL = 1 -- implies call is direct sql so generate history from here BEGIN ... insert into audit table This design is flawed because it won't track updates where DML statements are executed in dynamic SQL, or any other context where @@NESTLEVEL is raised above 1. Can anyone suggest a completely reliable method we can use in the triggers to execute them only if not triggered by a stored procedure? Or is this (as I suspect) not possible?
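    One approach that does not depend on @@NESTLEVEL at all: have every stored procedure set a session marker before its DML and clear it afterwards, and have the trigger audit only when the marker is absent. On SQL Server 2008 the marker can be CONTEXT_INFO. A sketch, with the audit table and column names as placeholders:

        -- In each stored procedure, around the DML:
        SET CONTEXT_INFO 0x50524F43;    -- the bytes of the ASCII string 'PROC'
        -- ... DML with the existing OUTPUT-clause auditing ...
        SET CONTEXT_INFO 0x;            -- clear the marker again

        -- In the trigger, generate history only when the marker is not set:
        IF ISNULL(SUBSTRING(CONTEXT_INFO(), 1, 4), 0x) <> 0x50524F43
        BEGIN
            -- direct DML (ad hoc, dynamic SQL, etc.): audit from here
            INSERT INTO dbo.SomeTable_Audit (ChangedAt)   -- placeholder audit insert
            SELECT GETDATE() FROM inserted;
        END;

    The caveat is procedural discipline rather than nesting depth: any code path that runs DML without setting the marker is treated as direct access, which is usually the behaviour wanted here.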

    Read the article

  • SQL Server log backups "stalling"

    - by MattK
    I have inherited a box running SQL Server 2008 and Windows 2003, and have had a few events where largish (35GB) log backups "stall", both before and after the installation of SQL 2008 SP1. The server log ships to a standby, so regular log backups are taken at 15 minute intervals. However, after an index reorg causes the log to grow to about 35GB (on a DB with about 17GB of data), the next log backup runs to ~95% completion, then seems to stop. The process shows as suspended, with a wait state of BACKUPIO. CPU, read, and write activity on the SPID also does not change, and the process stays in this state for hours, when normally a backup of this size should complete in about 20 minutes. This server has a single RAID-1 volume, thus the source database files and destination backup files are on the same volume. However, I cannot determine if another process is blocking the backup. The backup SPID cannot be killed, and the only way to terminate the log backup and clear the lock on the backup file is to cycle the SQL Server service. There was one event where the backup terminated completely, with an error that another process had locked the backup file, but no details about what that process was. Can anyone suggest a cause or a diagnostic process for this situation?
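    While the backup sits in that state, a couple of DMV queries can show whether it is still moving at all and whether anything is blocking it; a sketch of the diagnostics worth capturing before cycling the service:

        -- Is the backup request still progressing, and what is it waiting on?
        SELECT r.session_id, r.command, r.status, r.wait_type, r.wait_time,
               r.percent_complete, r.estimated_completion_time
        FROM sys.dm_exec_requests AS r
        WHERE r.command LIKE 'BACKUP%';

        -- Is any other session blocking it?
        SELECT session_id, blocking_session_id, wait_type, wait_duration_ms, resource_description
        FROM sys.dm_os_waiting_tasks
        WHERE blocking_session_id IS NOT NULL;

    If percent_complete stops moving while the wait stays on BACKUPIO, suspicion falls on the I/O path to the destination file (the shared RAID-1 volume, a filter driver, or third-party backup/antivirus software touching the file) rather than on SQL Server's backup logic itself.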

    Read the article

  • psqlODBC won't load after installing MS SQL ODBC driver on RHEL 6

    - by Kapil Vyas
    I had the PostgreSQL drivers working on my RHEL 6. But after I installed Microsoft® SQL Server® ODBC Driver 1.0 for Linux I can no longer connect to PostgreSQL data sources. I can connect to SQL Server data sources fine. When I had this same issue a week ago I uninstalled the MS SQL Server ODBC driver from Linux and it fixed the issue. I had to copy the psqlodbcw.so files from another machine to replenish the files. I don't want to do the same this time. I want both drivers to work on Linux. This time around the setup files got deleted: /usr/lib64/libodbcpsqlS.so. Replenishing it did not fix the issue. I kept getting the following error in spite of the file being present with rwx permissions: [root@localhost lib64]# isql -v STUDENT dsname pwd12345 [01000][unixODBC][Driver Manager]Can't open lib '/usr/lib64/psqlodbc.so' : file not found [ISQL]ERROR: Could not SQLConnect [root@localhost lib64]# Here is a printout of the file permissions: [root@localhost lib64]# ls -al p*.so lrwxrwxrwx. 1 root root 12 Dec 7 09:15 psqlodbc.so -> psqlodbcw.so -rwxr-xr-x. 1 root root 519496 Dec 7 09:35 psqlodbcw.so and my odbcinst.ini file looks as follows: [PostgreSQL] Description=ODBC for PostgreSQL Driver=/usr/lib/psqlodbc.so Driver64=/usr/lib64/psqlodbc.so Setup=/usr/lib/libodbcpsqlS.so Setup64=/usr/lib64/libodbcpsqlS.so FileUsage=1 UsageCount=4 I also referred to this link: http://mailman.unixodbc.org/pipermail/unixodbc-support/2010-September.txt

    Read the article

  • figuring out which field to look for a value in with SQL and perl

    - by Micah
    I'm not too good with SQL and I know there's probably a much more efficient way to accomplish what I'm doing here, so any help would be much appreciated. Thanks in advance for your input! I'm writing a short program for the local high school. At this school, juniors and seniors who have driver's licenses and cars can opt to drive to school rather than ride the bus. Each driver is assigned exactly one space, and their DLN is used as the primary key of the drivers table. Makes, models, and colors of cars are stored in a separate cars table, related to the drivers table by the license plate number field. My idea is to have a single search box on the main GUI of the program where the school secretary can type in who/what she's looking for and pull up a list of results. Thing is, she could be typing a license plate number, a car color, make, or model, some driver's name, some student driver's DLN, or a space number. As the programmer, I don't know what exactly she's looking for, so a couple of options come to mind for me to build to be certain I check everywhere for a match: 1) perform a couple of SELECT * FROM [tablename] SQL statements, one per table, and cram the results into arrays in my program, then search across the arrays one element at a time with regex, looking for a matched pattern similar to the search term, and if I find one, add the entire record that had a match in it to a results array to display on screen at the end of the search. 2) take whatever she's looking for into the program as a scalar and prepare multiple select statements around it, such as SELECT * FROM DRIVERS WHERE DLN = $Search_Variable SELECT * FROM DRIVERS WHERE First_Name = $Search_Variable SELECT * FROM CARS WHERE LICENSE = $Search_Variable and so on for each attribute of each table, sticking the results into a results array to show on screen when the search is done. Is there a cleaner way to go about this lookup without having to make her specify exactly what she's looking for? Possibly some kind of SQL statement I've never seen before?
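    A third option that avoids pulling whole tables into Perl: one UNION query that probes both tables and labels where each hit came from, with the same search term bound to every placeholder from DBI (wrapping it in % wildcards for the LIKE ones). The column names below are assumptions, since the post only describes the tables loosely.

        SELECT 'driver' AS source, d.dln AS key_value, d.first_name AS col1, d.last_name AS col2
        FROM drivers AS d
        WHERE d.dln = ? OR d.first_name LIKE ? OR d.last_name LIKE ? OR d.space_no = ?

        UNION ALL

        SELECT 'car' AS source, c.license, c.make, c.model
        FROM cars AS c
        WHERE c.license = ? OR c.make LIKE ? OR c.model LIKE ? OR c.color LIKE ?

    The secretary still types one thing into one box; the database does the "which field is this?" work, and Perl just displays whatever rows come back, using the source column to decide how to format each hit.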

    Read the article

  • Getting a Specified Cast is not valid while importing data from Excel using Linq to SQL

    - by niceoneishere
    This is my second post. After learning from my first post how fantastic it is to use Linq to SQL, I wanted to try to import data from an Excel sheet into my SQL database. First My Excel Sheet: it contains 4 columns, namely ItemNo ItemSize ItemPrice UnitsSold I have created a database table with the following fields: table name ProductsSold Id int not null identity --with auto increment set to true ItemNo VarChar(10) not null ItemSize VarChar(4) not null ItemPrice Decimal(18,2) not null UnitsSold int not null Now I created a dal.dbml file based on my database and I am trying to import the data from the excel sheet to the db table using the code below. Everything is happening on click of a button. private const string forecast_query = "SELECT ItemNo, ItemSize, ItemPrice, UnitsSold FROM [Sheet1$]"; protected void btnUpload_Click(object sender, EventArgs e) { var importer = new LinqSqlModelImporter(); if (fileUpload.HasFile) { var uploadFile = new UploadFile(fileUpload.FileName); try { fileUpload.SaveAs(uploadFile.SavePath); if(File.Exists(uploadFile.SavePath)) { importer.SourceConnectionString = uploadFile.GetOleDbConnectionString(); importer.Import(forecast_query); gvDisplay.DataBind(); pnDisplay.Visible = true; } } catch (Exception ex) { Response.Write(ex.Source.ToString()); lblInfo.Text = ex.Message; } finally { uploadFile.DeleteFileNoException(); } } } // Now here is the code for LinqSqlModelImporter public class LinqSqlModelImporter : SqlImporter { public override void Import(string query) { // importing data using oledb command and inserting into db using LINQ to SQL using (var context = new WSDALDataContext()) { using (var myConnection = new OleDbConnection(base.SourceConnectionString)) using (var myCommand = new OleDbCommand(query, myConnection)) { myConnection.Open(); var myReader = myCommand.ExecuteReader(); while (myReader.Read()) { context.ProductsSolds.InsertOnSubmit(new ProductsSold() { ItemNo = myReader.GetString(0), ItemSize = myReader.GetString(1), ItemPrice = myReader.GetDecimal(2), UnitsSold = myReader.GetInt32(3) }); } } context.SubmitChanges(); } } } Can someone please tell me where I am making the error or if I am missing something, because this is driving me nuts. When I debugged I got this error: "when casting from a number, the value must be a number less than infinity". I really appreciate it.

    Read the article

  • LINQ to SQL - Insert yielding strange behavior.

    - by Isaac
    Hi, I'm trying to insert several newly created items into the database. I have a LINQ2SQL generated class called "Order". Inside order, there's a property called "OrderItems" which is also generated by LINQ2SQL and represents the Items of that Order. So far so good. The problem I'm having right now is when I try to add more than one newly created OrderItem inside Order. I.E: Order o = orderWorker.GetById( 10 ); for( int i=0; i < 5; ++i ) { OrderItem oi =new OrderItem { Order = order, Price = 100, ShippingPrice = 100, ShippingMethod = ... }; o.OrderItems.Add( oi ); } context.SubmitChanges(); Unfortunately, only a single entity is being added. Yes, I checked the generated SQL by adding Context.Log = Console.Out, and yes, only one statement was created. Any clues? By the way, I know I'm not using InsertOnSubmit, but the documentation says: You can explicitly request Inserts by using InsertOnSubmit. Alternatively, LINQ to SQL can infer Inserts by finding objects connected to one of the known objects that must be updated. For example, if you add an Untracked object to an EntitySet(TEntity) or set an EntityRef(TEntity) to an Untracked object, you make the Untracked object reachable by way of tracked objects in the graph. While processing SubmitChanges, LINQ to SQL traverses the tracked objects and discovers any reachable persistent objects that are not tracked. Such objects are candidates for insertion into the database. Thank you very much for your time.

    Read the article

  • How to make a Stored Procedure that takes in XML and uses that xml as an Update + call this stored p

    - by chobo2
    Hi, I am using MS SQL Server 2005 and I want to do a mass update. I am thinking that I might be able to do it by sending an xml document to a stored procedure. I have seen many examples on how to do it for an insert: CREATE PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST](@UpdatedProdData XML) AS INSERT INTO dbo.UserTable(CreateDate) SELECT @UpdatedProdData.value('(/ArrayOfUserTable/UserTable/CreateDate)[1]', 'DATETIME') But I am not sure what it would look like for an update. I am also unsure how I pass in the xml through ado.net. Do I pass it as a string through a parameter or what? I know SqlDataAdapter has a batch update method but I am using linq to sql, so I would rather keep using it. So if this works I would be able to grab all records with linq to sql and have them as objects, then manipulate the objects and use xml serialization. Finally I could just use ado.net simply to send the xml to the server. This might be slower than the SqlDataAdapter but I am willing to take that hit if I can keep using objects.
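    For the update half of the question, the same nodes() shredding used in the insert example can drive an UPDATE by joining the XML rows to the table on the key. A sketch that assumes the serialized XML carries an <Id> element alongside <CreateDate> (the element names follow the insert example; adjust them to whatever the real serialization produces). On the ADO.NET side the document is passed as a single parameter of type SqlDbType.Xml (a plain string parameter also converts).

        CREATE PROCEDURE dbo.spTEST_UpdateXML (@UpdatedProdData XML)
        AS
        BEGIN
            UPDATE u
            SET    u.CreateDate = x.r.value('(CreateDate)[1]', 'DATETIME')
            FROM   dbo.UserTable AS u
            JOIN   @UpdatedProdData.nodes('/ArrayOfUserTable/UserTable') AS x(r)
                   ON u.Id = x.r.value('(Id)[1]', 'INT')
        END;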

    Read the article

  • SQL Server race condition issue with range lock

    - by Freek
    I'm implementing a queue in SQL Server (please no discussions about this) and am running into a race condition issue. The T-SQL of interest is the following: set transaction isolation level serializable begin tran declare @RecordId int declare @CurrentTS datetime2 set @CurrentTS=CURRENT_TIMESTAMP select top 1 @RecordId=Id from QueuedImportJobs with (updlock) where Status=@Status and (LeaseTimeout is null or @CurrentTS>LeaseTimeout) order by Id asc if @@ROWCOUNT> 0 begin update QueuedImportJobs set LeaseTimeout = DATEADD(mi,5,@CurrentTS), LeaseTicket=newid() where Id=@RecordId select * from QueuedImportJobs where Id = @RecordId end commit tran RecordId is the PK and there is also an index on Status,LeaseTimeout. What I'm basically doing is selecting a record whose lease happens to be expired, while simultaneously extending the lease time by 5 minutes and setting a new lease ticket. So the problem is that I'm getting deadlocks when I run this code in parallel using a couple of threads. I've debugged it up to the point where I found out that the update statement sometimes gets executed twice for the same record. Now, I was under the impression that the with (updlock) should prevent this (it also happens with xlock btw, not with tablockx). So it actually looks like there is a RangeS-U and a RangeX-X lock on the same range of records, which ought to be impossible. So what am I missing? I'm thinking it might have something to do with the top 1 clause or that SQL Server does not know that where Id=@RecordId is actually in the locked range? Deadlock graph: Table schema (simplified):
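    A pattern that sidesteps the SELECT-then-UPDATE window entirely is to claim the row in a single statement: UPDATE TOP (1) with READPAST (so competing workers skip rows that are already locked) and an OUTPUT clause to return what was claimed. A sketch against the table from the question; @Status is declared here only so the snippet runs on its own, and note that UPDATE TOP does not honour ORDER BY, so strict FIFO ordering needs a slightly different shape (for example a CTE over an ordered SELECT).

        DECLARE @Status int = 0;   -- placeholder for the procedure parameter

        DECLARE @claimed TABLE (Id int, LeaseTicket uniqueidentifier, LeaseTimeout datetime2);

        UPDATE TOP (1) q
        SET    LeaseTimeout = DATEADD(mi, 5, SYSDATETIME()),
               LeaseTicket  = NEWID()
        OUTPUT inserted.Id, inserted.LeaseTicket, inserted.LeaseTimeout INTO @claimed
        FROM   QueuedImportJobs AS q WITH (ROWLOCK, READPAST)
        WHERE  q.Status = @Status
          AND (q.LeaseTimeout IS NULL OR SYSDATETIME() > q.LeaseTimeout);

        SELECT * FROM @claimed;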

    Read the article

  • MySQL to SQL Server ODBC Connector?

    - by Scott C.
    My boss wants to have data in MySQL DBs used for our website to be "linked and synced" with a Financial Server that has its DB in SQL Server. Sooooo...even though I have no idea how to accomplish this, this just sounds like an absolute nightmare, especially since the MySQL DB is most likely going to be hosted in the cloud and not on a machine next to the Financial Server. Any ideas how to accomplish this? (within reason?) Also, his big thing is he wants to basically pull up the data from any record a user enters and, using data pulled from that, do all sorts of calculations using ANOTHER program that stores its data (apparently) in SQL Server. Thinking of all the data I might have to convert makes me very uneasy. Please tell me ODBC eliminates complicated junk like this. :/ I'm trying to talk him into just having MySQL do a nightly dump into a CSV file or something and using that (rather than a connector) to update the SQL Server DBs. I guess I'm just not that comfortable with a server and/or programming I have no say over being connected DIRECTLY to my MySQL DB for the website. If there's no good answer for this, can anyone offer a suggestion as to what I can say to talk him out of this? (I'm a low-level IT guy w/ a decent grasp on programming...but I'm no expert - should I try to push this off to a seasoned IT pro?) Thanks in advance.
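    If the direct link does end up being required, the usual route is to register MySQL as a linked server in SQL Server through the MySQL ODBC connector and pull rows with OPENQUERY. The names below (the DSN, linked server name, remote login, and the sample query) are all placeholders; this is a sketch of the shape, not a recommendation over the nightly-dump approach.

        -- Assumes a system DSN named 'MySQL_DSN' created with the MySQL ODBC driver on the SQL Server box.
        EXEC master.dbo.sp_addlinkedserver
             @server     = N'MYSQL_WEB',
             @srvproduct = N'MySQL',
             @provider   = N'MSDASQL',
             @datasrc    = N'MySQL_DSN';

        EXEC master.dbo.sp_addlinkedsrvlogin
             @rmtsrvname  = N'MYSQL_WEB',
             @useself     = 'FALSE',
             @rmtuser     = N'web_user',
             @rmtpassword = N'********';

        -- Pull rows from the MySQL side; the query inside OPENQUERY runs on MySQL.
        SELECT * FROM OPENQUERY(MYSQL_WEB, 'SELECT id, name FROM customers');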

    Read the article

  • IIS to SQL Server kerberos auth issues

    - by crosan
    We have a 3rd party product that allows some of our users to manipulate data in a database (on what we'll call SvrSQL) via a website on a separate server (SvrWeb). On SvrWeb, we have a specific, non-default website setup for this application so instead of going to http://SvrWeb.company.com to get to the website we use http://application.company.com which resolves to SvrWeb and the host headers resolve to the correct website. There is also a specific application pool set up for this site which uses an Active Directory account identity we'll call "company\SrvWeb_iis". We're setup to allow delegation on this account and to allow it to impersonate another login which we want it to do. (we want this account to pass along the AD credentials of the person signed into the website to SQL Server instead of a service account. We also set up the SPNs for the SrvWeb_iis account via the following command: setspn -A HTTP/SrvWeb.company.com SrvWeb_iis The website pulls up, but the section of the website that makes the call to the database returns the message: Cannot execute database query. Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. I thought we had the SPN information set up correctly, but when I check the security event log on SrvWeb I see entries of my logging in, but it seems to be using NTLM and not kerberos: Logon Type: 3 Logon Process: NtLmSsp Authentication Package: NTLM Any ideas or articles that cover this setup in detail would be extremely appreciated! If it helps, we are using SQL Server 2005, and both the web and SQL servers are Windows 2003.
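    Before changing anything else it is worth confirming, from the SQL Server side, what the connections coming from the web server actually negotiated; if auth_scheme says NTLM, the SPN/delegation chain is not being used even when the SPNs look right. A small diagnostic sketch:

        SELECT c.session_id, s.login_name, s.host_name, c.auth_scheme
        FROM sys.dm_exec_connections AS c
        JOIN sys.dm_exec_sessions    AS s ON s.session_id = c.session_id
        WHERE s.host_name = 'SvrWeb';   -- the web server from the question

    One detail in the post that deserves a second look: the server is introduced as SvrWeb but the SPN was registered for SrvWeb.company.com; an SPN registered against the wrong or mistyped host name is a classic reason Kerberos silently falls back to NTLM.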

    Read the article

  • Best Practice: Protecting Personally Identifiable Data in a ASP.NET / SQL Server 2008 Environment

    - by William
    Thanks to a SQL injection vulnerability found last week, some of my recommendations are being investigated at work. We recently re-did an application which stores personally identifiable information whose disclosure could lead to identity theft. While we read some of the data on a regular basis, the restricted data we only need a couple of times a year, and then only two employees need it. I've read up on SQL Server 2008's encryption functions, but I'm not convinced that's the route I want to go. My problem ultimately boils down to the fact that we're either using symmetric keys or asymmetric keys encrypted by a symmetric key. Thus it seems like a SQL injection attack could lead to a data leak. I realize permissions should prevent that, just as permissions should also prevent the leaking in the first place. It seems to me the better method would be to asymmetrically encrypt the data in the web application, then store the private key offline and have a fat client that they can run the few times a year they need to access the restricted data, so the data could be decrypted on the client. This way, if the server gets compromised, we don't leak old data, although depending on what the attackers do we may leak future data. I think the big disadvantage is this would require re-writing the web application and creating a new fat application (to pull the restricted data). Due to the recent problem, I can probably get the time allocated, so now would be the proper time to make the recommendation. Do you have a better suggestion? Which method would you recommend? More importantly, why?

    Read the article

  • Slow T-SQL query, convert to LINQ to Object

    - by yimbot
    I have a T-SQL query which populates a DataSet from an MSSQL database. string qry = "SELECT * FROM EvnLog AS E WHERE TimeDate = (SELECT MAX(TimeDate) From EvnLog WHERE Code = E.Code) AND (Event = 8) AND (TimeDate BETWEEN '" + Start + "' AND '" + Finish + "')" The database is quite large and, being the type of nested query it is, the Data Adapter can take a number of minutes to fill the DataSet. I have extended the DataAdapter's timeout value to 480 seconds to combat it, but the client still complains about slow performance and occasional timeouts. To combat this, I was considering executing a simpler query (i.e. just taking the date range) and then populating a generic List which I could then execute a Linq query against. The simple query executes very quickly, which is great. However, I cannot seem to build a Linq query which generates the same results as the T-SQL query above. Is this the best solution to this problem? Can anyone provide tips on rewriting the above T-SQL into Linq? I have also considered using a DataView, but cannot seem to get the results from that either.
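    Before moving the filtering into memory, it may be worth reshaping the T-SQL itself: the correlated MAX subquery runs per row, and the date values are concatenated into the string (which is both plan-unfriendly and injectable). A sketch that keeps the original meaning (the latest TimeDate per Code over the whole table, then filtered to Event 8 inside the range) but computes the maxima once and takes the dates as parameters; an index on (Code, TimeDate) would back it up. The DECLAREs stand in for SqlParameters supplied from the application.

        DECLARE @Start datetime, @Finish datetime;
        SET @Start  = '2010-01-01';   -- placeholders for the parameter values
        SET @Finish = '2010-12-31';

        SELECT e.*
        FROM EvnLog AS e
        JOIN (SELECT Code, MAX(TimeDate) AS MaxTime
              FROM EvnLog
              GROUP BY Code) AS m
          ON m.Code = e.Code AND m.MaxTime = e.TimeDate
        WHERE e.Event = 8
          AND e.TimeDate BETWEEN @Start AND @Finish;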

    Read the article

  • Map inheritance from generic class in Linq To SQL

    - by Ksenia Mukhortova
    Hi everyone, I'm trying to map my inheritance hierarchy to DB using Linq to SQL: Inheritance is like this, classes are POCO, without any LINQ to SQL attributes: public interface IStage { ... } public abstract class SimpleStage<T> : IStage where T : Process { ... } public class ConcreteStage : SimpleStage<ConcreteProcess> { ... } Here is the mapping: <Database Name="NNN" xmlns="http://schemas.microsoft.com/linqtosql/mapping/2007"> <Table Name="dbo.Stage" Member="Stage"> <Type Name="BusinessLogic.Domain.IStage"> <Column Name="ID" Member="ID" DbType="Int NOT NULL IDENTITY" IsPrimaryKey="true" IsDbGenerated="true" AutoSync="OnInsert" /> <Column Name="StageType" Member="StageType" IsDiscriminator="true" /> <Type Name="BusinessLogic.Domain.SimpleStage" IsInheritanceDefault="true"> <Type Name="BusinessLogic.Domain.ConcreteStage" IsInheritanceDefault="true" InheritanceCode="1"/> </Type> </Type> </Table> </Database> In the runtime I get error: System.InvalidOperationException was unhandled Message="Mapping Problem: Cannot find runtime type for type mapping 'BusinessLogic.Domain.SimpleStage'." Neither specifying SimpleStage, nor SimpleStage<T> in mapping file helps - runtime keeps producing different types of errors. DC is created like this: StreamReader sr = new StreamReader(@"MappingFile.map"); XmlMappingSource mapping = XmlMappingSource.FromStream(sr.BaseStream); DataContext dc = new DataContext(@"connection string", mapping); If Linq to SQL doesn't support this, could you, please, advise some other ORM, which does. Thanks in advance, Regards! Ksenia

    Read the article

  • Create table and call it from sql

    - by user1770816
    I have a PL/SQL function which creates a new temporary table. For creating the table I use execute immediate. When I run my function in oracle sql developer everything is ok; the function creates the temp table without errors. But when U use SQL: Select function_name from table_name I get an exceptions: ORA-14552: cannot perform a DDL, commit or rollback inside a query or DML ORA-06512: at "SYSTEM.GET_USERS", line 10 14552. 00000 - "cannot perform a DDL, commit or rollback inside a query or DML " *Cause: DDL operations like creation tables, views etc. and transaction control statements such as commit/rollback cannot be performed inside a query or a DML statement. Update Sorry, write from tablet PC and have problems with format text. My function: CREATE OR REPLACE FUNCTION GET_USERS ( USERID IN VARCHAR2 ) RETURN VARCHAR2 AS request VARCHAR2(520) := 'CREATE GLOBAL TEMPORARY TABLE '; BEGIN request := request || 'temp_table_' || userid || '(user_name varchar2(50), user_id varchar2(20), is_administrator varchar2(5)') || ' ON COMMIT PRESERVE ROWS'; EXECUTE IMMEDIATE (request); RETURN 'true'; END GET_USERS;

    Read the article

  • C# SQL Data Adapter Fill on existing typed Dataset

    - by René
    I have an option to choose between local based data storing (xml file) or SQL Server based. I already created a long time ago a typed dataset for my application to save data local in the xml file. Now, I have a bool that changes between Server based version and local version. If true my application get the data from the SQL Server. I'm not sure but It seems that Sql Adapter's Fill Method can't fill the Data in my existing schema SqlCommand cmd = new SqlCommand("Select * FROM dbo.Categories WHERE CatUserId = 1", _connection); cmd.CommandType = CommandType.Text; _sqlAdapter = new SqlDataAdapter(cmd); _sqlAdapter.TableMappings.Add("Categories", "dbo.Categories"); _sqlAdapter.Fill(Program.Dataset); This should fill my data from dbo.Categories to Categories (in my local, typed dataset). but it doesn't. It creates a new table with the name "Table". It looks like it can't handle the existing schema. I can't figure it out. Where is the problem? btw. of course the database request I do isn't very useful that way. It's just a simplified version for testing...

    Read the article

  • SQL Scenario of allocating ids to user

    - by Enjoy coding
    Hi, I have an sql scenario as follows which I have been trying to improve. There is a table 'Returns' which is having ids of the returned goods against a shop for an item. Its structure is as below. Returns ------------------------- Return ID | Shop | Item ------------------------- 1 Shop1 Item1 2 Shop1 Item1 3 Shop1 Item1 4 Shop1 Item1 5 Shop1 Item1 There is one more table Supplier with Shop, supplier and Item as shown below. Supplier --------------------------------- Supplier | Shop | Item | Volume --------------------------------- supp1 Shop1 Item1 20% supp2 Shop1 Item1 80% Now as you see supp1 is supplying 20 % of total item1 volume and supp2 is supplying 80% of Item1 to shop1. And there were 5 return of items against the same Item1 for same Shop1. Now I need to allocate any four return IDs to Supp1 and remaining one return Id to supp2. This allocation of numbers is based on the ratio of the supplied volume percentage of the supplier. This allocation varies depending on the ratio of volume of supplied items. Now I have tried a method of using RANKs as shown below by use of temp tables. temp table 1 will have Shop, Return Id, Item, Total count of return IDs and Rank of the return id. temp table 2 will have shop, Supplier, Item and his proportion and rank of proportion. Now I am facing the difficulty in allocating top return ids to top supplier as illustrated above. As SQL doesnt have loops how can I achieve this. I have been tying several ways of doing this. Please advice. My environment is Teradata (ANSI SQL is enough). Thanks in advance.

    Read the article

  • SQL - Query range between two dates (NON-VBA)

    - by Mohgeroth
    I see various topics on this around stack overflow but none that fit the contect of MS-Access... Given a starting date and an ending date, is there a way through SQL to return records for each given month within the time frame? EG: Between #1/1/2010# and #12/31/2010# results #1/4/2010# #1/11/2010# ..... #12/27/2010# Restrictions MS-Access 2003 :No Case/Loops inside the SQL (IIF statements are good) This is a view only, NO VBA will be used since the data will not be tampered with. Disconnected recordset is my last option. I would prefer to find out theres some way to call your customized functions in the SQL to help return these values... some class stored on a global scope while you iterate through this date range maybe... Is this possible? I see many no's, but if there was a way to pass a value into a function I could find a way to make this work. Sad that I don't have a way to simulate a stored procedure without using a d/c recordset, at least that I know of... any experts out there know a way?

    Read the article
