Search Results

Search found 855 results on 35 pages for 'mssql'.

Page 10/35

  • SQL tables using VARCHAR with UTF8 (with respect to multi-byte character length)

    - by Elius
    As with Oracle's VARCHAR(60 CHAR), I would like to declare a varchar column whose length is measured in characters rather than bytes, so it depends on the inserted characters. For example: create table X (text varchar(3)) insert into X (text) VALUES ('äöü') should succeed (with UTF8 as the default charset of the database). On DB2 I get this error: DB2 SQL Error: SQLCODE=-302, SQLSTATE=22001 (Character data, right truncation occurred; for example, an update or insert value is a string that is too long for the column, or a datetime value cannot be assigned to a host variable, because it is too small.) I'm looking for solutions for DB2, MsSql, MySql, and Hypersonic.
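
    For the MsSql part, a minimal hedged sketch: SQL Server's NVARCHAR measures its length in UTF-16 code units rather than bytes, so the three-character umlaut string from the question fits in NVARCHAR(3). The table and column names are taken from the example above.

      -- SQL Server sketch: NVARCHAR length counts UTF-16 code units, not bytes
      CREATE TABLE X (text NVARCHAR(3));
      INSERT INTO X (text) VALUES (N'äöü');  -- succeeds: three characters, three code units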

    Read the article

  • MSSQL + Copy data from one server database table to another

    - by lucky
    Hello all, I don't know how to copy table data from one server/database/table to another server/database/table. Within the same server, across different databases, I have used the following: SELECT * INTO DB1..TBL1 FROM DB2..TBL1 (to copy the table structure and data) INSERT INTO DB1..TBL1 (F1, F2) SELECT F1, F2 FROM DB2..TBL1 (to copy only the data) My question now is how to copy the data from SERVER1 -- DB1 -- TBL1 to SERVER2 -- DB2 -- TBL2. Thanks in advance!
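
    A hedged sketch of one common approach: register the second server as a linked server and use a four-part name. Server, database and column names are taken from the question; login mapping is assumed to be configured separately (for example via sp_addlinkedsrvlogin).

      -- Run on SERVER1: register SERVER2 as a linked server (one-time setup)
      EXEC sp_addlinkedserver @server = N'SERVER2', @srvproduct = N'SQL Server';

      -- Copy the data across servers with a four-part name
      INSERT INTO SERVER2.DB2.dbo.TBL2 (F1, F2)
      SELECT F1, F2
      FROM DB1.dbo.TBL1;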

    Read the article

  • How to efficiently use LOCK_ESCALATION in MSSQL 2008

    - by Avias
    I'm currently having trouble with frequent deadlocks on a specific user table in MS SQL 2008. Here are some facts about this particular table: it has a large number of rows (1 to 2 million); all the indexes used on this table have only the "use row lock" option ticked; rows are frequently updated by multiple transactions but are unique (probably a thousand or more update statements are executed against different unique rows every hour); the table does not use partitions. Upon checking the table in sys.tables, I found that lock_escalation is set to TABLE. I'm very tempted to set lock_escalation for this table to DISABLE, but I'm not really sure what side effects this would incur. From what I understand, DISABLE will prevent locks escalating to the TABLE level, which, combined with the row-lock settings of the indexes, should theoretically minimize the deadlocks I am encountering. From what I have read in "Determining threshold for lock escalation", it seems that locking automatically escalates when a single transaction fetches 5000 rows. What does a single transaction mean in this sense: a single session/connection getting 5000 rows through individual update/select statements, or a single update/select statement that fetches 5000 or more rows? Any insight is appreciated. BTW, n00b DBA here. Thanks
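
    For reference, a hedged sketch of the SQL Server 2008 syntax under discussion; the table name is a placeholder.

      -- Stop lock escalation for this table (SQL Server 2008+)
      ALTER TABLE dbo.MyUserTable SET (LOCK_ESCALATION = DISABLE);

      -- Verify the current setting
      SELECT name, lock_escalation_desc
      FROM sys.tables
      WHERE name = 'MyUserTable';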

    Read the article

  • Centralizing / Abstracting MSSQL Data from Multiple Tables / Databases

    - by davemackey
    If one has a number of databases (due to separate application front-ends) that together provide a complete picture - for example a CRM, accounting, and product database - what methods are available to centralize/abstract this data for easy reporting? Essentially, I'm wondering if there is a way to automatically pull data from multiple databases into a central repository that is continuously updated from the three databases and which can be used for reporting. I'm also open to alternative best-practice suggestions.
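
    As a hedged illustration of the simplest variant (all three databases on the same SQL Server instance), a reporting view can join across them with three-part names; every database, table and column name below is invented for the example. For separate servers or heavier reporting loads, a scheduled load into a dedicated reporting database is the usual next step.

      -- Hypothetical reporting view spanning three databases on one instance
      CREATE VIEW dbo.vw_CustomerOverview AS
      SELECT c.CustomerID, c.Name, a.Balance, ISNULL(p.OrderCount, 0) AS OrderCount
      FROM CRM.dbo.Customers AS c
      LEFT JOIN Accounting.dbo.AccountSummary AS a ON a.CustomerID = c.CustomerID
      LEFT JOIN (SELECT CustomerID, COUNT(*) AS OrderCount
                 FROM Products.dbo.Orders
                 GROUP BY CustomerID) AS p ON p.CustomerID = c.CustomerID;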

    Read the article

  • Hibernate and MSSQL inner join row count

    - by ez2sarang
    I am struggling with Hibernate Criteria. My aim is to generate the following query using Hibernate Criteria:

      select count(*) as y0_ from PInterface this_
      inner join Product product2_ on this_.product_id=product2_.id
      where this_.product_interface_type_id=?

    Here is my code:

      @Entity
      @Table(name = "PInterface")
      public class PInterface {
          @Id @GeneratedValue
          @Column(name = "id", nullable = false, insertable = false, updatable = false)
          private int id;

          @Column(name = "product_id")
          private int productId;

          @Column(name = "product_interface_type_id")
          private int type;

          @ManyToOne(optional = false)
          @JoinColumn(name = "product_id", referencedColumnName = "id", insertable = false, updatable = false)
          private Product product;
      }

      @Entity
      @Table(name = "Product")
      public class Product {
          @Id @GeneratedValue
          @Column(name = "id", nullable = false, insertable = false, updatable = false)
          private int id;

          private String name;
      }

      // Criteria is:
      Object criteria = sessionFactory.getCurrentSession()
          .createCriteria(PInterface.class)
          .add(Restrictions.eq("type", 1))
          .setProjection(Projections.rowCount())
          .uniqueResult();

    However, the SQL it actually generates is:

      select count(*) as y0_ from PInterface this_ where this_.product_interface_type_id=?

    Where is the inner join? Thank you for your help!

    Read the article

  • TSQL - MSSQL 2008: add a column and update it in the same stored procedure

    - by TortTupper
    If I have a stored procedure, say: create procedure w AS ALTER TABLE t ADD x char(1) UPDATE t set x = 1 Even when it lets me create that stored procedure (it does if I create it while x already exists), when it runs there is an error on the UPDATE statement because column x doesn't exist. What's the conventional way to deal with this? It must come up all the time. I can work around it by putting the UPDATE inside EXEC; is there another/better way? Thanks
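
    A hedged sketch of the EXEC workaround mentioned above: the procedure body is compiled as one batch, so a plain UPDATE referencing the new column fails before the ALTER TABLE has run, whereas dynamic SQL is only parsed when it executes. Object names come from the question.

      CREATE PROCEDURE w
      AS
      BEGIN
          IF COL_LENGTH('t', 'x') IS NULL
              ALTER TABLE t ADD x char(1);

          -- Deferred: parsed only at execution time, when column x already exists
          EXEC sp_executesql N'UPDATE t SET x = ''1''';
      END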

    Read the article

  • Unit test with live data

    - by Kurresmack
    Hey, I have googled this a little and didn't really find the answer I needed. I am working on a web page in C# with MSSQL and LINQ for a customer. I want the users to be able to send messages to each other, so I unit test this with live data. The problem is that I now depend on having at least two users whose IDs I know. Furthermore, I have to clean up after myself. This leads to rather large unit tests that test a lot in one test. Let's say I would like to update a user: that would mean I would have to create the user, update it, and then delete it. That is a lot of assertions in one unit test, and if it fails during the update I have to delete the user manually. If I did it any other way, without live data, I would not be able to know for sure that the data was present in the database after updating, etc. What is the proper way to do this without having a test that tests a lot of functionality by itself?
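
    A hedged sketch of one common isolation trick, shown in plain T-SQL (table and column names are invented): do all of a test's data work inside a transaction and roll it back, so no manual cleanup is needed even when an assertion fails part-way through. In .NET the same idea is usually expressed by wrapping each test in a TransactionScope that is disposed without being completed.

      BEGIN TRANSACTION;

      -- Arrange: throwaway users that exist only for this test
      INSERT INTO Users (UserName) VALUES ('test_sender');
      INSERT INTO Users (UserName) VALUES ('test_receiver');

      -- Act: the behaviour under test, e.g. sending a message
      INSERT INTO Messages (FromUser, ToUser, Body)
      SELECT s.UserID, r.UserID, 'hello'
      FROM Users AS s
      CROSS JOIN Users AS r
      WHERE s.UserName = 'test_sender' AND r.UserName = 'test_receiver';

      -- Assert: the row is visible inside the open transaction
      SELECT COUNT(*) AS MessageCount FROM Messages WHERE Body = 'hello';

      -- Undo everything
      ROLLBACK TRANSACTION;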

    Read the article

  • Unit test insert/update/delete

    - by Kurresmack
    Hey, I have googled this a little and didn't really find the answer I needed. I am working on a web page in C# with MSSQL and LINQ for a customer. I want the users to be able to send messages to each other, so I unit test this with data that actually goes into the database. The problem is that I now depend on having at least two users whose IDs I know. Furthermore, I have to clean up after myself. This leads to rather large unit tests that test a lot in one test. Let's say I would like to update a user: that would mean I would have to create the user, update it, and then delete it. That is a lot of assertions in one unit test, and if it fails during the update I have to delete the user manually. If I did it any other way, without saving the data to the DB, I would not be able to know for sure that the data was present in the database after updating, etc. What is the proper way to do this without having a test that tests a lot of functionality in one test?

    Read the article

  • Should a webserver in the DMZ be allowed to access MSSQL in the LAN?

    - by Allen
    This should be a very basic question, but I tried to research it and couldn't find a solid answer. Say you have a web server in the DMZ and an MSSQL server in the LAN. My opinion, and what I've always assumed to be correct, is that the web server in the DMZ should be able to access the MSSQL server in the LAN (maybe you'd have to open a port in the firewall, which would be fine in my view). Our networking guys are now telling us that we can't have any access to the MSSQL server in the LAN from the DMZ. They say that anything in the DMZ should only be accessible FROM the LAN (and the web), and that the DMZ should not have access TO the LAN, just as the web does not have access to the LAN. So my question is: who is right? Should the DMZ have access to/from the LAN, or should access to the LAN from the DMZ be strictly forbidden? All this assumes a typical DMZ configuration.

    Read the article

  • How can we recover/restore lost/overwritten data in our MSSQL 2008 table?

    - by TeTe
    I am in serious trouble and I am seeking professional advice here. We are using MSSQL Server 2008. We removed a primary key and replaced existing data with new data, which resulted in the loss of critical business data in its child tables on the MSSQL server. It was entirely human error; we did not have a disk failure. 1) The last backup file is a month old, which makes it useless. 2) We created maintenance plans to back up our database at 12 AM every day, but those files are nowhere to be found. 3) A friend of mine said we can recover from the transaction logs, but when I go to Tasks > Restore, the Transaction Log option is dimmed/disabled. 4) I checked Management > Maintenance Plans and can't find any restore point there; it seems our maintenance plan hasn't been working. Is there any third-party tool to recover lost/overwritten data from an MSSQL table? Thanks a lot.
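
    For context, a hedged sketch of the standard point-in-time recovery path. It only works if the database is in the full recovery model and an earlier full backup plus an unbroken log backup chain exist (which is exactly what appears to be missing here); all file names and the STOPAT time are placeholders.

      -- 1. Back up the tail of the log before doing anything else
      BACKUP LOG mydb TO DISK = 'D:\Backups\mydb_tail.trn' WITH NORECOVERY;

      -- 2. Restore the last full backup without recovering
      RESTORE DATABASE mydb FROM DISK = 'D:\Backups\mydb_full.bak' WITH NORECOVERY;

      -- 3. Roll the log forward to just before the accidental change
      RESTORE LOG mydb FROM DISK = 'D:\Backups\mydb_tail.trn'
          WITH STOPAT = '2010-04-01T09:30:00', RECOVERY;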

    Read the article

  • How can I limit the number of connections to an MSSQL server from my Tomcat-deployed Java application?

    - by CJ
    Hi, I have an application that is deployed on Tomcat on server A and sends queries to a huge variety of MSSQL databases on server B. I am concerned that my application could overload this MSSQL database server, and I would like some way of preventing it from making requests to connect to any database on that server if some arbitrary number of connections already exist and are unclosed. I am looking at using connection pooling, but am under the impression that this will only pool connections to a specific database on the MSSQL server; I want to control the total of these combined connections across many different databases (incidentally, I can only find out the names of the individual DBs dynamically, as they change from day to day). Will connection pooling take care of this for me, or am I looking at this from the wrong perspective? I have no access to the configuration of the MSSQL server. Links to tutorials or working examples of your suggested solution are most welcome! Thanks, Caroline

    Read the article

  • Best Ruby ORM for wrapping a legacy MSSQL database?

    - by Technocrat
    Hi. I found this answer and it sounds like almost exactly what I'm doing. I have heard mixed answers about whether or not DataMapper can support MSSQL through DataObjects. Basically, we have an app that uses a consistently structured database (consistently named tables, etc.) in MSSQL. We're making all kinds of tools and other things that have to interact with it, some of them remotely, so I decided that we need to create some common, simple access point for read/write operations on the MSSQL app, since its API is all C# and other things I despise. Now my question is whether anyone has examples or projects they know of where a Ruby ORM essentially creates models for another application's legacy database by defining the conventions of each model's primary keys, foreign keys, table names, etc. Sequel is the only ORM I've used with MSSQL, but never to do anything quite like this. Any suggestions?

    Read the article

  • SQL query slow in .NET application but instantaneous in SQL Server Management Studio

    - by user203882
    Here is the SQL:

      SELECT tal.TrustAccountValue
      FROM TrustAccountLog AS tal
      INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID
      INNER JOIN Users usr ON usr.UserID = ta.UserID
      WHERE usr.UserID = 70402
        AND ta.TrustAccountID = 117249
        AND tal.trustaccountlogid = (
            SELECT MAX(tal.trustaccountlogid)
            FROM TrustAccountLog AS tal
            INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID
            INNER JOIN Users usr ON usr.UserID = ta.UserID
            WHERE usr.UserID = 70402
              AND ta.TrustAccountID = 117249
              AND tal.TrustAccountLogDate < '3/1/2010 12:00:00 AM'
        )

    Basically there is a Users table, a TrustAccount table and a TrustAccountLog table. Users: contains users and their details. TrustAccount: a user can have multiple TrustAccounts. TrustAccountLog: contains an audit of all TrustAccount "movements"; a TrustAccount is associated with multiple TrustAccountLog entries. Now this query executes in milliseconds inside SQL Server Management Studio, but for some strange reason it takes forever in my C# app and even times out (120 s) sometimes. Here is the code in a nutshell. It gets called multiple times in a loop and the statement gets prepared.

      cmd.CommandTimeout = Configuration.DBTimeout;
      cmd.CommandText = "SELECT tal.TrustAccountValue FROM TrustAccountLog AS tal INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID INNER JOIN Users usr ON usr.UserID = ta.UserID WHERE usr.UserID = @UserID1 AND ta.TrustAccountID = @TrustAccountID1 AND tal.trustaccountlogid = (SELECT MAX (tal.trustaccountlogid) FROM TrustAccountLog AS tal INNER JOIN TrustAccount ta ON ta.TrustAccountID = tal.TrustAccountID INNER JOIN Users usr ON usr.UserID = ta.UserID WHERE usr.UserID = @UserID2 AND ta.TrustAccountID = @TrustAccountID2 AND tal.TrustAccountLogDate < @TrustAccountLogDate2))";
      cmd.Parameters.Add("@TrustAccountID1", SqlDbType.Int).Value = trustAccountId;
      cmd.Parameters.Add("@UserID1", SqlDbType.Int).Value = userId;
      cmd.Parameters.Add("@TrustAccountID2", SqlDbType.Int).Value = trustAccountId;
      cmd.Parameters.Add("@UserID2", SqlDbType.Int).Value = userId;
      cmd.Parameters.Add("@TrustAccountLogDate2", SqlDbType.DateTime).Value = TrustAccountLogDate;

      // And then...
      reader = cmd.ExecuteReader();
      if (reader.Read())
      {
          double value = (double)reader.GetValue(0);
          if (System.Double.IsNaN(value))
              return 0;
          else
              return value;
      }
      else
          return 0;
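
    The usual suspects for "fast in SSMS, slow from the app" are different SET options between the two sessions (SSMS turns ARITHABORT on by default, ADO.NET does not) and parameter sniffing, either of which can leave the application running a different cached plan. A hedged way to check the first is to compare the session options; for the second, temporarily appending OPTION (RECOMPILE) to the parameterized statement shows whether a value-specific plan fixes it.

      -- Compare the SET options of the SSMS session and the application's connection;
      -- a mismatch (typically arithabort) means they use separately cached plans.
      SELECT session_id, program_name, arithabort, ansi_nulls, quoted_identifier
      FROM sys.dm_exec_sessions
      WHERE program_name LIKE '%Management Studio%'
         OR program_name LIKE '.Net SqlClient%';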

    Read the article

  • LINQ2SQL and MS SQL 2000

    - by artvolk
    Good day! I have used LINQ2SQL with MS SQL 2005, but now I need to use it with MS SQL 2000. I have found only one article on MSDN, which describes Skip() and Take() oddities on MS SQL 2000 (because it lacks ROW_NUMBER(), I suppose), and nothing more. Anyway, does anybody have experience with the LINQ2SQL and MS SQL 2000 combination? P.S. Just wondering: is it possible to model a LINQ2SQL class on a view, rather than a real table?
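
    For reference, a hedged sketch of the kind of paging query SQL Server 2000 forces you into without ROW_NUMBER(), which is roughly what Skip/Take has to emulate there. The Orders table and OrderID column are invented for the example (page 3 with 10 rows per page).

      SELECT *
      FROM (
          SELECT TOP 10 *
          FROM (
              SELECT TOP 30 * FROM Orders ORDER BY OrderID ASC
          ) AS FirstRows
          ORDER BY OrderID DESC
      ) AS PageRows
      ORDER BY OrderID ASC;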

    Read the article

  • Selecting a distinct record: filtering is not working

    - by help_inmssql
    Hello everyone, I am new to SQL and need a query that returns the following records. I have a table with records like this:

      c1  c2     c3                  c4  c5  c6
      1   John   2.3.2010 12:09:54   4   7   99
      2   mike   2.3.2010 13:09:59   8   6   88
      3   ahmad  2.3.2010 13:09:59   1   9   19
      4   Jim    23.3.2010 16:35:14  4   5   99
      5   run    23.3.2010 12:09:54  3   8   12

    I want to fetch only the latest record per day. If two happen at the same time, sort by c1; so for rows 1 and 3 it should fetch row 3:

      3   ahmad  2.3.2010 14:09:59   1   9   19
      4   Jim    23.3.2010 16:35:14  4   5   99

    I have now hit a new problem with this: if I filter the records based on conditions, the last record is missing. I have tried many ways but it still fails. Here update_log is my table:

      SELECT *
      FROM update_log t1
      WHERE (t1.c3) = (
          SELECT MAX(t2.c3)
          FROM update_log t2
          WHERE DATEDIFF(dd, t2.c3, t1.c3) = 0
      )
      AND t1.c3 > '02.03.2010' AND t1.modified_at <= '22.03.2010'
      ORDER BY t1.c3 ASC

    But I am not able to retrieve the record

      4   Jim    23.3.2010 16:35:14  4   5   99

    I don't know why this query results in only

      3   ahmad  2.3.2010 14:09:59   1   9   19

    The format of column c3 is datetime. I am pumping the data into the column using $date = date("d.m.Y H:i", time()); (a simple fetch of today's date). Another query that I tried for the same purpose:

      select *
      from (
          select convert(varchar(10), c3, 104) as date,
                 max(c3) as max_date,
                 max(c1) as Nr
          from update_log
          group by convert(varchar(10), c3, 104)
      ) as t2
      inner join update_log as t1
          on (t2.max_date = t1.c3 and convert(varchar(10), c3, 104) = date and t1.[c1] = Nr)
      WHERE t1.c3 >= '02.03.2010' and t1.c3 <= '16.04.2010'

    I even tried this way; the same problem, the last record is not returned.
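
    A hedged sketch of an alternative that avoids the correlated MAX entirely, assuming SQL Server 2005 or later and the update_log table from the question: number the rows within each calendar day and keep the first row per day, breaking time ties by the highest c1.

      SELECT c1, c2, c3, c4, c5, c6
      FROM (
          SELECT t.*,
                 ROW_NUMBER() OVER (PARTITION BY CONVERT(varchar(10), t.c3, 104)
                                    ORDER BY t.c3 DESC, t.c1 DESC) AS rn
          FROM update_log AS t
          WHERE t.c3 >= '20100302' AND t.c3 < '20100417'   -- unambiguous yyyymmdd literals
      ) AS x
      WHERE rn = 1
      ORDER BY c3;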

    Read the article

  • MS SQL Error: Primary filegroup is full

    - by aximili
    I have a very large table in my database and I am starting to get this error: Could not allocate a new page for database 'mydatabase' because of insufficient disk space in filegroup 'PRIMARY'. Create the necessary space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup. How do you fix this error? I don't understand the suggestions it gives.
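
    A hedged sketch of the two remedies the message itself suggests, with placeholder file names, paths and sizes; both assume there is free disk space somewhere for the PRIMARY filegroup to use.

      -- Option 1: add another data file to the PRIMARY filegroup
      ALTER DATABASE mydatabase
      ADD FILE (
          NAME = mydatabase_data2,
          FILENAME = 'D:\SQLData\mydatabase_data2.ndf',
          SIZE = 1024MB,
          FILEGROWTH = 256MB
      ) TO FILEGROUP [PRIMARY];

      -- Option 2: let an existing file grow automatically (logical name is a placeholder)
      ALTER DATABASE mydatabase
      MODIFY FILE (NAME = mydatabase_data, FILEGROWTH = 256MB);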

    Read the article

  • Count how many times an ID appears in another table and return the count with each row

    - by Tyler
    SELECT Boats.id, Boats.date, Boats.section, Boats.raft, river_company.company, river_section.section AS river
    FROM Boats
    INNER JOIN river_company ON Boats.raft = river_company.id
    INNER JOIN river_section ON Boats.section = river_section.id
    ORDER BY Boats.date DESC, river, river_company.company

    This returns everything I need. But how would I add a [Photos] table, count how many times Boats.id occurs in it, and add that count to the returned rows? So if there are 5 photos for boat #17, I want the record for boat #17 to say PhotoCount = 5.
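
    A hedged sketch of one way to add the count, assuming the Photos table has a boat_id column referencing Boats.id (that column name is a guess):

      SELECT Boats.id, Boats.date, Boats.section, Boats.raft,
             river_company.company, river_section.section AS river,
             ISNULL(p.PhotoCount, 0) AS PhotoCount
      FROM Boats
      INNER JOIN river_company ON Boats.raft = river_company.id
      INNER JOIN river_section ON Boats.section = river_section.id
      LEFT JOIN (
          SELECT boat_id, COUNT(*) AS PhotoCount
          FROM Photos
          GROUP BY boat_id
      ) AS p ON p.boat_id = Boats.id
      ORDER BY Boats.date DESC, river, river_company.company;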

    Read the article

  • SQL query: how to determine "seen during hour N" given two DateTime timestamps?

    - by efess
    Hello all. I'm writing a statistics application based on a SQLite database. There is a table which records when users log in and log out (SessionStart, SessionEnd datetimes). What I'm looking for is a query that can show during which hours users have been logged in, in sort of a line-graph way: between the hours of 12:00 and 1:00 AM there were 60 users logged in (at any point), between the hours of 1:00 and 2:00 AM there were 54 users logged in, and so on. And I want to be able to run a SUM over this, which is why I can't bring the records into .NET and iterate through them that way. I've come up with a rather primitive approach, a subquery for each hour of the day, but it has proved to be very slow, and I need to be able to calculate this for a couple hundred thousand records in a split second.

      SELECT
          case when (strftime('%s', datetime(date(sessionstart), '+0 hours')) > strftime('%s', sessionstart)
                     AND strftime('%s', datetime(date(sessionstart), '+0 hours')) < strftime('%s', sessionend))
                 OR (strftime('%s', datetime(date(sessionstart), '+1 hours')) > strftime('%s', sessionstart)
                     AND strftime('%s', datetime(date(sessionstart), '+1 hours')) < strftime('%s', sessionend))
                 OR (strftime('%s', datetime(date(sessionstart), '+0 hours')) < strftime('%s', sessionstart)
                     AND strftime('%s', datetime(date(sessionstart), '+1 hours')) > strftime('%s', sessionend))
               then 1 else 0 end as hour_zero,
          ... hour_one,
          ... hour_two,
          ........ hour_twentythree
      FROM UserSession

    I'm wondering what better way there is to determine whether two datetimes overlap a particular hour (best case, how many hour boundaries the session crossed if the user was logged in across multiple days, but that's not necessary). The only other idea I had is an "hour" table specific to this, tallying up the hours the user has been seen at runtime, but I feel like that is more of a hack than the SQL above. Any help would be greatly appreciated!
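
    A hedged sketch of the usual set-based alternative: keep a small helper table of hour buckets and count the sessions whose interval overlaps each bucket. The Hours table and its columns are invented for the example; the overlap test itself is plain SQL that works in SQLite as well.

      -- Hours(hour_start, hour_end): one row per hour bucket being reported on
      SELECT h.hour_start,
             COUNT(*) AS users_seen
      FROM Hours AS h
      JOIN UserSession AS s
        ON s.SessionStart < h.hour_end
       AND s.SessionEnd   > h.hour_start   -- interval overlap test
      GROUP BY h.hour_start
      ORDER BY h.hour_start;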

    Read the article

  • SQL Get UID when Group by

    - by Quandary
    I do a select from the table [V_RPT_BelegungKostenstelleDetail] WHERE SO_UID = '7C7035C8-56DD-4A44-93CC-F16FD66280A3' AND GB_UID = '4FF1B0EE-A5DD-4699-94B7-760922666CE2' AND GS_UID = '1188759A-54E1-4323-8BF2-85E71B3C796E' AND RM_UID = '088C3559-6E6E-468A-9554-6740840FCBA1' AND NA_UID = '96A2A8DB-8C83-4C60-9060-F0F55719AF5C' GROUP BY KST_UID. How can I also get SO_UID? It is the same in every row, but I get an error when I try to return SO_UID with the grouped values. SO_UID is not necessarily given as '7C7035C8-56DD-4A44-93CC-F16FD66280A3' as it is here, so I can't just add it manually as a string.
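
    A hedged sketch of the simplest fix: because SO_UID is pinned to a single value by the WHERE clause, adding it to both the select list and the GROUP BY does not change the grouping, and it avoids the "not contained in an aggregate or the GROUP BY clause" error.

      SELECT KST_UID, SO_UID
      FROM V_RPT_BelegungKostenstelleDetail
      WHERE SO_UID = '7C7035C8-56DD-4A44-93CC-F16FD66280A3'
        AND GB_UID = '4FF1B0EE-A5DD-4699-94B7-760922666CE2'
        AND GS_UID = '1188759A-54E1-4323-8BF2-85E71B3C796E'
        AND RM_UID = '088C3559-6E6E-468A-9554-6740840FCBA1'
        AND NA_UID = '96A2A8DB-8C83-4C60-9060-F0F55719AF5C'
      GROUP BY KST_UID, SO_UID;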

    Read the article

  • How can I script the deployment of a Visual Studio database project?

    - by Matt
    How can I script the deployment of a Visual Studio database project? I have a DB project in Visual Studio and I would like to use it to deploy to remote machines via a script. I notice that when I 'deploy' from Visual Studio, it generates a .sql file. I intercepted this file and tried running it from the command line with osql.exe, but I didn't have any success. Should this work? Is there a better way to deploy a database programmatically from a database project, maybe by referencing it in another project and calling some method to deploy it?

    Read the article

  • Best way to randomly select columns from random rows of SQL results.

    - by LesterDove
    A search of SO yields many results describing how to select random rows of data from a database table. My requirement is a bit different, though, in that I'd like to select individual columns from across random rows in the most efficient/random/interesting way possible. To better illustrate: I have a large Customers table, and from that I'd like to generate a bunch of fictitious demo Customer records that aren't real people. I'm thinking of just querying randomly from the Customers table and then randomly pairing FirstNames with LastNames, Addresses, Cities, States, etc. So if this is my real Customer data (simplified):

      FirstName  LastName  State
      ==========================
      Sally      Simpson   SD
      Will       Warren    WI
      Mike       Malone    MN
      Kelly      Kline     KS

    Then I'd generate several records that look like this:

      FirstName  LastName  State
      ==========================
      Sally      Warren    MN
      Kelly      Malone    SD

    Etc. My initial approach works, but it lacks the elegance that I'm hoping the final answer will provide. (I'm particularly unhappy with the repetitiveness of the subqueries, and the fact that this solution requires a known/fixed number of fields and therefore isn't reusable.)

      SELECT
          FirstName = (SELECT TOP 1 FirstName FROM Customer ORDER BY newid()),
          LastName  = (SELECT TOP 1 LastName FROM Customer ORDER BY newid()),
          State     = (SELECT TOP 1 State FROM Customer ORDER BY newid())

    Thanks!
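
    A hedged sketch of one way to shuffle entire columns against each other in a single statement (SQL Server 2005+), so each source row contributes exactly one value per column and there is no per-field subquery; the TOP 100 is just an example row count.

      WITH f AS (SELECT FirstName, ROW_NUMBER() OVER (ORDER BY NEWID()) AS rn FROM Customer),
           l AS (SELECT LastName,  ROW_NUMBER() OVER (ORDER BY NEWID()) AS rn FROM Customer),
           s AS (SELECT State,     ROW_NUMBER() OVER (ORDER BY NEWID()) AS rn FROM Customer)
      SELECT TOP 100 f.FirstName, l.LastName, s.State
      FROM f
      JOIN l ON l.rn = f.rn
      JOIN s ON s.rn = f.rn;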

    Read the article

  • Avoiding Nested Queries

    - by Midhat
    How important is it to avoid nested queries? I have always been taught to avoid them like the plague, but they are the most natural thing to me. When I am designing a query, the first thing I write is a nested query. Then I convert it to joins, which sometimes takes a lot of time to get right and rarely gives a big performance improvement (though sometimes it does). So are they really so bad? Is there a way to use nested queries without temp tables and filesort?
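
    For concreteness, a hedged sketch of the kind of rewrite in question, using invented Customers/Orders tables: the nested IN form and an equivalent join form that many optimizers treat the same way (the DISTINCT compensates for customers with several qualifying orders).

      -- Nested form
      SELECT c.Name
      FROM Customers AS c
      WHERE c.CustomerID IN (SELECT o.CustomerID FROM Orders AS o WHERE o.Total > 100);

      -- Join form
      SELECT DISTINCT c.Name
      FROM Customers AS c
      JOIN Orders AS o ON o.CustomerID = c.CustomerID
      WHERE o.Total > 100;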

    Read the article

  • SQL SELECT across two tables

    - by Brett Spurrier
    Hi there, I am a little confused as to how to approach this SQL query. I have two tables (with an equal number of records), and I would like to return a column that is the row-by-row division between the two. In other words, here is my not-working-correctly query: SELECT( (SELECT v FROM Table1) / (SELECT DotProduct FROM Table2) ); How would I do this? All I want is a column where each row equals the same row of Table1 divided by the same row of Table2. The resulting table should have the same number of rows, but I am getting something with a lot more rows than the original two tables. I am at a complete loss. Any advice?
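
    A hedged sketch of the usual fix: the two tables have to be paired row by row on some shared key before dividing, so the example assumes both have a matching id column (if there is no shared key, ROW_NUMBER() over an explicit ordering can manufacture one).

      SELECT t1.id,
             t1.v / NULLIF(t2.DotProduct, 0) AS ratio   -- NULLIF guards against divide-by-zero
      FROM Table1 AS t1
      JOIN Table2 AS t2 ON t2.id = t1.id;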

    Read the article
