Search Results

Search found 36186 results on 1448 pages for 'sql 11'.

Page 62/1448 | < Previous Page | 58 59 60 61 62 63 64 65 66 67 68 69  | Next Page >

  • SQL Server 2008 - Shrinking the Transaction Log - Any way to automate?

    - by Albert
    I went in and checked my transaction log the other day and it was something crazy like 15GB. I ran the following code:

        USE mydb
        GO
        BACKUP LOG mydb WITH TRUNCATE_ONLY
        GO
        DBCC SHRINKFILE(mydb_log, 8)
        GO

    That worked fine and shrank it down to 8MB... but the DB in question is a Log Shipping Publisher, and the log is already back up to some 500MB and growing quickly. Is there any way to automate this log shrinking, outside of creating a custom "Execute T-SQL Statement Task" Maintenance Plan task and hooking it onto my log backup task? If that's the best way then fine... but I was just thinking that SQL Server would have a better way of dealing with this. I thought it was supposed to shrink automatically whenever you took a log backup, but that's not happening (perhaps because of my log shipping, I don't know). Here's my current backup plan:

    Full backups every night
    Transaction log backups once a day, late morning (maybe hook the log shrinking onto this... it doesn't need to be shrunk every day though)

    Or maybe I just run it once a week, after I run a full backup task? What do you all think?
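    A minimal sketch of what that extra "Execute T-SQL Statement Task" step could contain if it is hooked onto the existing log-backup task (the logical file name mydb_log and the 8 MB target come from the question; the weekly schedule is just one of the options mentioned above):

        -- Runs after the scheduled log backup: shrink the log file back to 8 MB.
        USE mydb;
        GO
        DBCC SHRINKFILE(mydb_log, 8);
        GO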

    Read the article

  • MSBuild / PowerShell: Copy SQL Server 2012 database to SQL Azure via BACPAC (for Continuous Integration)

    - by giveme5minutes
    I'm creating a continuous integration MSBuild script which copies a database in on-premise SQL Server 2012 to SQL Azure. Easy, right?

    Methods

    After a fair bit of research I've come across the following methods:

    Use PowerShell to access the DAC library directly, then use the MSBuild PowerShell extension to wrap the script. This would require installing PowerShell 3 and working out how to make the MSBuild PowerShell extension work with it, as apparently MS moved the DAC API to a different namespace in the latest version of the library. PowerShell would give direct access to the API, but may require quite a bit of boilerplate.

    Use the sample DAC Framework Client Side Tools, which requires compiling them myself, as the downloads available from Codeplex only include the Hosted version. It would also require fixing them to use DAC 3.0 classes, as they appear to currently use an earlier version of DAC. I could then call these tools from an <Exec Command="" /> in the MSBuild script. Less boilerplate, and if I hit any bumps in the road I can just make changes to the source.

    Processes

    Using whichever method, the process could be either:

    Export from on-premise SQL Server 2012 to local BACPAC
    Upload BACPAC to blob storage
    Import BACPAC to SQL Azure via Hosted DAC

    Or:

    Export from on-premise SQL Server 2012 to local BACPAC
    Import BACPAC to SQL Azure via Client DAC

    Question

    All of the above seems to be quite a lot of effort for something that seems to be a standard feature... so before I start reinventing the wheel and documenting the results for all to see, is there something really obvious that I've missed here? Is there a pre-written script that MS has released that I have not yet uncovered? There's a command in the GUI of SQL Server Management Studio 2012 that does EXACTLY what I'm trying to do (right-click on local database, click "Tasks", click "Deploy Database to SQL Azure"). Surely if it's a few clicks in the GUI it must be a single command on the command line somewhere??

    Read the article

  • help with t-sql data aggregation

    - by stackoverflowuser
    Based on the following table:

        Area  S1  S2  S3  S4
        --------------------
        A1     5  10  20   0
        A2    11  19  15  20
        A3     0   0   0  20

    I want to generate an output that will give the number of columns not having "0". So the output would be:

        Area  S1  S2  S3  S4  Count
        ---------------------------
        A1     5  10  20   0      3
        A2    11  19  15  20      4
        A3     0   0   0  20      1
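    One way to express that count in T-SQL is a sum of CASE expressions, one per column (the table name dbo.AreaScores is an assumption, since the question doesn't give one):

        -- Count, per row, how many of the four S columns are non-zero.
        SELECT  Area, S1, S2, S3, S4,
                  CASE WHEN S1 <> 0 THEN 1 ELSE 0 END
                + CASE WHEN S2 <> 0 THEN 1 ELSE 0 END
                + CASE WHEN S3 <> 0 THEN 1 ELSE 0 END
                + CASE WHEN S4 <> 0 THEN 1 ELSE 0 END AS [Count]
        FROM    dbo.AreaScores;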

    Read the article

  • C++11: thread_local or array of OpenCL 1.2 cl_kernel objects?

    - by user926918
    I need to run several C++11 threads (GCC 4.7.1) in parallel on the host. Each of them needs to use a device, say a GPU. As per the OpenCL 1.2 spec (p. 357): "All OpenCL API calls are thread-safe except clSetKernelArg. clSetKernelArg is safe to call from any host thread, and is safe to call re-entrantly so long as concurrent calls operate on different cl_kernel objects. However, the behavior of the cl_kernel object is undefined if clSetKernelArg is called from multiple host threads on the same cl_kernel object at the same time." An elegant way would be to use thread_local cl_kernel objects; the other way I can think of is to use an array of these objects such that the i'th thread uses the i'th object. As I have not implemented either of these before, I was wondering whether either approach is good or whether there are better ways of getting this done. TIA, S

    Read the article

  • Upgrading log shipping from 2005 to 2008 or 2008R2

    - by DavidWimbush
    If you're using log shipping you need to be aware of some small print. The general idea is to upgrade the secondary server first and then the primary server, because you can continue to log ship from 2005 to 2008R2. But this won't work if you're keeping your secondary databases in STANDBY mode rather than IN RECOVERY. If you're using native log shipping you'll have some work to do. If you've rolled your own log shipping (ahem) you can convert a STANDBY database to IN RECOVERY like this:

        restore database [dw] with norecovery;

    and then change your restore code to use WITH NORECOVERY instead of WITH STANDBY. (Finally all that aggravation pays off!) You can either upgrade the secondary server in place or rebuild it. A secondary database doesn't actually get upgraded until you recover it, so the log sequence chain is not broken and you can continue shipping from the primary. Just remember that it can take quite some time to upgrade a database, so you need to factor that into the expectations you give people about how long it will take to fail over. For more details, check this out: http://msdn.microsoft.com/en-us/library/cc645954(SQL.105).aspx
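    For a rolled-your-own restore job, the change described above amounts to something like this (the backup file path is a made-up placeholder):

        -- Before: each shipped log is restored WITH STANDBY so the secondary stays readable.
        -- RESTORE LOG [dw] FROM DISK = N'\\backups\dw\dw_0001.trn'
        --     WITH STANDBY = N'\\backups\dw\dw_undo.dat';

        -- After: restore WITH NORECOVERY so the database can be upgraded without breaking the chain.
        RESTORE LOG [dw] FROM DISK = N'\\backups\dw\dw_0001.trn'
            WITH NORECOVERY;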

    Read the article

  • Introducing SSIS Reporting Pack for SQL Server code-named Denali

    - by jamiet
    In recent blog posts I have introduced the new SSIS Catalog that is forthcoming in SQL Server Code-named Denali:

    What's new in SSIS in Denali
    Introduction to SSIS Projects in Denali
    Parameters in SSIS in Denali
    SSIS Server, Catalogs, Environments and Environment Variables in SSIS in Denali

    The SSIS Catalog is responsible for executing SSIS packages and also for capturing the metadata from those executions. However, at the time of writing there is no mechanism provided to view, analyse and drill into that metadata, and that is the reason that I am, in this blog post, introducing a suite of SSIS Catalog reports called the SSIS Reporting Pack, which you can download from my SkyDrive at http://cid-550f681dad532637.office.live.com/self.aspx/Public/SSIS%20Reporting%20Pack/SSISReportingPack%20v0.1.zip. In this first release the SSIS Reporting Pack includes five reports:

    Catalog – A high-level summary of all activity in the Catalog
    Folders – A summary of activity in each Catalog Folder
    Folder – Project-level activity per single Folder
    Executions – A visualisation of all executions per Folder/Project/Package/Environment or subset thereof
    Execution – Information about an individual execution

    Here is a screenshot of the Executions report. Notice that the SSIS Reporting Pack provides a visual overview of all executions in the Catalog. Each execution is represented as a bar on the bar chart, the success or otherwise of each execution is indicated by the colour of the bar, and the execution time is indicated by the bar height. I have recorded a video that gives an overview of the SSIS Reporting Pack, which I have embedded below. If you are having any trouble viewing the video, go see it at http://vimeo.com/17617974. I must stress that this is a very early version of the SSIS Reporting Pack and I am expecting it to change a lot over the coming year. I am very keen to get some feedback about this, specifically: let me know if anything does not work as you expect, and give me your feature requests. The easiest way to get hold of me for now is within the comments section of this blog post. That's all for now. I hope the SSIS Reporting Pack proves useful and I look forward to hearing your feedback. Lastly, that download link again: http://cid-550f681dad532637.office.live.com/self.aspx/Public/SSIS%20Reporting%20Pack/SSISReportingPack%20v0.1.zip. @jamiet

    Read the article

  • SQL Server 2008 R2 Installation and the Phantom of SQL Server 2005 Express

    - by Davide Mauri
    Today I happily started to install SQL Server 2008 R2 on my development machine, which has this software installed:

    Windows Server 2008 R2 Standard
    SQL Server 2008 SP1 CU5
    Visual Studio 2008 SP1
    BOL October 2009
    AdventureWorks2008 Databases SR4
    Visual Studio 2010 RTM

    So, all the basic standard stuff. The SQL Server 2008 R2 installation went smooth 'till somewhere in the middle, where the rule engine checks that software pre-requisites are satisfied before starting to copy files. Here I had this @][@@[?!?! error: "The SQL Server 2005 Express Tools are installed. To continue, remove the SQL Server 2005 Express Tools." Funny enough, I don't have and I've never had SQL Server 2005 Express on my machine. Armed with patience I analyzed the install log here:

        C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\yyyymmdd_hhmmss\Detail.txt

    and I found that the rule "Sql2005SsmsExpressFacet" is the one in charge of this check, and that it looks for the existence of the registry key

        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\90\Tools\ShellSEM (on x86)
        HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Microsoft SQL Server\90\Tools\ShellSEM (on x64)

    In my registry I found that key existing, due to the installation of the uber-cool Red-Gate SQL Search. I removed the registry key and here it is! SQL Server 2008 R2 is installing while I'm writing this post. A note to Microsoft: can you please add more detailed information to the setup when such an error happens? Just saying "you have SQL Server 2005 Express installed" is not enough. Please show us what the rule looks for and why it has failed, directly in the Detailed Report, so that we don't have to spend time looking for the needle in the logs. Thanks! :) PS I did a side-by-side installation with the existing SQL Server 2008 instance.

    Read the article

  • Six in Six - SQL Server 2012 Webinars

    - by JustinL
    We're running six webinars over the next six months covering our experiences with SQL Server 2012 and customer deployments. I'm presenting the first on upgrading to SQL Server 2012 next month, subsequent sessions will be delivered by colleagues: NOVEMBER: SQL Server 2012 Upgrade Approach and considerations. Friday 23rd November 12:00 – 13:00 Present approaches for upgrade testing, managing risk and rollback. The session will include details on minimizing downtime and upgrading from SQL Server 2000, 2005, and 2008 including.... More details and register. DECEMBER: Delivering Mission Critical BI with SQL Server 2012– Friday 14th December 12:00-13:00 Information is the lifeblood of many organisations and the availability of timely, accurate information is critical to strategic decision making. This session covers the features and capabilities… More details and register. JANUARY: Architecting Highly Available solutions with SQL Server 2012 – Friday 18th January 12:00- 13:00 Overview and comparison of the high availability features available within SQL Server 2012. The session considers business requirements for availability and recoverability and presents a number of alternative solution designs to meet… More details and register. FEBRUARY: Private cloud deployments with SQL Server 2012 – Friday 15th February 12:00- 13:00 Cloud based technology provide cost effective scale and flexibility. This session provides an overview of the benefits organisations can realise through private cloud… More details and register. MARCH: Visualising data patterns with SQL Server 2012 – Friday 22nd March 12:00- 13:00 This webinar demonstrates the ease of delivering business insight by exploring information and identifying trends through data visualisation. SQL Server 2012 provides new capability with enhanced performance and … More details and register. APRIL: Architecting Highly Available solutions with SQL Server 2012 – Friday 26th April 12:00- 13:00 Customers are increasingly interested in leveraging the benefits of cloud based solutions to provide scalable and flexible infrastructure to host their applications. This session looks at common design patterns and workloads… More details and register. Justin Langford - Coeo Ltd SQL Server Consultants | SQL Server Remote DBA

    Read the article

  • Oracle SQL Developer: Fetching SQL Statement Result Sets

    - by thatjeffsmith
    Running queries, browsing tables – you are often faced with many thousands, if not millions, of rows. Most people are happy with looking at the first few rows. But occasionally you need to see more. SQL Developer doesn’t show you all records, all at once. Instead, it brings the records down in ‘chunks,’ or as-needed. How It Works There is a preference that tells SQL Developer how many records to get in a single request, or ‘fetch’ of records. The default is 50… So if I run a query that returns MORE than 50 rows: There’s more than 50 records in this resultset, but we have 50 in the grid to start with. We don’t know how many records are in this result set actually. To show the record count here, we actually go physically query the database with a row count type query. All we know is that the query has finished executing, and that there are rows available to go fetch. It tells us when it’s done. As you scroll through the grid, if you get to record 50 and scroll more, we’ll get 50 more records. Or, you can cheat to get to the ‘bottom’ of the result set. You can ask SQL Developer to just to get all the records at once… Once all the records have been fetched, you’ll see this: All rows fetched! A word of caution There’s a reason we have the default set to 50 and not 1000. Bringing back data can get expensive and heavy. We’ve found the best performance to be found in that 50 to 200 record range.

    Read the article

  • LINQ to SQL vs Entity Framework for an app with a future SQL Azure version

    - by Craig L
    I've got a vertical market .NET Framework 1.1 C#/WinForms/SQL Server 2000 application. Currently it uses ADO.NET and Microsoft's SQLHelper for CRUD operations. I've successfully converted it to .NET Framework 4 C#/WinForms/SQL Server 2008. What I'd like to do is also offer my customers the ability to use SQL Azure as backend storage for their data instead of a local/LAN SQL Server. If I know SQL Azure is in my application's future, should I:

    A. Switch to LINQ to SQL
    B. Switch to Entity Framework
    C. Stick with ADO.NET and SQLHelper

    Thanks!

    Read the article

  • Database Firewall

    - by ???02
    [The body of this entry, originally in Japanese, was lost to character-encoding damage. The fragments that survive describe Oracle Database Firewall: it inspects SQL traffic to the database much as a web application firewall inspects HTTP, parsing statements against the SQL grammar (ISO/IEC 9075) and passing or blocking them according to policy; it protects Oracle Database, SQL Server, DB2 and Sybase; and it can be deployed in-line or in monitoring mode via a SPAN (port-mirroring) port, complementing IDS/IPS and WAF protection against SQL injection.] (Oracle Direct)

    Read the article

  • Whether to use UNION or OR in SQL Server Queries

    - by Dinesh Asanka
    Recently I came across an article on DB2 about using UNION instead of OR, so I thought of carrying out some research on SQL Server into which scenarios UNION is optimal in and in which scenarios OR would be best. I will analyze this with a few scenarios using samples taken from the AdventureWorks database Sales.SalesOrderDetail table.

    Scenario 1: Selecting all columns

    So we are going to select all columns, and you have a non-clustered index on the ProductID column.

        --Query 1 : OR
        SELECT * FROM Sales.SalesOrderDetail
        WHERE ProductID = 714 OR ProductID = 709 OR ProductID = 998
           OR ProductID = 875 OR ProductID = 976 OR ProductID = 874

        --Query 2 : UNION
        SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714
        UNION
        SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 709
        UNION
        SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 998
        UNION
        SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 875
        UNION
        SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 976
        UNION
        SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 874

    So Query 1 is using OR and the latter is using UNION. Let us analyze the execution plans for these queries. As expected, Query 1 will use a Clustered Index Scan, but Query 2 uses all sorts of things. In this case, since it is using multiple CPUs, you might have CXPACKET waits as well. Let's look at the profiler results for these two queries:

                CPU   Reads   Duration   Row Counts
        OR       78    1252        389         3854
        UNION   250    7495        660         3854

    You can see from the above table that the UNION query is not performing as well as the OR query, though both are returning the same number of rows (3854). These results indicate that, for the above scenario, OR should be used.

    Scenario 2: Non-Clustered and Clustered Index Columns only

        --Query 1 : OR
        SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail
        WHERE ProductID = 714 OR ProductID = 709 OR ProductID = 998
           OR ProductID = 875 OR ProductID = 976 OR ProductID = 874
        GO

        --Query 2 : UNION
        SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 714
        UNION
        SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 709
        UNION
        SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 998
        UNION
        SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 875
        UNION
        SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 976
        UNION
        SELECT ProductID, SalesOrderID, SalesOrderDetailID FROM Sales.SalesOrderDetail WHERE ProductID = 874
        GO

    So this time we will be selecting only index columns, which means these queries will avoid a data page lookup. As in the previous case we will analyze the execution plans: again, Query 2 is more complex than Query 1. Let us look at the profiler analysis:

                CPU   Reads   Duration   Row Counts
        OR        0      24        208         3854
        UNION     0      38        193         3854

    In this analysis, there is only a slight difference between OR and UNION.

    Scenario 3: Selecting all columns for different fields

    Up to now, we were using only one column (ProductID) in the WHERE clause. What if we have two columns in the WHERE clause, and let us assume both are covered by non-clustered indexes?

        --Query 1 : OR
        SELECT * FROM Sales.SalesOrderDetail
        WHERE ProductID = 714 OR CarrierTrackingNumber LIKE 'D0B8%'

        --Query 2 : UNION
        SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714
        UNION
        SELECT * FROM Sales.SalesOrderDetail WHERE CarrierTrackingNumber LIKE 'D0B8%'

    As we can see, the query plan for the second query has improved. Let us see the profiler results:

                CPU   Reads   Duration   Row Counts
        OR       47    1278        443         1228
        UNION    31    1334        400         1228

    So in this case too, there is little difference between OR and UNION.

    Scenario 4: Selecting Clustered index columns for different fields

    Now let us go only with clustered indexes:

        --Query 1 : OR
        SELECT * FROM Sales.SalesOrderDetail
        WHERE ProductID = 714 OR CarrierTrackingNumber LIKE 'D0B8%'

        --Query 2 : UNION
        SELECT * FROM Sales.SalesOrderDetail WHERE ProductID = 714
        UNION
        SELECT * FROM Sales.SalesOrderDetail WHERE CarrierTrackingNumber LIKE 'D0B8%'

    Now both execution plans are almost identical, except that an additional Stream Aggregate is used in the first query. This means UNION has an advantage over OR in this scenario. Let us see the profiler results for these queries again:

                CPU   Reads   Duration   Row Counts
        OR        0     319        366         1228
        UNION     0      50        193         1228

    Now see the differences: in this scenario UNION has somewhat of an advantage over OR.

    Conclusion

    Using UNION or OR depends on the scenario you are faced with, so you need to do your analysis before selecting the appropriate method. Also, the four scenarios above are not an exhaustive list; I selected them for broad description purposes only.

    Read the article

  • How to avoid the "divide by zero" error in SQL?

    - by Henrik Staun Poulsen
    I hate this error message:

        Msg 8134, Level 16, State 1, Line 1
        Divide by zero error encountered.

    What is the best way to write SQL code so that I will never see this error message again? I mean, I could add a WHERE clause so that my divisor is never zero. Or I could add a CASE statement, so that there is special treatment for zero. Is the best way to use a NULLIF clause? Is there a better way, or how can this be enforced?
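    For reference, the NULLIF idea mentioned above looks like this (table and column names are made up): NULLIF turns a zero divisor into NULL, so the division yields NULL instead of raising error 8134, and ISNULL or COALESCE can map that back to a default value.

        -- Returns NULL (or 0 via ISNULL) instead of raising a divide-by-zero error.
        SELECT ISNULL(numerator / NULLIF(divisor, 0), 0) AS ratio
        FROM   dbo.SomeTable;   -- placeholder table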

    Read the article

  • Dynamic openrowset in T-Sql Function or viable alternative?

    - by IronicMuffin
    I'm not quite sure how to phrase this. Here is the problem: I have 1-n items that I need to join to a different system (AS400) to get some data. The OPENROWSET takes forever if I specify the WHERE criteria outside of the OPENROWSET, e.g.:

        select * from openrowset('my connection string', 'select code, myfield from myTable')
        where code = @code

    My idea was to create a function that takes in the item number and uses dynamic SQL to inject it into the OPENROWSET string, a la:

        declare @cmd varchar(1000)
        set @cmd = 'select * from openrowset(''my connection string'', ''select code, myfield from myTable where code = ' + @code + ''')'

    Apparently I can't use the insert..exec strategy inside of a function. Is there any better way to achieve this? I was going to use this in joins where I needed the external data, using CROSS APPLY. I'm not married to TVFs and CROSS APPLY, but I do need a method of getting this data quickly. Thanks for any help.
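    Since INSERT ... EXEC and dynamic SQL aren't allowed inside a function, one commonly suggested workaround is a stored procedure that builds the literal, runs it, and returns the rows. A sketch only: the MSDASQL provider, the connection string, and all object names here are assumptions, and @code is concatenated unquoted exactly as in the snippet above.

        -- Hypothetical wrapper: fetch one item's row(s) from the AS400 side.
        CREATE PROCEDURE dbo.GetRemoteCode   -- made-up name
            @code varchar(20)
        AS
        BEGIN
            DECLARE @cmd nvarchar(2000);
            SET @cmd = N'select * from openrowset(''MSDASQL'', ''my connection string'', ''select code, myfield from myTable where code = ' + @code + N''')';
            EXEC (@cmd);   -- callers can INSERT ... EXEC this into a temp table and join locally
        END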

    Read the article

  • converting mysql database to sql server

    - by every_answer_gets_a_point
    i have a mysql database: /* MySQL Data Transfer Source Host: 10.0.0.5 Source Database: jnetdata Target Host: 10.0.0.5 Target Database: jnetdata Date: 5/26/2009 12:27:33 PM */ SET FOREIGN_KEY_CHECKS=0; -- ---------------------------- -- Table structure for chavrusas -- ---------------------------- CREATE TABLE `chavrusas` ( `id` int(11) NOT NULL auto_increment, `date_created` datetime default NULL, `luser_id` int(11) default NULL, `ruser_id` int(11) default NULL, `luser_type` varchar(50) default NULL, `ruser_type` varchar(50) default NULL, `SessionDay` varchar(250) default NULL, `SessionTime` datetime default NULL, `WeeklyReminder` tinyint(1) NOT NULL default '0', `reminder_phone` tinyint(1) NOT NULL default '0', `calling_card` varchar(50) default NULL, `active` tinyint(1) NOT NULL default '0', `notes` mediumtext, `ended` tinyint(1) NOT NULL default '0', `end_date` datetime default NULL, `initiated_by_student` tinyint(1) NOT NULL default '0', `initiated_by_volunteer` tinyint(1) NOT NULL default '0', `student_general_reason` varchar(50) default NULL, `volunteer_general_reason` varchar(50) default NULL, `student_reason` varchar(250) default NULL, `volunteer_reason` varchar(250) default NULL, `student_nli` tinyint(1) NOT NULL default '0', `volunteer_nli` tinyint(1) NOT NULL default '0', `jnet_initiated` tinyint(1) default '0', `belongs_to` varchar(50) default NULL, PRIMARY KEY (`id`) ) ENGINE=MyISAM AUTO_INCREMENT=5913 DEFAULT CHARSET=latin1; -- ---------------------------- -- Table structure for tbluseravailability -- ---------------------------- CREATE TABLE `tbluseravailability` ( `availability_id` int(11) NOT NULL auto_increment, `user_id` int(11) NOT NULL, `weekday_id` int(11) NOT NULL, `timeslot_id` int(11) NOT NULL, PRIMARY KEY (`availability_id`) ) ENGINE=MyISAM AUTO_INCREMENT=10865 DEFAULT CHARSET=latin1; -- ---------------------------- -- Table structure for tblusers -- ---------------------------- CREATE TABLE `tblusers` ( `id` int(11) NOT NULL auto_increment, `password` varchar(50) default NULL, `title` varchar(255) default NULL, `first` varchar(255) default NULL, `last` varchar(255) default NULL, `gender` varchar(255) default NULL, `address` varchar(255) default NULL, `address_2` varchar(255) default NULL, `city` varchar(255) default NULL, `state` varchar(255) default NULL, `postcode` varchar(255) default NULL, `country` varchar(255) default NULL, `email` varchar(255) default NULL, `emailnotes` varchar(255) default NULL, `Home_Phone` varchar(255) default NULL, `Office_Phone` varchar(255) default NULL, `Cell_Phone` varchar(255) default NULL, `Contact_Preference` varchar(255) default NULL, `Birthdate` datetime default NULL, `Age` varchar(255 and it goes on for about 10mb i need to convert it to ms sql, how do i do it?
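    For what it's worth, the first table's definition maps to T-SQL along these lines (only a few columns are shown; the remaining ones follow the same pattern: auto_increment becomes IDENTITY, tinyint(1) becomes bit or tinyint, and mediumtext becomes varchar(max)):

        -- Partial, hand-translated sketch of the chavrusas table in T-SQL.
        CREATE TABLE dbo.chavrusas (
            id             int IDENTITY(1,1) NOT NULL PRIMARY KEY,
            date_created   datetime NULL,
            luser_id       int NULL,
            ruser_id       int NULL,
            luser_type     varchar(50) NULL,
            ruser_type     varchar(50) NULL,
            WeeklyReminder tinyint NOT NULL DEFAULT 0,
            notes          varchar(max) NULL
            -- ...remaining columns follow the same mapping
        );

    Tools such as Microsoft's SQL Server Migration Assistant (SSMA) for MySQL can automate this kind of conversion, which is preferable to translating a 10 MB dump by hand.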

    Read the article

  • Problem with duplicates in a SQL Join

    - by Chris Ballance
    I have the following result set from a join of three tables: an articles table, a products table, and an articles-to-products mapping table. I would like to have the results with duplicates removed, similar to a SELECT DISTINCT on content id.

    Current result set:

        [ContendId]  [Title]        [productId]
        1            article one    2
        1            article one    3
        1            article one    9
        4            article four   1
        4            article four   10
        4            article four   14
        5            article five   1
        6            article six    8
        6            article six    10
        6            article six    11
        6            article six    13
        7            article seven  14

    Desired result set:

        [ContendId]  [Title]        [productId]
        1            article one    *
        4            article four   *
        5            article five   *
        6            article six    *
        7            article seven  *

    Here is a condensed example of the relevant SQL:

        IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'tempdb.dbo.products') AND type = (N'U'))
            drop table tempdb.dbo.products
        go
        CREATE TABLE tempdb.dbo.products
        (
            productid int,
            productname varchar(255)
        )
        go
        IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'articles') AND type = (N'U'))
            drop table tempdb.dbo.articles
        go
        create table tempdb.dbo.articles
        (
            contentid int,
            title varchar(255)
        )
        IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'articles') AND type = (N'U'))
            drop table tempdb.dbo.articles
        go
        create table tempdb.dbo.articles
        (
            contentid int,
            title varchar(255)
        )
        IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'articleproducts') AND type = (N'U'))
            drop table tempdb.dbo.articleproducts
        go
        create table tempdb.dbo.articleproducts
        (
            contentid int,
            productid int
        )
        insert into tempdb.dbo.products values
        (1,'product one'), (2,'product two'), (3,'product three'), (4,'product four'),
        (5,'product five'), (6,'product six'), (7,'product seven'), (8,'product eigth'),
        (9,'product nine'), (10,'product ten'), (11,'product eleven'), (12,'product twelve'),
        (13,'product thirteen'), (14,'product fourteen')
        insert into tempdb.dbo.articles VALUES
        (1,'article one'), (2, 'article two'), (3, 'article three'), (4, 'article four'),
        (5, 'article five'), (6, 'article six'), (7, 'article seven'), (8, 'article eight'),
        (9, 'article nine'), (10, 'article ten')
        INSERT INTO tempdb.dbo.articleproducts VALUES
        (1,2), (1,3), (1,9), (4,1), (4,10), (4,14), (5,1), (6,8), (6,10), (6,11), (6,13), (7,14)
        GO
        select DISTINCT(a.contentid), a.title, p.productid
        from articles a
        JOIN articleproducts ap ON a.contentid = ap.contentid
        JOIN products p ON a.contentid = ap.contentid AND p.productid = ap.productid
        ORDER BY a.contentid
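    A sketch of one way to get the desired one-row-per-article shape: group on the article and pick a single representative product id. MIN is an arbitrary choice here; since the desired output shows productId as *, any aggregate, or a ROW_NUMBER() filter, would do just as well.

        -- One row per article, with an arbitrary representative product id.
        SELECT   a.contentid, a.title, MIN(p.productid) AS productid
        FROM     articles a
        JOIN     articleproducts ap ON ap.contentid = a.contentid
        JOIN     products p         ON p.productid  = ap.productid
        GROUP BY a.contentid, a.title
        ORDER BY a.contentid;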

    Read the article

  • Better way to summarize data about stop times?

    - by Vimvq1987
    This question is close to this one: http://stackoverflow.com/questions/2947963/find-the-period-of-over-speed

    Here's my table:

        Longtitude  Latitude  Velocity  Time
        102         401       40        2010-06-01 10:22:34.000
        103         403       50        2010-06-01 10:40:00.000
        104         405        0        2010-06-01 11:00:03.000
        104         405        0        2010-06-01 11:10:05.000
        105         406       35        2010-06-01 11:15:30.000
        106         403       60        2010-06-01 11:20:00.000
        108         404       70        2010-06-01 11:30:05.000
        109         405        0        2010-06-01 11:35:00.000
        109         405        0        2010-06-01 11:40:00.000
        105         407       40        2010-06-01 11:50:00.000
        104         406       30        2010-06-01 12:00:00.000
        101         409       50        2010-06-01 12:05:30.000
        104         405        0        2010-06-01 11:05:30.000

    I want to summarize the times when the vehicle had stopped (velocity = 0), including: it had stopped from "when" to "when" and for how many minutes, how many times it stopped, and how much time it stopped in total. I wrote this query to do it:

        select longtitude, latitude, MIN(time), MAX(time),
               DATEDIFF(minute, MIN(Time), MAX(time)) as Timespan
        from table_1
        where velocity = 0
        group by longtitude, latitude

        select DATEDIFF(minute, MIN(Time), MAX(time)) as minute
        into #temp3
        from table_1
        where velocity = 0
        group by longtitude, latitude

        select COUNT(*) as [number] from #temp3
        select SUM(minute) as [totaltime] from #temp3
        drop table #temp3

    This query returns:

        longtitude  latitude  (No column name)         (No column name)         Timespan
        104         405       2010-06-01 11:00:03.000  2010-06-01 11:10:05.000  10
        109         405       2010-06-01 11:35:00.000  2010-06-01 11:40:00.000  5

        number
        2

        totaltime
        15

    You can see, it works fine, but I really don't like the #temp table. Is there any way to query this without using a temp table? Thank you.
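    One temp-table-free sketch, assuming the same table_1: keep the per-stop grouping and let windowed aggregates over the grouped result supply the overall stop count and total minutes on every row.

        -- Per-stop detail plus overall totals in a single statement (SQL Server 2005+).
        SELECT  Longtitude, Latitude,
                MIN([Time]) AS stop_start,
                MAX([Time]) AS stop_end,
                DATEDIFF(minute, MIN([Time]), MAX([Time]))                AS minutes_stopped,
                COUNT(*) OVER ()                                          AS number_of_stops,
                SUM(DATEDIFF(minute, MIN([Time]), MAX([Time]))) OVER ()   AS total_minutes
        FROM    table_1
        WHERE   Velocity = 0
        GROUP BY Longtitude, Latitude;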

    Read the article

  • Generated LinqtoSql Sql 5x slower than SAME EXACT hand-written sql

    - by JasonM
    I have a sql statement which is hardcoded in an existing VB6 app. I'm upgrading a new version in C# and using Linq To Sql. I was able to get LinqToSql to generate the same sql (before I start refactoring), but for some reason the Sql generated by LinqToSql is 5x slower than the original sql. This is running the generated Sql Directly in LinqPad. The only real difference my meager sql eyes can spot is the WITH (NOLOCK), which if I add into the LinqToSql generated sql, makes no difference. Can someone point out what I'm doing wrong here? Thanks! Existing Hard Coded Sql (5.0 Seconds) SELECT DISTINCT CH.ClaimNum, CH.AcnProvID, CH.AcnPatID, CH.TinNum, CH.Diag1, CH.GroupNum, CH.AllowedTotal FROM Claims.dbo.T_ClaimsHeader AS CH WITH (NOLOCK) WHERE CH.ContractID IN ('123A','123B','123C','123D','123E','123F','123G','123H') AND ( ( (CH.Transmited Is Null or CH.Transmited = '') AND CH.DateTransmit Is Null AND CH.EobDate Is Null AND CH.ProcessFlag IN ('Y','E') AND CH.DataSource NOT IN ('A','EC','EU') AND CH.AllowedTotal > 0 ) ) ORDER BY CH.AcnPatID, CH.ClaimNum Generated Sql from LinqToSql (27.6 Seconds) -- Region Parameters DECLARE @p0 NVarChar(4) SET @p0 = '123A' DECLARE @p1 NVarChar(4) SET @p1 = '123B' DECLARE @p2 NVarChar(4) SET @p2 = '123C' DECLARE @p3 NVarChar(4) SET @p3 = '123D' DECLARE @p4 NVarChar(4) SET @p4 = '123E' DECLARE @p5 NVarChar(4) SET @p5 = '123F' DECLARE @p6 NVarChar(4) SET @p6 = '123G' DECLARE @p7 NVarChar(4) SET @p7 = '123H' DECLARE @p8 VarChar(1) SET @p8 = '' DECLARE @p9 NVarChar(1) SET @p9 = 'Y' DECLARE @p10 NVarChar(1) SET @p10 = 'E' DECLARE @p11 NVarChar(1) SET @p11 = 'A' DECLARE @p12 NVarChar(2) SET @p12 = 'EC' DECLARE @p13 NVarChar(2) SET @p13 = 'EU' DECLARE @p14 Decimal(5,4) SET @p14 = 0 -- EndRegion SELECT DISTINCT [t0].[ClaimNum], [t0].[acnprovid] AS [AcnProvID], [t0].[acnpatid] AS [AcnPatID], [t0].[tinnum] AS [TinNum], [t0].[diag1] AS [Diag1], [t0].[GroupNum], [t0].[allowedtotal] AS [AllowedTotal] FROM [Claims].[dbo].[T_ClaimsHeader] AS [t0] WHERE ([t0].[contractid] IN (@p0, @p1, @p2, @p3, @p4, @p5, @p6, @p7)) AND (([t0].[Transmited] IS NULL) OR ([t0].[Transmited] = @p8)) AND ([t0].[DATETRANSMIT] IS NULL) AND ([t0].[EOBDATE] IS NULL) AND ([t0].[PROCESSFLAG] IN (@p9, @p10)) AND (NOT ([t0].[DataSource] IN (@p11, @p12, @p13))) AND ([t0].[allowedtotal] > @p14) ORDER BY [t0].[acnpatid], [t0].[ClaimNum] New LinqToSql Code (30+ seconds... Times out ) var contractIds = T_ContractDatas.Where(x => x.EdiSubmissionGroupID == "123-01").Select(x => x.CONTRACTID).ToList(); var processFlags = new List<string> {"Y","E"}; var dataSource = new List<string> {"A","EC","EU"}; var results = (from claims in T_ClaimsHeaders where contractIds.Contains(claims.contractid) && (claims.Transmited == null || claims.Transmited == string.Empty ) && claims.DATETRANSMIT == null && claims.EOBDATE == null && processFlags.Contains(claims.PROCESSFLAG) && !dataSource.Contains(claims.DataSource) && claims.allowedtotal > 0 select new { ClaimNum = claims.ClaimNum, AcnProvID = claims.acnprovid, AcnPatID = claims.acnpatid, TinNum = claims.tinnum, Diag1 = claims.diag1, GroupNum = claims.GroupNum, AllowedTotal = claims.allowedtotal }).OrderBy(x => x.ClaimNum).OrderBy(x => x.AcnPatID).Distinct(); I'm using the list of constants above to make LinqToSql Generate IN ('xxx','xxx',etc) Otherwise it uses subqueries which are just as slow...

    Read the article

  • SQL Anywhere 11, JZ0C0: Connection is already closed

    - by Alex
    SOLVED, see comment. I am developing a webservice based on Apache Tomcat 6.0.26, Apache CXF 2.2.7, Spring 3.0, Hibernate 3.3 and Sybase SQL Anywhere 11, using the latest JDBC driver from Sybase (jconn.jar, version 6). The persistence layer is based on Spring + Hibernate DAOs, and the connection is configured via a JNDI datasource (META-INF directory). It seems that, during longer periods of inactivity, the connection from the webservice to the database is closed. Exception:

        java.sql.SQLException: JZ0C0: Connection is already closed.

    Best regards, Alex

    Read the article

  • SQL Developer Debugging, Watches, Smart Data, & Data

    - by thatjeffsmith
    After presenting the SQL Developer PL/SQL debugger for about an hour yesterday at KScope12 in San Antonio, my boss came up and asked, “Now, would you really want to know what the Smart Data panel does?” Apparently I had ‘made up’ my own story about what that panel’s intent is based on my experience with it. Not good Jeff, not good. It was a very small point of my presentation, but I probably should have read the docs. The Smart Data tab displays information about variables, using your Debugger: Smart Data preferences. You can also specify these preferences by right-clicking in the Smart Data window and selecting Preferences. Debugger Smart Data Preferences, control number of variables to display The Smart Data panel auto-inspects the last X accessed variables. So if you have a program with 26 variables, instead of showing you all 26, it will just show you the last two variables that were referenced in your program. If you were to click on the ‘Data’ debug panel, you’ll see EVERYTHING. And if you only want to see a very specific set of values, then you should use Watches. The Smart Data Panel As I step through the code, the variables being tracked change as they are referenced. Only the most recent ones display. This is controlled by the ‘Maximum Locations to Remember’ preference. Step through the code, see the latest variables accessed The Data Panel All variables are displayed. Might be information overload on large PL/SQL programs where you have many dozens or even hundreds of variables to track. Shows everything all the time Watches Watches are added manually and only show what you ask for. Data on Demand – add a watch to track a specific variable Remember, you can interact with your data If you want to do more than just watch, you can mouse-right on a data element, and change the value of the variable as the program is running. This is one of the primary benefits to debugging over using DBMS_OUTPUT to track what’s happening in your program. Change the values while the program is running to test your ‘What if?’ scenarios

    Read the article

  • Using a .MDF SQL Server Database with ASP.NET Versus Using SQL Server

    - by Maxim Z.
    I'm currently writing a website in ASP.NET MVC, and my database (which doesn't have any data in it yet, it only has the correct tables) uses SQL Server 2008, which I have installed on my development machine. I connect to the database out of my application by using the Server Explorer, followed by LINQ to SQL mapping. Once I finish developing the site, I will move it over to my hosting service, which is a virtual hosting plan. I'm concerned about whether using the SQL Server setup that is currently working on my development machine will be hard to do on the production server, as I'll have to import all the database tables through the hosting control panel. I've noticed that it is possible to create a SQL Server database from inside Visual Studio. It is then stored in the App_Data directory. My questions are the following: Does it make sense to move my SQL Server DB out of SQL Server and into the App_Data directory as an .mdf file? If so, how can I move it? I believe this is called the Detach command, is it not? Are there any performance/security issues that can occur with a .mdf file like this? Would my intended setup work OK with a typical virtual hosting plan? I'm hoping that the .mdf database won't count against the limited number of SQL Server databases that can be created with my plan. I hope this question isn't too broad. Thanks in advance! Note: I'm just starting out with ASP.NET MVC and all this, so I might be completely misunderstanding how this is supposed to work.
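    For reference, the detach step mentioned above is a one-liner (the database name is a placeholder); after detaching, the .mdf and .ldf files can be copied into App_Data and referenced with AttachDbFilename in the connection string:

        -- Detach the database from the local SQL Server instance so its files can be moved.
        EXEC sp_detach_db @dbname = N'MySiteDb';   -- placeholder name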

    Read the article

  • SQL, moving million records from a database to other database [migrated]

    - by Ryoma
    I am a C# developer and I am not really good with SQL. I have a simple question here. I need to move more than 50 million records from one database to another database. I tried to use the import function in MS SQL, however it got stuck because the log was full (I got the error message "The transaction log for database 'mydatabase' is full due to 'LOG_BACKUP'"). The database recovery model was set to simple. My friend said that importing millions of records using the Tasks > Import Data wizard will cause the log to be massive, and told me to use a loop instead to transfer the data. Does anyone know how and why? Thanks in advance.
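    The "why" is that one big INSERT is one big transaction, and even in the simple recovery model the log must hold the entire transaction until it commits. The "how" is roughly this kind of batched loop (a sketch only: table, column, and key names are placeholders, and 50,000 rows per batch is an arbitrary figure):

        -- Copy in small batches so each transaction stays small and the log
        -- can be reused between batches (simple recovery model assumed).
        DECLARE @batch int;
        SET @batch = 50000;

        WHILE 1 = 1
        BEGIN
            INSERT INTO TargetDb.dbo.BigTable (Id, Col1, Col2)      -- placeholder names
            SELECT TOP (@batch) s.Id, s.Col1, s.Col2
            FROM   SourceDb.dbo.BigTable AS s
            WHERE  NOT EXISTS (SELECT 1
                               FROM  TargetDb.dbo.BigTable AS t
                               WHERE t.Id = s.Id);

            IF @@ROWCOUNT = 0 BREAK;                                 -- nothing left to copy
        END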

    Read the article

  • Migrating MOSS 2007 from SQL 2000 to SQL 2005 - Addendum

    - by lunacrescens
    This is a continuation of an earlier question I had about moving the databases for a MOSS 2007 installation from SQL 2000 to SQL 2005. Here's the URL for the original question: http://stackoverflow.com/questions/254517/migrating-moss-2007-from-sql-2000-to-sql-2005 In my test environment, I've successfully moved the databases to the SQL 2005 test machine and things appear to be working fine. But, on the "Servers in Farm" page of the Central Admin | Operations, it still shows the old (i.e. SQL 2000) server as the Configuration Database Server. Also, it shows the old config database as being the Configuration Database. I know that the SQL2000 server and old config database (that are showing on this page) are NOT being used, because we've deactived the SQL instance in SQL2000. I've tried "removing" the server, and get a message about "Uninstalling SharePoint products and technologies" being the better route. So, I disconnected from the test databases, uninstalled SharePoint from the test WFE server, and reinstalled it. That didn't do anything. Before uninstalling/reinstalling I also tried simply rerunning the SharePoint Configuration wizard, and that didn't do anything either. Does anyone know how to update the Config Server and Config Database on the "Servers in Farm" page after having moved the Config and Content DBs? Is there something I'm missing or overlooking? Thanks.

    Read the article

  • sql server 2008 insert statement question

    - by user61752
    I am learning SQL Server 2008 T-SQL. To insert a varchar type, I just need to insert a string 'abc', but for the nvarchar type I need to add N in front (N'abc'). I have a table employee with 2 fields, firstname and lastname, both nvarchar(20).

        insert into employee values('abc', 'def');

    I tested it and it works, so it seems like the N is not required. Why do we need to add N in front for the nvarchar type, and what's the pro or con if we don't use it?
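    A small illustration of where the N matters (assuming a database whose default collation uses a Latin code page): without the N, the literal is varchar, so any character outside that code page is converted, typically to '?', before it ever reaches the nvarchar column. Plain ASCII like 'abc' survives the conversion, which is why the insert above appears to work.

        CREATE TABLE #t (name nvarchar(20));

        INSERT INTO #t VALUES ('жизнь');    -- varchar literal: stored as '?????'
        INSERT INTO #t VALUES (N'жизнь');   -- nvarchar literal: stored correctly

        SELECT name FROM #t;
        DROP TABLE #t;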

    Read the article

  • Can I Have Polymorphic Containers With Value Semantics in C++11?

    - by John Dibling
    This is a sequel to a related post which asked the eternal question: Can I have polymorphic containers with value semantics in C++? The question was asked slightly incorrectly. It should have been more like: Can I have STL containers of a base type stored by-value in which the elements exhibit polymorphic behavior? If you are asking the question in terms of C++, the answer is "no." At some point, you will slice objects stored by-value. Now I ask the question again, but strictly in terms of C++11. With the changes to the language and the standard libraries, is it now possible to store polymorphic objects by value in an STL container? I'm well aware of the possibility of storing a smart pointer to the base class in the container -- this is not what I'm looking for, as I'm trying to construct objects on the stack without using new. Consider if you will (from the linked post) as basic C++ example: #include <iostream> using namespace std; class Parent { public: Parent() : parent_mem(1) {} virtual void write() { cout << "Parent: " << parent_mem << endl; } int parent_mem; }; class Child : public Parent { public: Child() : child_mem(2) { parent_mem = 2; } void write() { cout << "Child: " << parent_mem << ", " << child_mem << endl; } int child_mem; }; int main(int, char**) { // I can have a polymorphic container with pointer semantics vector<Parent*> pointerVec; pointerVec.push_back(new Parent()); pointerVec.push_back(new Child()); pointerVec[0]->write(); pointerVec[1]->write(); // Output: // // Parent: 1 // Child: 2, 2 // But I can't do it with value semantics vector<Parent> valueVec; valueVec.push_back(Parent()); valueVec.push_back(Child()); // gets turned into a Parent object :( valueVec[0].write(); valueVec[1].write(); // Output: // // Parent: 1 // Parent: 2 }

    Read the article

< Previous Page | 58 59 60 61 62 63 64 65 66 67 68 69  | Next Page >