Search Results

Search found 28900 results on 1156 pages for 'sql 2005'.

Page 297/1156 | < Previous Page | 293 294 295 296 297 298 299 300 301 302 303 304  | Next Page >

  • How do I configure SQL Server to allow other users to access a database?

    - by Zian Choy
Environment: Windows 7 Ultimate, SQL Server 2005 Express, 2 users on the computer. I tried making the 2nd user a user in SQL Server (THINKPAD\2ndUser) and adding him to the database ("2ndUser"). Then, I logged in as 2ndUser and started Visual Studio 2008. When I tried to connect to the database, I got the following error message: The database '<bleep>' does not exist or you do not have permission to see it.
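    For anyone hitting the same error, here is a minimal sketch of the permissions setup that is typically needed (the database name is hypothetical; the login name is taken from the question):

        CREATE LOGIN [THINKPAD\2ndUser] FROM WINDOWS;        -- server-level login
        USE [YourDatabase];
        CREATE USER [2ndUser] FOR LOGIN [THINKPAD\2ndUser];  -- map the login to a database user
        EXEC sp_addrolemember 'db_datareader', '2ndUser';    -- grant read access
        EXEC sp_addrolemember 'db_datawriter', '2ndUser';    -- grant write access

    The error in the question usually means the login exists but has not been mapped to a user in that specific database, or the user lacks any role granting access.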

    Read the article

  • SQL Server on VMWare - is transaction log corruption possible?

    - by demp
Scenario: SQL Server 2005 or 2008 on Windows 2008, running in a VM hosted on a VMware ESX server. Is there any known issue where VMware caches a pass-through write request and it never reaches the disk, while SQL Server "thinks" the write actually happened? This could lead to transaction log corruption in the case of a power failure or VM reboot. I overheard this in a conversation but couldn't find anything about it in relation to ESX.

    Read the article

  • does log4net AdoNetAppender support sql server 2008?

    - by schrodinger's code
    My config file is below. Very strange: I have spent a day trying to find out where I am wrong, but it is still not working. It still does not log anything to the database, but I can output log entries using the RollingFileAppender. Also, the stored procedure WriteLog works well (I have tested it using SQL Server Management Studio). I have tried changing the connectionType, but that did not work. Unfortunately I don't have SQL Server 2000/2005 to test against; my log4net version should be the latest one, log4net 1.2.10. Any help is appreciated.
    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <configSections>
        <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
      </configSections>
      <log4net>
        <appender name="AdoNetAppender_SqlServer" type="log4net.Appender.AdoNetAppender">
          <!--<threshold value="OFF" />-->
          <bufferSize value="1" />
          <!-- note: the assembly name ("System.Data") must appear only once in the type string;
               the original config had it duplicated, which makes the type unloadable -->
          <connectionType value="System.Data.SqlClient.SqlConnection, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
          <!--<connectionType value="System.Data.SqlClient.SqlConnection, System.Data, Version=1.0.3300.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />-->
          <connectionString value="Data Source=.\MSSQLSERVER2008,2222;Initial Catalog=UnleashedSaaS;User ID=sa;Password=dogblack;" />
          <commandType value="StoredProcedure" />
          <commandText value="WriteLog" />
          <parameter>
            <parameterName value="@log_date" />
            <dbType value="DateTime" />
            <layout type="log4net.Layout.PatternLayout" value="%date{yyyy'-'MM'-'dd HH':'mm':'ss'.'fff}" />
          </parameter>
          <parameter>
            <parameterName value="@thread" />
            <dbType value="String" />
            <size value="255" />
            <layout type="log4net.Layout.PatternLayout" value="%thread" />
          </parameter>
          <parameter>
            <parameterName value="@log_level" />
            <dbType value="String" />
            <size value="50" />
            <layout type="log4net.Layout.PatternLayout" value="%level" />
          </parameter>
          <parameter>
            <parameterName value="@logger" />
            <dbType value="String" />
            <size value="255" />
            <layout type="log4net.Layout.PatternLayout" value="%logger" />
          </parameter>
          <parameter>
            <parameterName value="@message" />
            <dbType value="String" />
            <size value="4000" />
            <layout type="log4net.Layout.PatternLayout" value="%message" />
          </parameter>
          <parameter>
            <parameterName value="@exception" />
            <dbType value="String" />
            <size value="4000" />
            <layout type="log4net.Layout.ExceptionLayout" />
          </parameter>
        </appender>
        <appender name="RollingLogFileAppender" type="log4net.Appender.RollingFileAppender">
          <!--<threshold value="OFF" />-->
          <file value="LogData\\" />
          <appendToFile value="true" />
          <datePattern value="ul_yyyy-MM-dd.LOG" />
          <maxSizeRollBackups value="10" />
          <rollingStyle value="Date" />
          <maximumFileSize value="2MB" />
          <staticLogFileName value="false" />
          <layout type="log4net.Layout.PatternLayout">
            <param name="ConversionPattern" value="%d{yyyy-MM-dd HH:mm:ss} %p %u %c %l %m %n%n%n" />
          </layout>
        </appender>
        <root>
          <level value="ALL"/>
          <appender-ref ref="AdoNetAppender_SqlServer" />
          <appender-ref ref="RollingLogFileAppender" />
        </root>
      </log4net>
    </configuration>
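    For reference, a sketch of the stored procedure signature the appender configuration above expects (the Log table is hypothetical; the questioner's actual WriteLog may differ):

        CREATE PROCEDURE dbo.WriteLog
            @log_date  DATETIME,
            @thread    VARCHAR(255),
            @log_level VARCHAR(50),
            @logger    VARCHAR(255),
            @message   VARCHAR(4000),
            @exception VARCHAR(4000)
        AS
        BEGIN
            -- persist one logging event per call from the AdoNetAppender
            INSERT INTO dbo.Log (LogDate, Thread, LogLevel, Logger, Message, Exception)
            VALUES (@log_date, @thread, @log_level, @logger, @message, @exception);
        END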

    Read the article

  • SQL Server devs–what source control system do you use, if any? (answer and maybe win free stuff)

    - by jamiet
Recently I noticed a tweet from notable SQL Server author and community dude-at-large Steve Jones in which he asked how many SQL Server developers were putting their SQL Server source code (i.e. DDL) under source control (I’m paraphrasing because I can’t remember the exact tweet and Twitter’s search functionality is useless). The question surprised me slightly as I thought a more pertinent question would be “how many SQL Server developers are not using source control?” because I have been doing just that for many years now and I simply assumed that use of source control is a given in this day and age. Then I started thinking about it. “Perhaps I’m wrong” I pondered, “perhaps the SQL Server folks that do use source control in their day-to-day jobs are in the minority”. So, dear reader, I’m interested to know a little bit more about your use of source control. Are you putting your SQL Server code into a source control system? If so, what source control server software (e.g. TFS, Git, SVN, Mercurial, SourceSafe, Perforce) are you using? What source control client software are you using (e.g. TFS Team Explorer, Tortoise, Red Gate SQL Source Control, Red Gate SQL Connect, Git Bash, etc…)? Why did you make those particular software choices? Any interesting anecdotes to share in regard to your use of source control and SQL Server? To encourage you to contribute I have five pairs of licenses for Red Gate SQL Source Control and Red Gate SQL Connect to give away to what I consider to be the five best replies (“best” is totally subjective of course but this is my blog so my decision is final), so if you want to be considered don’t forget to leave contact details; an email address, Twitter handle or similar will do. To start you off and to perhaps get the brain cells whirring, here are my answers to the questions above: Are you putting your SQL Server code into a source control system? As I think I’ve already said…yes. Always. If so, what source control server software (e.g. TFS, Git, SVN, Mercurial, SourceSafe, Perforce) are you using? I move around a lot between many clients so it changes on a fairly regular basis; my current client uses Team Foundation Server (aka TFS) and as part of a separate project is trialing the use of Team Foundation Service. I have used SVN extensively in the past, which I am a fan of (I generally prefer it to TFS), and am trying to get my head around Git by using it for ObjectStorageHelper. What source control client software are you using (e.g. TFS Team Explorer, Tortoise, Red Gate SQL Source Control, Red Gate SQL Connect, Git Bash, etc…)? On my current project, Team Explorer. In the past I have used Tortoise to connect to SVN. Why did you make those particular software choices? I generally use whatever the client uses, and given that I work with SQL Server I find that the majority of my clients use TFS, I guess simply because they are Microsoft development shops. Any interesting anecdotes to share in regard to your use of source control and SQL Server? Not an anecdote as such, but I am going to share some frustrations about TFS. In many ways TFS is a great product because it integrates many separate functions (source control, work item tracking, build agents) into one whole, and I’m firmly of the opinion that that is a good thing, if for no reason other than being able to associate your check-ins with a work-item. However, like many people, I find there are aspects of TFS source control that annoy me day-in, day-out.
Chief among them has to be the fact that it uses a file’s read-only property to determine if a file should be checked-out or not and, if it determines that it should, it will happily do that check-out on your behalf without you even asking it to. I didn’t realise how ridiculous this was until I first used SVN about three years ago – with SVN you make any changes you wish and then use your source control client to determine which files have changed and thus need to be checked in; the notion of “check-out” doesn’t even exist. That sounds like a small thing, but you don’t realise how liberating it is until you actually start working that way. Hoping to hear some more anecdotes and opinions in the comments. Remember… free software is up for grabs! @jamiet

    Read the article

  • Database Developers Can Now Save 20%

    - by stephen.garth
Database developers can now increase productivity and save money at the same time. For a limited time, Oracle Store is offering a 20% discount on Oracle SQL Developer Data Modeler. Just enter the code SQLDDM at checkout to get the discount. Oracle SQL Developer Data Modeler is an independent, standalone product with a full spectrum of data and database modeling tools and utilities, including modeling for Entity Relationship Diagrams (ERD), relational database design, data type and multi-dimensional modeling, full forward and reverse engineering, and DDL code generation. SQL Developer Data Modeler can connect to any supported Oracle Database and is platform independent. Save 20% on Oracle SQL Developer Data Modeler at Oracle Store - Discount Code SQLDDM. Find out more about Oracle SQL Developer and Oracle SQL Developer Data Modeler.

    Read the article

  • How to factor out data layer in nopCommerce and replace MS SQL with RavenDB?

    - by Kaveh Shahbazian
I am new to nopCommerce and ecommerce in general, but I am involved in an ecommerce project. From my past experiences with RavenDB (which were mostly absolutely pleasant) and based on the needs of the business (fast changes with awkward business workflows), it seemed an appealing option to have RavenDB handle everything related to the database. I do not fully understand the design and architecture of nopCommerce, so I have not reached a conclusion on how to factor out the data parts, since it seems the services layer does not actually abstract data-layer concepts away; for example, it brings the EF working model into other layers. I have found another project, a nopCommerce fork, which used NuDB as its database. But it did not help, because NuDB still has the feel of an RDBMS and is not as different as RavenDB. Now, first: how can I learn about the internals of nopCommerce (other than investigating the code)? Its workflows? Its conventions? Second: has anyone tried something similar before with a NoSQL database (say MongoDB or RavenDB)? Is it possible to achieve this in a 1 (~2) month time frame? Thanks in advance;

    Read the article

  • Spooling in SQL execution plans

    - by Rob Farley
    Sewing has never been my thing. I barely even know the terminology, and when discussing this with American friends, I even found out that half the words that Americans use are different to the words that English and Australian people use. That said – let’s talk about spools! In particular, the Spool operators that you find in some SQL execution plans. This post is for T-SQL Tuesday, hosted this month by me! I’ve chosen to write about spools because they seem to get a bad rap (even in my song I used the line “There’s spooling from a CTE, they’ve got recursion needlessly”). I figured it was worth covering some of what spools are about, and hopefully explain why they are remarkably necessary, and generally very useful. If you have a look at the Books Online page about Plan Operators, at http://msdn.microsoft.com/en-us/library/ms191158.aspx, and do a search for the word ‘spool’, you’ll notice it says there are 46 matches. 46! Yeah, that’s what I thought too... Spooling is mentioned in several operators: Eager Spool, Lazy Spool, Index Spool (sometimes called a Nonclustered Index Spool), Row Count Spool, Spool, Table Spool, and Window Spool (oh, and Cache, which is a special kind of spool for a single row, but as it isn’t used in SQL 2012, I won’t describe it any further here). Spool, Table Spool, Index Spool, Window Spool and Row Count Spool are all physical operators, whereas Eager Spool and Lazy Spool are logical operators, describing the way that the other spools work. For example, you might see a Table Spool which is either Eager or Lazy. A Window Spool can actually act as both, as I’ll mention in a moment. In sewing, cotton is put onto a spool to make it more useful. You might buy it in bulk on a cone, but if you’re going to be using a sewing machine, then you quite probably want to have it on a spool or bobbin, which allows it to be used in a more effective way. This is the picture that I want you to think about in relation to your data. I’m sure you use spools every time you use your sewing machine. I know I do. I can’t think of a time when I’ve got out my sewing machine to do some sewing and haven’t used a spool. However, I often run SQL queries that don’t use spools. You see, the data that is consumed by my query is typically in a useful state without a spool. It’s like I can just sew with my cotton despite it not being on a spool! Many of my favourite features in T-SQL do like to use spools though. This looks like a very similar query to before, but includes an OVER clause to return a column telling me the number of rows in my data set. I’ll describe what’s going on in a few paragraphs’ time. So what does a Spool operator actually do? The spool operator consumes a set of data, and stores it in a temporary structure, in the tempdb database. This structure is typically either a Table (ie, a heap), or an Index (ie, a b-tree). If no data is actually needed from it, then it could also be a Row Count spool, which only stores the number of rows that the spool operator consumes. A Window Spool is another option if the data being consumed is tightly linked to windows of data, such as when the ROWS/RANGE clause of the OVER clause is being used. You could maybe think about the type of spool being like whether the cotton is going onto a small bobbin to fit in the base of the sewing machine, or whether it’s a larger spool for the top. A Table or Index Spool is either Eager or Lazy in nature. 
Eager and Lazy are logical operators, which describe the behaviour rather than the physical operation. If I’m sewing, I can either be all enthusiastic and get all my cotton onto the spool before I start, or I can do it as I need it. “Lazy” might not be the best word to describe a person – in the SQL world it describes the idea of either fetching all the rows to build up the whole spool when the operator is called (Eager), or populating the spool only as it’s needed (Lazy). Window Spools are both physical and logical. They’re eager on a per-window basis, but lazy between windows. And when is it needed? The way I see it, spools are needed for two reasons. 1 – When data is going to be needed AGAIN. 2 – When data needs to be kept away from the original source. If you’re someone that writes long stored procedures, you are probably quite aware of the second scenario. I see plenty of stored procedures being written this way – where the query writer populates a temporary table, so that they can make updates to it without risking the original table. SQL does this too. Imagine I’m updating my contact list, and some of my changes move data to later in the book. If I’m not careful, I might update the same row a second time (or even enter an infinite loop, updating it over and over). A spool can make sure that I don’t, by using a copy of the data. This problem is known as the Halloween Effect (not because it’s spooky, but because it was discovered in late October one year). As I’m sure you can imagine, the kind of spool you’d need to protect against the Halloween Effect would be eager, because if you’re only handling one row at a time, then you’re not providing the protection... An eager spool will block the flow of data, waiting until it has fetched all the data before serving it up to the operator that called it. In the query below I’m forcing the Query Optimizer to use an index which would be upset if the Name column values got changed, and we see that before any data is fetched, a spool is created to load the data into. This doesn’t stop the index being maintained, but it does mean that the index is protected from the changes that are being done. There are plenty of times, though, when you need data repeatedly. Consider the query I put above. A simple join, but then counting the number of rows that came through. The way that this has executed (be it ideal or not) is to ask that a Table Spool be populated. That’s the Table Spool operator on the top row. That spool can produce the same set of rows repeatedly. This is the behaviour that we see in the bottom half of the plan, where a join is being done between the rows that are being sourced from the spool – one input being aggregated and one not – producing the columns that we need for the query.
Table v Index
When considering whether to use a Table Spool or an Index Spool, the question that the Query Optimizer needs to answer is whether there is sufficient benefit to storing the data in a b-tree. The idea of having data in indexes is great, but of course there is a cost to maintaining them. Here we’re creating a temporary structure for data, and there is a cost associated with populating each row into its correct position according to a b-tree, as opposed to simply adding it to the end of the list of rows in a heap. Using a b-tree could even result in page-splits as the b-tree is populated, so there had better be a reason to use that kind of structure.
That all depends on how the data is going to be used in other parts of the plan. If you’ve ever thought that you could use a temporary index for a particular query, well this is it – and the Query Optimizer can do that if it thinks it’s worthwhile. It’s worth noting that just because a Spool is populated using an Index Spool, it can still be fetched using a Table Spool. Whether a Spool used as a source shows as a Table Spool or an Index Spool depends more on whether a Seek predicate is used than on the underlying structure.
Recursive CTE
I’ve already shown you an example of spooling when the OVER clause is used. You might see them being used whenever you have data that is needed multiple times, and CTEs are quite common here. With the definition of a set of data described in a CTE, if the query writer is leveraging this by referring to the CTE multiple times, and there’s no simplification to be leveraged, a spool could theoretically be used to avoid reapplying the CTE’s logic. Annoyingly, this doesn’t happen. Consider this query, which really looks like it’s using the same data twice. I’m creating a set of data (which is completely deterministic, by the way), and then joining it back to itself. There seems to be no reason why it shouldn’t use a spool for the set described by the CTE, but it doesn’t. On the other hand, if we don’t pull as many columns back, we might see a very different plan. You see, CTEs, like all sub-queries, are simplified out to figure out the best way of executing the whole query. My example is somewhat contrived, and although there are plenty of cases when it’s nice to give the Query Optimizer hints about how to execute queries, it usually doesn’t do a bad job, even without spooling (and you can always use a temporary table). When recursion is used, though, spooling should be expected. Consider what we’re asking for in a recursive CTE. We’re telling the system to construct a set of data using an initial query, and then use that set as a source for another query, piping this back into the same set and back around. It’s very much a spool. The analogy of cotton is long gone here, as the idea of having a continual loop of cotton feeding onto a spool and off again doesn’t quite fit, but that’s what we have here. Data is being fed onto the spool, and getting pulled out a second time when the spool is used as a source. (This query is running on AdventureWorks, which has a ManagerID column in HumanResources.Employee, not AdventureWorks2012.) The Index Spool operator is sucking rows into it – lazily. It has to be lazy, because at the start, there’s only one row to be had. However, as rows get populated onto the spool, the Table Spool operator on the right can return rows when asked, ending up with more rows (potentially) getting back onto the spool, ready for the next round. (The Assert operator is merely checking to see if we’ve reached the MAXRECURSION point – it vanishes if you use OPTION (MAXRECURSION 0), which you can try yourself if you like.) Spools are useful. Don’t lose sight of that. Every time you use temporary tables or table variables in a stored procedure, you’re essentially doing the same – don’t get upset at the Query Optimizer for doing so, even if you think the spool looks like an expensive part of the query. I hope you’re enjoying this T-SQL Tuesday. Why not head over to my post that is hosting it this month to read about some other plan operators?
At some point I’ll write a summary post – once I have, you should find a comment below pointing at it. @rob_farley
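    A minimal sketch of the kind of recursive query described above, assuming the AdventureWorks HumanResources.Employee table with its ManagerID column:

        WITH EmpCTE AS
        (
            SELECT EmployeeID, ManagerID, 0 AS Lvl
            FROM HumanResources.Employee
            WHERE ManagerID IS NULL                         -- anchor member: the top of the tree
            UNION ALL
            SELECT e.EmployeeID, e.ManagerID, c.Lvl + 1
            FROM HumanResources.Employee e
            JOIN EmpCTE c ON e.ManagerID = c.EmployeeID     -- recursive member: reads back from the spool
        )
        SELECT EmployeeID, ManagerID, Lvl
        FROM EmpCTE
        OPTION (MAXRECURSION 0);                            -- removes the Assert operator, as noted above

    The plan for a query of this shape shows the Index Spool being populated lazily while the Table Spool on the recursive side feeds rows back around, exactly as described in the post.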

    Read the article

  • Connect to localdb using Sql Server management studio

    - by Magnus Karlsson
I was trying to find my database for LocalDB under localhost etc. but had no luck. The following led me to just connect to it; kind of obvious really when you look at your connection string, but... it's Sunday morning or something. From: http://blogs.msdn.com/b/sqlexpress/archive/2011/07/12/introducing-localdb-a-better-sql-express.aspx High-Level Overview: After the lengthy introduction it's time to take a look at LocalDB from the technical side. At a very high level, LocalDB has the following key properties: LocalDB uses the same sqlservr.exe as the regular SQL Express and other editions of SQL Server. The application uses the same client-side providers (ADO.NET, ODBC, PDO and others) to connect to it and operates on data using the same T-SQL language as provided by SQL Express. LocalDB is installed once on a machine (per major SQL Server version). Multiple applications can start multiple LocalDB processes, but they are all started from the same sqlservr.exe executable file from the same disk location. LocalDB doesn't create any database services; LocalDB processes are started and stopped automatically when needed. The application just connects to "Data Source=(localdb)\v11.0" and the LocalDB process is started as a child process of the application. A few minutes after the last connection to this process is closed, the process shuts down. LocalDB connections support the AttachDbFileName property, which allows developers to specify a database file location. LocalDB will attach the specified database file and the connection will be made to it.
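    So, for example, connecting is as simple as pointing any client at the automatic instance name (the database file path below is hypothetical):

        sqlcmd -S (localdb)\v11.0

        Data Source=(localdb)\v11.0;AttachDbFileName=C:\Data\MyApp.mdf;Integrated Security=True

    In SSMS, just type (localdb)\v11.0 as the server name in the Connect to Server dialog.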

    Read the article

  • SQL Saturday #220 - Atlanta - Pre-Conference Scholarships!

    - by Most Valuable Yak (Rob Volk)
We Want YOU…To Learn! AtlantaMDF and Idera are teaming up to find a few good people. If you are:
- A student looking to work in the database or business intelligence fields
- A database professional who is between jobs or wants a better one
- A developer looking to step up to something new
- On a limited budget and can’t afford professional SQL Server training
- Able to attend training from 9 to 5 on May 17, 2013
AtlantaMDF is presenting 5 Pre-Conference Sessions (pre-cons) for SQL Saturday #220! And thanks to Idera’s sponsorship, we can offer one free ticket to each of these sessions to eligible candidates! That means one scholarship per Pre-Con! One Recipient Each will Attend:
Denny Cherry: SQL Server Security http://sqlsecurity.eventbrite.com/
Adam Machanic: Surfing the Multicore Wave: Processors, Parallelism, and Performance http://surfmulticore.eventbrite.com/
Stacia Misner: Languages of BI http://languagesofbi.eventbrite.com/
Bill Pearson: Practical Self-Service BI with PowerPivot for Excel http://selfservicebi.eventbrite.com/
Eddie Wuerch: The DBA Skills Upgrade Toolkit http://dbatoolkit.eventbrite.com/
If you are interested in attending these pre-cons, send an email by April 30, 2013 to [email protected] and tell us:
- Why you are a good candidate to receive this scholarship
- Which sessions you’d like to attend, and why (list multiple sessions in order of preference)
- What the session will teach you and how it will help you achieve your goals
The emails will be evaluated by the good folks at Midlands PASS in Columbia, SC. The recipients will be notified by email and announcements made on May 6, 2013. GOOD LUCK! P.S. - Don't forget that SQLSaturday #220 offers free* training in addition to the pre-cons! You can find more information about SQL Saturday #220 at http://www.sqlsaturday.com/220/eventhome.aspx. View the scheduled sessions at http://www.sqlsaturday.com/220/schedule.aspx and register for them at http://www.sqlsaturday.com/220/register.aspx. * Registration charges a $10 fee to cover lunch expenses.

    Read the article

  • SQL Server: How do I generate the table schema and populate it with inserts in a script?

    - by Paula DiTallo
Originally posted on: http://geekswithblogs.net/AskPaula/archive/2014/05/20/156469.aspx In SSMS, there's a Generate Scripts utility (read: only available in version 2008 and up). Here are the steps you would need to take to make use of the utility:
1. Right click on the database you're interested in and go to Tasks -> Generate Scripts.
2. Select the tables and/or any other objects you'd like in order to get them into the script.
3. Navigate to Set scripting options and click on Advanced.
4. Under the General category, navigate to Type of data to script.
5. Select the Schema and Data option to get the insert statements generated, then click OK.
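    The resulting script contains both the DDL and the data, along the lines of this sketch (table and values are hypothetical):

        CREATE TABLE [dbo].[Customers](
            [CustomerId] [int] NOT NULL,
            [Name] [nvarchar](100) NOT NULL
        );
        INSERT [dbo].[Customers] ([CustomerId], [Name]) VALUES (1, N'Contoso');
        INSERT [dbo].[Customers] ([CustomerId], [Name]) VALUES (2, N'Fabrikam');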

    Read the article

  • Visual Basic link to SQL output to Word

    - by CLO_471
I am in need of some advice/references. I am currently trying to develop a legal document interface. There are certain fields that I need to query out of my SQL db and have those fields output into a document that can be printed. I am trying to develop a user interface where people can enter fields that will output to a document template, but at the same time I need the template to be able to pull data from the SQL database. This is the reason why I think that VB might be my best choice, and because it is one of the only OOP languages I am familiar with presently. Does anyone know the best way to handle this type of job? I know that you can use VBA within MS Word and have the form output variables to a Word template. But is there a way to have the Word document also pull information from the SQL db? Is the best option to use VB linked to SQL and run queries to get the information from the database and then have it output to a form within VB? Is it possible for VB to be linked to a SQL db and output variables and SQL fields to a Word template? I have looked into Mail Merge and I see that it allows users to pull data from an Access query, but I don't think it would be easy to automate, and it seems that users would need an advanced knowledge of MS Word and Access to handle this. I am not finding much useful information online so I came here. Any advice or references would be greatly appreciated. If there is a better way please let me know.

    Read the article

  • Reading a large SQL Errorlog

    - by steveh99999
I came across an interesting situation recently where a SQL instance had been configured with auditing of successful and failed logins being written to the errorlog. This meant that every time a user or the application connected to the SQL instance, an entry was written to the errorlog, which in turn meant huge SQL Server errorlogs. Opening an errorlog in the usual way, using SQL Server Management Studio, was extremely slow. Luckily, I was able to use xp_readerrorlog to work around this. Here are some example queries.

To show errorlog entries from the currently active log, just for today:

    DECLARE @now DATETIME
    DECLARE @midnight DATETIME
    SET @now = GETDATE()
    SET @midnight = DATEADD(d, DATEDIFF(d, 0, GETDATE()), 0)
    EXEC xp_readerrorlog 0, 1, NULL, NULL, @midnight, @now

To find out how big the current errorlog actually is, and what the earliest and most recent entries are in the errorlog:

    CREATE TABLE #temp_errorlog (Logdate DATETIME, ProcessInfo VARCHAR(20), Text VARCHAR(4000))
    INSERT INTO #temp_errorlog EXEC xp_readerrorlog 0 -- for current errorlog
    SELECT COUNT(*) AS 'Number of entries in errorlog',
           MIN(logdate) AS 'ErrorLog Starts',
           MAX(logdate) AS 'ErrorLog Ends'
    FROM #temp_errorlog
    DROP TABLE #temp_errorlog

To show just DBCC history information in the current errorlog:

    EXEC xp_readerrorlog 0, 1, 'dbcc'

To show backup errorlog entries in the current errorlog:

    CREATE TABLE #temp_errorlog (Logdate DATETIME, ProcessInfo VARCHAR(20), Text VARCHAR(4000))
    INSERT INTO #temp_errorlog EXEC xp_readerrorlog 0 -- for current errorlog
    SELECT * FROM #temp_errorlog WHERE ProcessInfo = 'Backup' ORDER BY Logdate
    DROP TABLE #temp_errorlog

xp_readerrorlog is an undocumented system stored procedure, so there is no official Microsoft link describing the parameters it takes; however, there's a good blog on this here. And if you do have a problem with huge errorlogs, please consider running the system stored procedure sp_cycle_errorlog on a nightly or regular basis. But if you do this, remember to change the number of errorlogs you retain; the default of 6 might not be sufficient for you.
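    A sketch of that housekeeping, assuming you want to cycle nightly and keep 30 logs (the retention value is an example; verify the registry path on your build, since it is written via the undocumented xp_instance_regwrite):

        EXEC sp_cycle_errorlog;   -- close the current errorlog and start a new one

        -- increase the number of errorlogs retained (the default is 6)
        EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
             N'Software\Microsoft\MSSQLServer\MSSQLServer',
             N'NumErrorLogs', REG_DWORD, 30;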

    Read the article

  • SQL Server 2008 R2: These are a Few of My Favorite Things

    - by smisner
This month's T-SQL Tuesday is hosted by Jorge Segarra (blog | twitter), who decided that we should write about our favorite new feature in SQL Server 2008 R2. The majority of my published work concentrates on Reporting Services, so the obvious answer for me is...Reporting Services. I can't pick just one thing in Reporting Services, so instead I thought I'd compile a list of my posts on the new features in Reporting Services 2008 R2:
- Map Wizard for spatial data (The World is But a Stage)
- Pagination features (I've Got Your Page Number)
- Lookup functions (Look Up, Look Down, Look All Around - Part I, Part II, Part III)
- Test Connection button (Testing, Testing 1-2-3)
- Conditional formatting based on render format, i.e. RenderFormat (As You Like It)
And I wrote an overview of the business intelligence features in SQL Server 2008 R2 for Microsoft Press in the free e-book, Introducing Microsoft SQL Server 2008 R2, if you're curious about what else is new in both the BI platform and the relational engine.

    Read the article

  • OLL Live webcast - Using SQL for Pattern Matching in Oracle Database

    - by KLaker
If you are interested in learning about our exciting new 12c SQL pattern matching feature, then mark your diaries. On Wednesday, October 30th at 8:00 am (US/Pacific time zone), Supriya Ananth, one of our top curriculum developers at Oracle, will be hosting an OLL webcast on our new SQL pattern matching feature. The ability to recognize patterns in a sequence of rows has been widely desired, but was not possible with SQL until now. Row pattern matching in native SQL improves application and development productivity and query efficiency for row-sequence analysis. With Oracle Database 12c you can use the new MATCH_RECOGNIZE clause to perform pattern matching in SQL to do the following:
- Logically partition and order the data using the PARTITION BY and ORDER BY clauses
- Use regular-expression syntax to define patterns of rows to seek using the PATTERN clause. These patterns are a powerful and expressive feature, applied to the pattern variables you define.
- Specify the logical conditions required to map a row to a row pattern variable in the DEFINE clause
- Define measures, which are expressions usable in the MEASURES clause of the SQL query
For more information and to register for this exciting webcast please visit the OLL Live website, see here: https://apex.oracle.com/pls/apex/f?p=44785:145:116820049307135::::P145_EVENT_ID,P145_PREV_PAGE:461,143. Please note: if the above link does not work, go to OLL (https://apex.oracle.com/pls/apex/f?p=44785:1:) and click the OLL Live icon (upper right, beneath the Login link, or the logout link if you are already logged in). The pattern matching webcast is listed on the calendar of events on 30 October.
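    As a taste of the syntax, here is a minimal sketch based on the classic stock ticker example from the Oracle documentation (table and column names are illustrative), which finds V-shaped price patterns:

        SELECT *
        FROM Ticker
        MATCH_RECOGNIZE (
            PARTITION BY symbol                       -- one search per stock symbol
            ORDER BY tstamp                           -- the pattern runs over time order
            MEASURES STRT.tstamp       AS start_tstamp,
                     LAST(DOWN.tstamp) AS bottom_tstamp,
                     LAST(UP.tstamp)   AS end_tstamp
            ONE ROW PER MATCH
            AFTER MATCH SKIP TO LAST UP
            PATTERN (STRT DOWN+ UP+)                  -- a fall followed by a rise: a V shape
            DEFINE
                DOWN AS DOWN.price < PREV(DOWN.price),
                UP   AS UP.price   > PREV(UP.price)
        ) MR;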

    Read the article

  • At what size of data does it become beneficial to move from SQL to NoSQL?

    - by wobbily_col
As a relational database programmer (most of the time), I read articles about how relational databases don't scale, while NoSQL solutions such as MongoDB do. As most of the databases I have developed so far have been small to mid scale, I have never had a problem that hasn't been solved by some indexing, query optimization or schema redesign. What sort of size would I expect to see MySQL struggling with? How many rows? (I know this is going to depend on the application and the type of data stored. The one that got me thinking was basically a genetics database, so it would have one main table and 3 or 4 lookup tables. The main table will contain, amongst other things, a chromosome reference and a position coordinate. It will likely get queried for the entries between two positions on a chromosome, to see what is stored there.)

    Read the article

  • With Choice Comes Complexity

    - by BuckWoody
    "Complex" may be defined as "Having many steps, details or parts." Many of Microsoft's products, including SQL Server, can be complex. I'm stating what most data professionals already know - there's usually multiple ways to do things in SQL Server. For instance, to import some data into a table you can use graphical tools, SQLCMD, bcp, SQL Server Integration Services, BULK INSERT, even PowerShell, just to name a few tools at your disposal. That's really not the issue, though. The bigger issue is that there are normally multiple thought-processes, or methods, that you have available for a task. That's both a strength and a weakness. If things were more simple, you would have fewer choices. Sometimes that's a good thing. Just tell me what I need to do and I'll do it. However, your particular situation may not fit that tool or process, so having more options increases your ability to get your job done the way you need to do it. On the other hand, that's more for you to learn, which is harder. There's another side of this benefit/difficulty that you need to be aware of. Even if you're quite good at what you do, keep in mind that the way you know how to do something may not be the only way to do it. Keep your mind open to new possibilities, and most importantly - to new knowledge. SQL Server professionals teach me something new every day. So embrace the complexity - on balance, it's a good thing! Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article

  • Could someone help me understand SQL TDE Database encryption?

    - by SLC
    I don't quite follow how it works. According to the MSDN Article there is a big hierarchy of keys protecting other keys and passwords. At some point the database is encrypted. You query the database which is encrypted, and it works seamlessly. If you're able to simply connect to the database as normal and not have to worry about any of the encryption from a developer point of view, how exactly is it secure? Surely anyone can simply connect and do select * from x and the data is revealed. Sorry my question is a bit scattered, I am just very confused by the article.
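    For reference, the key hierarchy the MSDN article describes boils down to a few statements; a sketch with hypothetical names (the point being that the certificate protects the database encryption key, so an attacker copying the files without the certificate gets only ciphertext):

        USE master;
        CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password here>';
        CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';

        USE MyDatabase;
        CREATE DATABASE ENCRYPTION KEY
            WITH ALGORITHM = AES_256
            ENCRYPTION BY SERVER CERTIFICATE TDECert;   -- the cert protects the DEK
        ALTER DATABASE MyDatabase SET ENCRYPTION ON;

    TDE protects data at rest (the .mdf, .ldf and backup files); anyone who can log in with SELECT permission still sees plaintext, which is why it is transparent to developers.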

    Read the article

  • Using SQL tables for storing user created level stats. Is there a better way?

    - by Ivan
I am developing a racing game in which players can create their own tracks and upload them to a server. Players will be able to compare their best track times to their friends' and see world records. I was going to generate a table for each track submitted, to store the best times of each player who plays the track. However, I can't predict how many will be uploaded, and I imagine too many tables might cause problems, or is this a valid method? I considered saving each player's best times in a string in a single table field like so: level1:00.45;level2:00.43;level3:00.12. If I did this I wouldn't need a separate table for each level (each level could just have a row in a 'WorldRecords' table). However, this just causes another problem because the text would eventually reach the limit for varchar length. I also considered storing the times data in XML files. This would avoid database issues, and server disk space can be increased if needed. But I imagine this would be very slow: to update one player's best time on one level, I would have to check every node in the file to find their time record to update. Apologies for the wall of text. Any suggestions would be appreciated.
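    One common alternative, sketched here with hypothetical names, is a single normalized times table keyed on (TrackId, PlayerId), which avoids both per-track tables and delimited strings:

        CREATE TABLE BestTimes (
            TrackId  INT NOT NULL,
            PlayerId INT NOT NULL,
            BestTime DECIMAL(7,3) NOT NULL,   -- seconds
            PRIMARY KEY (TrackId, PlayerId)   -- one best time per player per track
        );

        -- world record for one track:
        -- SELECT MIN(BestTime) FROM BestTimes WHERE TrackId = ?;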

    Read the article

  • Is it Considered Good SQL practice to use GUID to link multiple tables to same Id field?

    - by Mallow
I want to link several tables to a many-to-many (m2m) table. One table would be called Location, and this table would always be on one side of the m2m table. But I will have a list of several tables, for example: Cards, Photographs, Illustrations, Vectors. Would using GUIDs between these tables to link them to a single column in another table be considered good practice? Will MySQL let me have it automatically cascade updates and deletes? If so, would multiple cascades lead to issues? UPDATE: I've read that a GUID (a hex number) generally takes up more space in a database and slows queries down. However, I could still generate 'unique' ids by just having the table's initials as part of the id, so that the Cards table's id would be c0001, and Illustrations' would be I001. Regardless of this change, the question still stands.
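    For illustration, a sketch of the m2m table with hypothetical names; one caveat with this design is that a single GUID column cannot carry a foreign key to several different parent tables at once, so automatic ON UPDATE/ON DELETE CASCADE cannot be declared against all of them:

        CREATE TABLE LocationLink (
            location_id INT NOT NULL,          -- FK to Location, can cascade normally
            item_guid   CHAR(36) NOT NULL,     -- GUID of a row in Cards/Photographs/Illustrations/Vectors
            item_type   VARCHAR(20) NOT NULL,  -- which table the GUID belongs to
            PRIMARY KEY (location_id, item_guid),
            FOREIGN KEY (location_id) REFERENCES Location(id)
                ON UPDATE CASCADE ON DELETE CASCADE
        );

    Cleanup of the GUID side would have to be handled by triggers or application code rather than by cascades.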

    Read the article

  • Automating the Backup of a SQL Server 2008 Express Database

    - by JaydPage
Steps Involved:
1) Create a Database Backup Script.
2) Create a Scheduled Task to Run the Backup Script.

1. Create a Database Backup Script
a) Download and install SQL Server Management Studio. This is a free tool available on the Microsoft website.
b) Once Management Studio is installed, launch it and connect to the SQL Server instance that contains the database that you want to back up.
c) Right click on the database and then in the menu choose Tasks -> Back Up...
d) This will open up a window where you can choose your backup options; once you are happy with the options, click on the "Script" button near the top and select the "Script Action to File" option.
e) Save the file.

2. Create a Scheduled Task to Run the Backup Script
a) Open up Windows Task Scheduler.
b) Create a new task using the wizard; when asked to select a program, browse to C:\Program Files\Microsoft SQL Server\100\Tools\binn\SQLCMD.exe
c) There are 2 arguments that need to be set: -S \SERVER_INSTANCE_NAME -i "PATH_OF_SQLBACKUP_SCRIPT", where SERVER_INSTANCE_NAME is the name of the instance of SQL Server that contains your database, e.g. (local), and PATH_OF_SQLBACKUP_SCRIPT is the path of your backup script, e.g. "C:\Program Files\Microsoft SQL Server\DatastoreBackup.sql"
d) Adjust the task to run at the desired times and you are done.
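    The script saved in step 1 e) will look something like this minimal sketch (database name and path are hypothetical):

        BACKUP DATABASE [MyDatabase]
        TO DISK = N'C:\Backups\MyDatabase.bak'
        WITH NOFORMAT, NOINIT,
             NAME = N'MyDatabase-Full Database Backup',
             SKIP, STATS = 10;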

    Read the article

  • Migration Guide: Migrating to SQL Server 2012 Failover Clustering and Availability Groups from Prior Clustering and Mirroring Deployments

This paper provides guidance for customers who, prior to SQL Server 2012, deployed SQL Server failover clustering for local high availability and database mirroring for disaster recovery, and who want to migrate to SQL Server AlwaysOn. It describes the corresponding SQL Server AlwaysOn scenario and the migration paths to SQL Server AlwaysOn. It also contains the important considerations that you must understand in order to successfully migrate to an HADR solution based on SQL Server AlwaysOn technology, which implements AlwaysOn Failover Cluster Instances for high availability and AlwaysOn Availability Groups for disaster recovery.

    Read the article

  • SQL Azure table size

    - by user224564
In MS SQL 2005, when I want to get the size of a table in MB, I use EXEC sp_spaceused 'table'. Is there any way to get the space used by a particular table in SQL Azure, using some query or API?
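    One workaround (a sketch; sys.dm_db_partition_stats is available in SQL Azure) is to sum reserved pages per table, at 8 KB per page:

        SELECT o.name AS table_name,
               SUM(ps.reserved_page_count) * 8.0 / 1024 AS size_mb   -- pages are 8 KB each
        FROM sys.dm_db_partition_stats ps
        JOIN sys.objects o ON o.object_id = ps.object_id
        WHERE o.type = 'U'                                           -- user tables only
        GROUP BY o.name
        ORDER BY size_mb DESC;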

    Read the article

  • LINQ to SQL SOUNDEX - possible?

    - by Steve Hayes
Hello, I have done a little bit of research on this and looked through a few articles, both here on StackOverflow and in some blog posts, but haven't found an exact answer. I also read that it is possible to do it using the 4.0 framework, but have yet to find any supporting evidence. So my question: is it possible to perform SOUNDEX via a LINQ to SQL query?
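    One workaround, sketched here rather than confirmed as a built-in LINQ to SQL feature, is to wrap the server's SOUNDEX in a scalar UDF and map that function onto the DataContext (via the designer or a FunctionAttribute mapping), so that calls inside a query translate to SQL:

        CREATE FUNCTION dbo.udf_Soundex (@value NVARCHAR(100))
        RETURNS CHAR(4)
        AS
        BEGIN
            RETURN SOUNDEX(@value);   -- expose the built-in so LINQ to SQL can translate calls to it
        END;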

    Read the article
