Search Results

Search found 27530 results on 1102 pages for 'sql truncate'.

  • SQLS Timeouts - High Reads in Profiler

    - by lb01
    I've audited a SQLS2008 server with Profiler for one day.. the overhead didn't seem to trouble this new client my company has. They are using a legacy VB6 application as a front-end. They're experiencing timeouts once SQLS RAM usage is high. The server is currently running x64 sqls2008 on a VM with nearly 9 GB of RAM. SQL Server's 'max server memory option' is currently set to 6GB. I've put the results of the trace in a table and queried them using this query. SELECT TextData, ApplicationName, Reads FROM [TraceWednesday] WHERE textdata is not null and EventClass = 12 GROUP BY TextData, ApplicationName, Reads ORDER BY Reads DESC As I expected, some values are very high. Top Reads, in pages. 2504188 1965910 1445636 1252433 1239108 1210153 1088580 1072725 Am I correct in thinking that the top one (2504188 pages) is 20033504 KB, which then is roughly ~20'000 MB, 20GB? These queries are often executed and can take quite some time to run. Eventually RAM is used up because of the cache fattening, and timeouts occur once SQL cannot 'splash' pages in the buffer pool as much. Costs go up. Am I correct in my understanding? I've read that I should tune the associated T-SQL and create appropriate indices. Obviously cutting down the I/O would make SQL Server use less RAM. OR, maybe it might just slow down the process of chewing up the whole RAM. If a lot fewer pages are read, maybe it'll all run much better even when usage is high? (less time swapping, etc.) Currently, our only option is to restart SQL once a week when RAM usage is high; suddenly the timeouts disappear. SQL breathes again. I'm sure lots of DBAs have been in this situation.. I'm asking before I start digging out all of the bad T-SQL and putting indices here and there: is there something else I can do? Any advice beyond what I already know (not much yet..) is much appreciated. Leo.
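
    For reference, logical reads are counted in 8 KB pages, so the conversion the poster is doing can be added straight into the trace query. A minimal sketch, reusing the [TraceWednesday] table and the same filter as in the post (EventClass 12, SQL:BatchCompleted):

        SELECT  TextData,
                ApplicationName,
                Reads,
                Reads * 8 / 1024.0 AS ReadMB  -- 2504188 pages * 8 KB = 20033504 KB, roughly 19.1 GB
        FROM    [TraceWednesday]
        WHERE   TextData IS NOT NULL
                AND EventClass = 12
        GROUP BY TextData, ApplicationName, Reads
        ORDER BY Reads DESC;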

    Read the article

  • NLOG to output db.out

    - by Coppermill
    I would like to use NLog to write the SQL that LINQ to SQL generates to my log file. For example, db.Log = Console.Out reports the generated SQL to the console (see http://www.bryanavery.co.uk/post/2009/03/06/Viewing-the-SQL-that-is-generated-from-LINQ-to-SQL.aspx). How can I redirect that log output to NLog?

    Read the article

  • Real tortoises keep it slow and steady. How about the backups?

    - by Maria Zakourdaev
      … Four tortoises were playing in the backyard when they decided they needed hibiscus flower snacks. They pooled their money and sent the smallest tortoise out to fetch the snacks. Two days passed and there was no sign of the tortoise. "You know, she is taking a lot of time", said one of the tortoises. A little voice from just out side the fence said, "If you are going to talk that way about me I won't go." Is it too much to request from the quite expensive 3rd party backup tool to be a way faster than the SQL server native backup? Or at least save a respectable amount of storage by producing a really smaller backup files?  By saying “really smaller”, I mean at least getting a file in half size. After Googling the internet in an attempt to understand what other “sql people” are using for database backups, I see that most people are using one of three tools which are the main players in SQL backup area:  LiteSpeed by Quest SQL Backup by Red Gate SQL Safe by Idera The feedbacks about those tools are truly emotional and happy. However, while reading the forums and blogs I have wondered, is it possible that many are accustomed to using the above tools since SQL 2000 and 2005.  This can easily be understood due to the fact that a 300GB database backup for instance, using regular a SQL 2005 backup statement would have run for about 3 hours and have produced ~150GB file (depending on the content, of course).  Then you take a 3rd party tool which performs the same backup in 30 minutes resulting in a 30GB file leaving you speechless, you run to management persuading them to buy it due to the fact that it is definitely worth the price. In addition to the increased speed and disk space savings you would also get backup file encryption and virtual restore -  features that are still missing from the SQL server. But in case you, as well as me, don’t need these additional features and only want a tool that performs a full backup MUCH faster AND produces a far smaller backup file (like the gain you observed back in SQL 2005 days) you will be quite disappointed. SQL Server backup compression feature has totally changed the market picture. Medium size database. Take a look at the table below, check out how my SQL server 2008 R2 compares to other tools when backing up a 300GB database. It appears that when talking about the backup speed, SQL 2008 R2 compresses and performs backup in similar overall times as all three other tools. 3rd party tools maximum compression level takes twice longer. Backup file gain is not that impressive, except the highest compression levels but the price that you pay is very high cpu load and much longer time. Only SQL Safe by Idera was quite fast with it’s maximum compression level but most of the run time have used 95% cpu on the server. Note that I have used two types of destination storage, SATA 11 disks and FC 53 disks and, obviously, on faster storage have got my backup ready in half time. Looking at the above results, should we spend money, bother with another layer of complexity and software middle-man for the medium sized databases? I’m definitely not going to do so.  Very large database As a next phase of this benchmark, I have moved to a 6 terabyte database which was actually my main backup target. Note, how multiple files usage enables the SQL Server backup operation to use parallel I/O and remarkably increases it’s speed, especially when the backup device is heavily striped. 
SQL Server supports a maximum of 64 backup devices for a single backup operation but the most speed is gained when using one file per CPU, in the case above 8 files for a 2 Quad CPU server. The impact of additional files is minimal.  However, SQLsafe doesn’t show any speed improvement between 4 files and 8 files. Of course, with such huge databases every half percent of the compression transforms into the noticeable numbers. Saving almost 470GB of space may turn the backup tool into quite valuable purchase. Still, the backup speed and high CPU are the variables that should be taken into the consideration. As for us, the backup speed is more critical than the storage and we cannot allow a production server to sustain 95% cpu for such a long time. Bottomline, 3rd party backup tool developers, we are waiting for some breakthrough release. There are a few unanswered questions, like the restore speed comparison between different tools and the impact of multiple backup files on restore operation. Stay tuned for the next benchmarks.    Benchmark server: SQL Server 2008 R2 sp1 2 Quad CPU Database location: NetApp FC 15K Aggregate 53 discs Backup statements: No matter how good that UI is, we need to run the backup tasks from inside of SQL Server Agent to make sure they are covered by our monitoring systems. I have used extended stored procedures (command line execution also is an option, I haven’t noticed any impact on the backup performance). SQL backup LiteSpeed SQL Backup SQL safe backup database <DBNAME> to disk= '\\<networkpath>\par1.bak' , disk= '\\<networkpath>\par2.bak', disk= '\\<networkpath>\par3.bak' with format, compression EXECUTE master.dbo.xp_backup_database @database = N'<DBName>', @backupname= N'<DBName> full backup', @desc = N'Test', @compressionlevel=8, @filename= N'\\<networkpath>\par1.bak', @filename= N'\\<networkpath>\par2.bak', @filename= N'\\<networkpath>\par3.bak', @init = 1 EXECUTE master.dbo.sqlbackup '-SQL "BACKUP DATABASE <DBNAME> TO DISK= ''\\<networkpath>\par1.sqb'', DISK= ''\\<networkpath>\par2.sqb'', DISK= ''\\<networkpath>\par3.sqb'' WITH DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 4, INIT"' EXECUTE master.dbo.xp_ss_backup @database = 'UCMSDB', @filename = '\\<networkpath>\par1.bak', @backuptype = 'Full', @compressionlevel = 4, @backupfile = '\\<networkpath>\par2.bak', @backupfile = '\\<networkpath>\par3.bak' If you still insist on using 3rd party tools for the backups in your production environment with maximum compression level, you will definitely need to consider limiting cpu usage which will increase the backup operation time even more: RedGate : use THREADPRIORITY option ( values 0 – 6 ) LiteSpeed : use  @throttle ( percentage, like 70%) SQL safe :  the only thing I have found was @Threads option.   Yours, Maria
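
    For reference, if you want to run the same comparison against your own servers, the native backup history already holds the raw and compressed sizes. A minimal sketch, assuming SQL Server 2008 or later (where msdb.dbo.backupset exposes compressed_backup_size):

        -- Compare size and duration of recent full backups per database
        SELECT  bs.database_name,
                bs.backup_start_date,
                DATEDIFF(MINUTE, bs.backup_start_date, bs.backup_finish_date)              AS duration_min,
                bs.backup_size            / 1024 / 1024                                    AS backup_mb,
                bs.compressed_backup_size / 1024 / 1024                                    AS compressed_mb,
                CAST(100.0 * (1 - bs.compressed_backup_size / bs.backup_size) AS DECIMAL(5, 2)) AS saving_pct
        FROM    msdb.dbo.backupset AS bs
        WHERE   bs.type = 'D'   -- full backups only
        ORDER BY bs.backup_finish_date DESC;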

    Read the article

  • Speed up SQL Server queries with PREFETCH

    - by Akshay Deep Lamba
    Problem: The SAN data volume has a throughput capacity of 400MB/sec; however my query is still running slow and it is waiting on I/O (PAGEIOLATCH_SH). Windows Performance Monitor shows a data volume speed of 4MB/sec. Where is the problem and how can I find it? Solution: This is another summary of a great article published by R. Meyyappan at www.sqlworkshops.com. In my opinion, this is the first article that highlights and explains with working examples how PREFETCH determines the performance of a Nested Loop join. First of all, I just want to recall that Prefetch is a mechanism with which SQL Server can fire up many I/O requests in parallel for a Nested Loop join. When SQL Server executes a Nested Loop join, it may or may not enable Prefetch according to the number of rows in the outer table. If the number of rows in the outer table is greater than 25, SQL Server will enable and use Prefetch to speed up query performance, but it will not if it is less than 25 rows. In this section we are going to see different scenarios where Prefetch is automatically enabled or disabled. These examples only use two tables, RegionalOrders and Orders. If you want to create the sample tables and sample data, please visit www.sqlworkshops.com. The breakdown of the data in the RegionalOrders table is shown below and the Orders table contains about 6 million rows. In this first example, I am creating a stored procedure against the two tables and then executing it. Before running the stored procedure, I am going to include the actual execution plan.

        --Example provided by www.sqlworkshops.com
        --Create procedure that pulls orders based on City
        --Do not forget to include the actual execution plan
        CREATE PROC RegionalOrdersProc @City CHAR(20)
        AS
        BEGIN
            DECLARE @OrderID INT, @OrderDetails CHAR(200)
            SELECT @OrderID = o.OrderID, @OrderDetails = o.OrderDetails
            FROM RegionalOrders ao INNER JOIN Orders o ON (o.OrderID = ao.OrderID)
            WHERE City = @City
        END
        GO
        SET STATISTICS time ON
        GO
        --Example provided by www.sqlworkshops.com
        --Execute the procedure with parameter SmallCity1
        EXEC RegionalOrdersProc 'SmallCity1'
        GO

    After running the stored procedure, if we right-click on the Clustered Index Scan and click Properties we can see the Estimated Number of Rows is 24. If we right-click on Nested Loops and click Properties we do not see Prefetch, because it is disabled. This behavior was expected, because the number of rows containing the value ‘SmallCity1’ in the outer table is less than 25. Now, if I run the same procedure with parameter ‘BigCity’, will Prefetch be enabled?

        --Example provided by www.sqlworkshops.com
        --Execute the procedure with parameter BigCity
        --We are using the cached plan
        EXEC RegionalOrdersProc 'BigCity'
        GO

    As we can see from the below screenshot, Prefetch is not enabled and the query takes around 7 seconds to execute. This is because the query used the cached plan from ‘SmallCity1’ that had Prefetch disabled. Please note that even though we have 999 rows for ‘BigCity’, the Estimated Number of Rows is still 24. Finally, let’s clear the procedure cache to trigger a new optimization and execute the procedure again.

        DBCC FREEPROCCACHE
        GO
        EXEC RegionalOrdersProc 'BigCity'
        GO

    This time, our procedure runs under a second, Prefetch is enabled and the Estimated Number of Rows is 999. The RegionalOrdersProc can be optimized by using the below example where we are using an optimizer hint. I have also shown some other hints that could be used as well.
        --Example provided by www.sqlworkshops.com
        --You can fix the issue by using any of the following hints
        --Create procedure that pulls orders based on City
        DROP PROC RegionalOrdersProc
        GO
        CREATE PROC RegionalOrdersProc @City CHAR(20)
        AS
        BEGIN
            DECLARE @OrderID INT, @OrderDetails CHAR(200)
            SELECT @OrderID = o.OrderID, @OrderDetails = o.OrderDetails
            FROM RegionalOrders ao INNER JOIN Orders o ON (o.OrderID = ao.OrderID)
            WHERE City = @City
            --Hinting the optimizer to use SmallCity2 for estimation
            OPTION (optimize FOR (@City = 'SmallCity2'))
            --Hinting the optimizer to estimate for the current parameters
            --option (recompile)
            --Hinting the optimizer not to use the histogram but rather
            --the density for estimation (average of all 3 cities)
            --option (optimize for (@City UNKNOWN))
            --option (optimize for UNKNOWN)
        END
        GO

    Conclusion: this tip was mainly aimed at illustrating how Prefetch can speed up query execution and how a different number of rows can trigger this.
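
    A lighter-weight alternative to flushing the whole plan cache, as the article does with DBCC FREEPROCCACHE, is to invalidate just this one plan. A minimal sketch, assuming the procedure lives in the dbo schema:

        -- Mark only this procedure for recompilation instead of clearing the whole cache
        EXEC sp_recompile N'dbo.RegionalOrdersProc';

        -- The next execution compiles a fresh plan; with 'BigCity' as the sniffed
        -- parameter, Prefetch should be enabled again
        EXEC RegionalOrdersProc 'BigCity';
        GO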

    Read the article

  • Customize Entity Framework SSDL & SQL Generation

    - by Dane Morgridge
    In almost every talk I have done on Entity Framework I get questions on how to do custom SSDL or SQL when using model first development.  Quite a few of these questions have required custom changes to the SSDL, which of course can be a problem if it is getting auto generated.  Luckily, there is a tool that can help.  In the Visual Studio Gallery on MSDN, there is the Entity Designer Database Generation Power Pack. You have the ability to select different generation strategies and it also allows you to inject custom T4 Templates into the generation workflow so that you can customize the SSDL and SQL generation.  When you select to generate a database from a model the dialog is replaced by one with more options:   You can clone the individual workflow for either the current project or current machine.  The templates are installed at “C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\Extensions\Microsoft\Entity Framework Tools\DBGen” on my local machine and you can make a copy of any template there.  If you clone the strategy and open it up, you will get the following workflow: Each item in the sequence is defining the execution of a T4 template.  The XAML for the workflow is listed below so you can see where the T4 files are defined.  You can simply make a copy of an existing template and make what ever changes you need.   1: <Activity x:Class="GenerateDatabaseScriptWorkflow" ... > 2: <x:Members> 3: <x:Property Name="Csdl" Type="InArgument(sde:EdmItemCollection)" /> 4: <x:Property Name="ExistingSsdl" Type="InArgument(s:String)" /> 5: <x:Property Name="ExistingMsl" Type="InArgument(s:String)" /> 6: <x:Property Name="Ssdl" Type="OutArgument(s:String)" /> 7: <x:Property Name="Msl" Type="OutArgument(s:String)" /> 8: <x:Property Name="Ddl" Type="OutArgument(s:String)" /> 9: <x:Property Name="SmoSsdl" Type="OutArgument(ss:SsdlServer)" /> 10: </x:Members> 11: <Sequence> 12: <dbtk:ProgressBarStartActivity /> 13: <dbtk:CsdlToSsdlTemplateActivity SsdlOutput="[Ssdl]" TemplatePath="$(VSEFTools)\DBGen\CSDLToSSDL_TPT.tt" /> 14: <dbtk:CsdlToMslTemplateActivity MslOutput="[Msl]" TemplatePath="$(VSEFTools)\DBGen\CSDLToMSL_TPT.tt" /> 15: <ded:SsdlToDdlActivity ExistingSsdlInput="[ExistingSsdl]" SsdlInput="[Ssdl]" DdlOutput="[Ddl]" /> 16: <dbtk:GenerateAlterSqlActivity DdlInputOutput="[Ddl]" DeployToScript="True" DeployToDatabase="False" /> 17: <dbtk:ProgressBarEndActivity ClosePopup="true" /> 18: </Sequence> 19: </Activity>   So as you can see, this tool enables you to make some pretty heavy customizations to how the SSDL and SQL get generated.  You can get more info and the tool can be downloaded from: http://visualstudiogallery.msdn.microsoft.com/en-us/df3541c3-d833-4b65-b942-989e7ec74c87.  There is a comments section on the site so make sure you let the team know what you like and what you don’t like.  Enjoy!

    Read the article

  • SQL Saturday and Exploring Data Privacy

    - by Johnm
    I have been highly impressed with the growth of the SQL Saturday phenomenon. It seems that an announcement for a new wonderful event finds its way to my inbox on a daily basis. I have had the opportunity to attend the first of the SQL Saturday's for Tampa, Chicago, Louisville and recently my home town of Indianapolis. It is my hope that there will be many more in my future. This past weekend I had the honor of being selected to speak amid a great line up of speakers at SQL Saturday #82 in Indianapolis. My session topic/title was "Exploring Data Privacy". Below is a brief synopsis of my session: Data Privacy in a Nutshell        - Definition of data privacy        - Examples of personally identifiable data        - Examples of Sensitive data Laws and Stuff        - Various examples of laws, regulations and policies that influence the definition of data privacy        - General rules of thumb that encompasses most laws Your Data Footprint        - Who has personal information about you?        - What are you exchanging data privacy for?        - The amazing resilience of data        - The cost of data loss Weapons of Mass Protection       - Data classification       - Extended properties       - Database Object Schemas       - An extraordinarily brief introduction of encryption       - The amazing data professional  <-the most important point of the entire session! The subject of data privacy is one that is quickly making its way to the forefront of the mind of many data professionals. Somewhere out there someone is storing personally identifiable and other sensitive data about you. In some cases it is kept reasonably secure. In other cases it is kept in total exposure without the consideration of its potential of damage to you. Who has access to it and how is it being used? Are we being unnecessarily required to supply sensitive data in exchange for products and services? These are just a few questions on everyone's mind. As data loss events of grand scale hit the headlines in a more frequent succession, the level of frustration and urgency for a solution increases. I assembled this session with the intent to raise awareness of sensitive data and remind us all that we, data professionals, are the ones who have the greatest impact and influence on how sensitive data is regarded and protected. Mahatma Gandhi once said "Be the change you want to see in the world." This is guidance that I keep near to my heart as I approached this topic of data privacy.
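
    As a small illustration of the "extended properties" and data classification points above, sensitive columns can be tagged directly in the schema so the classification travels with the database. A minimal sketch; the table and column names are hypothetical:

        -- Tag a column as sensitive so the classification lives with the schema
        EXEC sys.sp_addextendedproperty
             @name       = N'DataClassification',
             @value      = N'Confidential - PII',
             @level0type = N'SCHEMA', @level0name = N'dbo',
             @level1type = N'TABLE',  @level1name = N'Customer',
             @level2type = N'COLUMN', @level2name = N'SocialSecurityNumber';

        -- Review what has been classified so far
        SELECT objname AS column_name, value AS classification
        FROM   sys.fn_listextendedproperty(N'DataClassification',
                    N'SCHEMA', N'dbo', N'TABLE', N'Customer', N'COLUMN', DEFAULT);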

    Read the article

  • Using SQL Source Control with Fortress or Vault – Part 2

    - by AjarnMark
    In Part 1, I started talking about using Red-Gate’s newest version of SQL Source Control and how I really like it as a viable method to source control your database development.  It looks like this is going to turn into a little series where I will explain how we have done things in the past, and how life is different with SQL Source Control.  I will also explain some of my philosophy and methodology around deployment with these tools.  But for now, let’s talk about some of the good and the bad of the tool itself. More Kudos and Features I mentioned previously how impressed I was with the responsiveness of Red-Gate’s team.  I have been having an ongoing email conversation with Gyorgy Pocsi, and as I have run into problems or requested things behave a little differently, it has not been more than a day or two before a new Build is ready for me to download and test.  Quite impressive! I’m sure much of the requests I put in were already in the plans, so I can’t really take credit for them, but throughout this conversation, Red-Gate has implemented several features that were not in the first Early Access version.  Those include: Honoring the Fortress configuration option to require Work Item (Bug) IDs on check-ins. Adding the check-in comment text as a comment to the Work Item. Adding the list of checked-in files, along with the Fortress links for automatic History and DIFF view Updating the status of a Work Item on check-in (e.g. setting the item to Complete or, in our case “Dev-Complete”) Support for the Fortress 2.0 API, and not just the Vault Pro 5.1 API.  (See later notes regarding support for Fortress 2.0). These were all features that I felt we really needed to have in-place before I could honestly consider converting my team to using SQL Source Control on a regular basis.  Now that I have those, my only excuse is not wanting to switch boats on the team mid-stream.  So when we wrap up our current release in a few weeks, we will make the jump.  In the meantime, I will continue to bang on it to make sure it is stable.  It passed one test for stability when I did a test load of one of our larger database schemas into Fortress with SQL Source Control.  That database has about 150 tables, 200 User-Defined Functions and nearly 900 Stored Procedures.  The initial load to source control went smoothly and took just a brief amount of time. Warnings Remember that this IS still in pre-release stage and while I have not had any problems after that first hiccup I wrote about last time, you still need to treat it with a healthy respect.  As I understand it, the RTM is targeted for February.  There are a couple more features that I hope make it into the final release version, but if not, they’ll probably be coming soon thereafter.  Those are: A Browse feature to let me lookup the Work Item ID instead of having to remember it or look back in my Item details.  This is just a matter of convenience. I normally have my Work Item list open anyway, so I can easily look it up, but hey, why not make it even easier. A multi-line comment area.  The current space for writing check-in comments is a single-line text box.  I would like to have a multi-line space as I sometimes write lengthy commentary.  But I recognize that it is a struggle to get most developers to put in more than the word “fixed” as their comment, so this meets the need of the majority as-is, and it’s not a show-stopper for us. Merge.  SQL Source Control currently does not have a Merge feature.  
If two or more people make changes to the same database object, you will get a warning of the conflict and have to choose which one wins (and then manually edit to include the others’ changes).  I think it unlikely you will run into actual conflicts in Stored Procedures and Functions, but you might with Views or Tables.  This will be nice to have, but I’m not losing any sleep over it.  And I have multiple tools at my disposal to do merges manually, so really not a show-stopper for us. Automation has its limits.  As cool as this automation is, it has its limits and there are some changes that you will be better off scripting yourself.  For example, if you are refactoring table definitions, and want to change a column name, you can write that as a quick sp_rename command and preserve the data within that column.  But because this tool is looking just at a before and after picture, it cannot tell that you just renamed a column.  To the tool, it looks like you dropped one column and added another.  This is not a knock against Red-Gate.  All automated scripting tools have this issue, unless the are actively monitoring your every step to know exactly what you are doing.  This means that when you go to Deploy your changes, SQL Compare will script the change as a column drop and add, or will attempt to rebuild the entire table.  Unfortunately, neither of these approaches will preserve the existing data in that column the way an sp_rename will, and so you are better off scripting that change yourself.  Thankfully, SQL Compare will produce warnings about the potential loss of data before it does the actual synchronization and give you a chance to intercept the script and do it yourself. Also, please note that the current official word is that SQL Source Control supports Vault Professional 5.1 and later.  Vault Professional is the new name for what was previously known as Fortress.  (You can read about the name change on SourceGear’s site.)  The last version of Fortress was 2.x, and the API for Fortress 2.x is different from the API for Vault Pro.  At my company, we are currently running Fortress 2.0, with plans to upgrade to Vault Pro early next year.  Gyorgy was able to come up with a work-around for me to be able to use SQL Source Control with Fortress 2.0, even though it is not officially supported.  If you are using Fortress 2.0 and want to use SQL Source Control, be aware that this is not officially supported, but it is working for us, and you can probably get the work-around instructions from Red-Gate if you’re really, really nice to them. Upcoming Topics Some of the other topics I will likely cover in this series over the next few weeks are: How we used to do source control back in the old days (a few weeks ago) before SQL Source Control was available to Vault users What happens when you restore a database that is linked to source control Handling multiple development branches of source code Concurrent Development practices and handling Conflicts Deployment Tips and Best Practices A recap after using the tool for a while
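
    For reference, the column-rename case described above looks like this in T-SQL; sp_rename keeps the existing data in place, whereas a scripted drop-and-add of the column would lose it. A minimal sketch with hypothetical names:

        -- Rename a column in place, preserving its data
        EXEC sp_rename 'dbo.Customer.CustName', 'CustomerName', 'COLUMN';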

    Read the article

  • LLBLGen Pro v3.0 with Entity Framework v4.0 (12m video)

    - by FransBouma
    Today I recorded a video in which I illustrate some of the database-first functionality available in LLBLGen Pro v3.0. LLBLGen Pro v3.0 also supports model-first functionality, which I hope to illustrate in an upcoming video. LLBLGen Pro v3.0 is currently in beta and is scheduled to RTM some time in May 2010. It supports the following frameworks out of the box, with more scheduled to follow in the coming year: LLBLGen Pro RTL (our own o/r mapper framework), Linq to Sql, NHibernate and Entity Framework (v1 and v4). The video I linked to below illustrates the creation of an entity model for Entity Framework v4, by reverse engineering the SQL Server 2008 example database 'AdventureWorks'. The following topics (among others) are included in the video: Abbreviation support (example: convert 'Qty' into 'Quantity' during name construction) Flexible, framework specific settings Attribute definitions for various elements (so no requirement for buddy-classes or messing with generated code or templates) Retrieval of relational model data from a database Reverse engineering of tables into entities, automatically placed in groups Auto-creation of inheritance hierarchies Refactoring of entity fields into Value Type Definitions (DDD) Mapping a Typed view onto a stored procedure resultset Creation of a Typed list (definition of a query with a projection) on a set of related entities Validation and correction of found inconsistencies and errors Generating code using one of the pre-defined presets Illustration of the code in vs.net 2010 It also gives a good overview of what it takes with LLBLGen Pro v3.0 to start from a new project, point it to a database, get an entity model, perform tweaks and validation and generate code which is ready to run. I am no video recording expert so there's no audio and some mouse movements might be a little too quickly. If that's the case, please pause the video. It's rather big (52MB). Click here to open the HTML page with the video (Flash). Opens in a new window. LLBLGen Pro v3.0 is currently in beta (available for v2.x customers) and scheduled to be released somewhere in May 2010.

    Read the article

  • SSIS code smell – Unused columns in the dataflow

    - by jamiet
    A code smell is defined on Wikipedia as being a “symptom in the source code of a program that possibly indicates a deeper problem”. It’s a term commonly used by our code-writing brethren to describe sub-optimal code but I think the term can be applied equally well to SSIS packages too as I shall now explain One of my pet hates about SSIS development is packages that throw warnings of the form: The output column "ColumnName" (1358) on output "OLE DB Source Output" (1289) and component "OLE_SRC Name" (1279) is not subsequently used in the Data Flow task. Removing this unused output column can increase Data Flow task performance.  The warning is fairly self-explanatory – any column that appears in the data flow but doesn’t get used will throw this warning when the data flow is executed. Its not the negligible performance degradation that they cause that bothers me though, it’s the clutter that they cause in your log file/table. Take a look at the following screenshot if you don’t believe me: There are 231409 such warnings in the system that I took this screenshot from, that is 231409 log records that should not be there. The most infuriating thing about this warning is that it is so easily avoidable; eliminating such columns is a very quick and easy thing to do in the SSIS Designer. The only problem I see is that the warnings don’t occur until you execute the package – it would be preferable for the designer to have an unobtrusive way of informing you of them as well. Anyway, I digress… I consider such warnings to be a code smell because, to me, they’re symptomatic of a lack of due care and attention; a lack of developer discipline if you will. What other code smells can you think of when building SSIS packages? If I get a good list in the comments maybe I’ll compile them into a later blog post. @Jamiet Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!
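
    If you log to SQL Server, a quick way to measure how much of this clutter you have is to count the warnings per source. A minimal sketch, assuming the standard SQL Server log provider table dbo.sysssislog (sysdtslog90 on SSIS 2005):

        -- Count the 'unused column' warnings per package/component source
        SELECT  [source], COUNT(*) AS unused_column_warnings
        FROM    dbo.sysssislog
        WHERE   [event] = 'OnWarning'
                AND [message] LIKE '%is not subsequently used in the Data Flow task%'
        GROUP BY [source]
        ORDER BY unused_column_warnings DESC;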

    Read the article

  • SQLBeat Podcast – Episode 5 – Kevin Kline Talks With Me About SQL, Professional Development and Book Writin’

    - by SQLBeat
    I thought I would be a ball of intimated nerves when Kevin gladly agreed to speak with me on the podcast this past weekend.  After all, he is Kevin Kline of SQL in a Nutshell fame! As it turned out,  we had a comfortable and enlightening conversation on Apple MacBooks (is that what they are called?), our beginnings in the indistry, the Deep South, health care intiatives and 286′s. I almost pulled the plug when Kevin started down the Oracle path though, and for a moment he looked at me as if I was serious. As always on this podcast, it is all in good fun. The picture is of Kevin and I ( my shirt is mauve not pink by the way) at the after party for SQL Saturday 151 in Orlando, FL where he also did a Pre-Con to a sold out crowd of enthusiastic DBAs. I know they were enthusiastic even though I was not there because one of the attendees was a friend of mine who went on and on and on about the content, kind of like I am doing here.  So I will just stop that and let you proceed to listen. As always, I hope you enjoy and any feedback on this or future episodes is always welcome. Download the MP3

    Read the article

  • LLBLGen Pro v3.0 has been released!

    - by FransBouma
    After two years of hard work we released v3.0 of LLBLGen Pro today! V3.0 comes with a completely new designer which has been developed from the ground up for .NET 3.5 and higher. Below I'll briefly mention some highlights of this new release: Entity Framework (v1 & v4) support NHibernate support (hbm.xml mappings & FluentNHibernate mappings) Linq to SQL support Allows both Model first and Database first development, or a mixture of both .NET 4.0 support Model views Grouping of project elements Linq-based project search Value Type (DDD) support Multiple Database types in single project XML based project file Integrated template editor Relational Model Data management Flexible attribute declaration for code generation, no more buddy classes needed Fine-grained project validation Update / Create DDL SQL scripts Fast Text-DSL based Quick mode Powerful text-DSL based Quick Model functionality Per target framework extensible settings framework much much more... Of course we still support our own O/R mapper framework: LLBLGen Pro v3.0 Runtime framework as well, which was updated with some minor features and was upgraded to use the DbProviderFactory system. Please watch the videos of the designer (more to come very soon!) to see some aspects of the new designer in action. The full version comes with Algorithmia in sourcecode as well. Algorithmia is an algorithm library written for .NET 3.5 which powers the heart of the designer with a fine-grained undo/redo command framework, graph classes and much more. I'd like to thank all beta-testers, our support team and others who have helped us with this massive release. :)

    Read the article

  • FileNameColumnName property, Flat File Source Adapter : SSIS Nugget

    - by jamiet
    I saw a question on MSDN’s SSIS forum the other day that went something like this: I’m loading data into a table from a flat file but I want to be able to store the name of that file as well. Is there a way of doing that? I don’t want to come across as disrespecting those who took the time to reply but there were a few answers along the lines of “loop over the files using a For Each, store the file name in a variable yadda yadda yadda” when in fact there is a much much simpler way of accomplishing this; it just happens to be a little hidden away as I shall now explain! The Flat File Source Adapter has a property called FileNameColumnName which for some reason isn’t exposed through the Flat File Source editor; it is however exposed via the Advanced Properties: You’ll see in the screenshot above that I have set FileNameColumnName=“Filename” (it doesn’t matter what name you use, anything except a zero-length string will work). What this will do is create a new column in our dataflow called “Filename” that contains, unsurprisingly, the name of the file from which the row was sourced. All very simple. This is particularly useful if you are extracting data from multiple files using the MultiFlatFile Connection Manager as it allows you to differentiate between data from each of the files as you can see in the following screenshot: So there you have it, the FileNameColumnName property; a little-known secret of SSIS. I hope it proves to be useful to someone out there. @Jamiet

    Read the article

  • Calculating estimated data loss with Always on

    - by blakmk
    Ever wondered how to calculate estimated data loss (time) for AlwaysOn? The AlwaysOn dashboard shows the metric quite nicely, but there does seem to be a lack of documentation about where the numbers come from. Here's a script that calculates the data loss (lag) so you can set up alerts based on your DR SLAs:

        WITH DR_CTE (replica_server_name, database_name, last_commit_time) AS
        (
            SELECT ar.replica_server_name, database_name, rs.last_commit_time
            FROM master.sys.dm_hadr_database_replica_states rs
            INNER JOIN master.sys.availability_replicas ar ON rs.replica_id = ar.replica_id
            INNER JOIN sys.dm_hadr_database_replica_cluster_states dcs
                ON dcs.group_database_id = rs.group_database_id AND rs.replica_id = dcs.replica_id
            WHERE replica_server_name != @@servername
        )
        SELECT ar.replica_server_name,
               dcs.database_name,
               rs.last_commit_time,
               DR_CTE.last_commit_time AS 'DR_commit_time',
               DATEDIFF(ss, DR_CTE.last_commit_time, rs.last_commit_time) AS 'lag_in_seconds'
        FROM master.sys.dm_hadr_database_replica_states rs
        INNER JOIN master.sys.availability_replicas ar ON rs.replica_id = ar.replica_id
        INNER JOIN sys.dm_hadr_database_replica_cluster_states dcs
            ON dcs.group_database_id = rs.group_database_id AND rs.replica_id = dcs.replica_id
        INNER JOIN DR_CTE ON DR_CTE.database_name = dcs.database_name
        WHERE ar.replica_server_name = @@servername
        ORDER BY lag_in_seconds DESC
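
    To turn the script into an alert, one option is to capture its output and fail a SQL Agent job step when the lag breaches your SLA. A minimal sketch, assuming you add INTO #ag_lag before the outer FROM clause above and a hypothetical 300-second RPO:

        -- Fail the job step (and fire its notification) when any replica lags too far behind
        IF EXISTS (SELECT 1 FROM #ag_lag WHERE lag_in_seconds > 300)
            RAISERROR('Availability group replica lag exceeds the 300 second RPO.', 16, 1);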

    Read the article

  • Dynamic Unpivot : SSIS Nugget

    - by jamiet
    A question on the SSIS forum earlier today asked: I need to dynamically unpivot some set of columns in my source file. Every month there is one new column and its set of Values. I want to unpivot it without editing my SSIS packages that is deployed Let’s be clear about what we mean by Unpivot. It is a normalisation technique that basically converts columns into rows. By way of example it converts something like this: AccountCode Jan Feb Mar AC1 100.00 150.00 125.00 AC2 45.00 75.50 90.00 into something like this: AccountCode Month Amount AC1 Jan 100.00 AC1 Feb 150.00 AC1 Mar 125.00 AC2 Jan 45.00 AC2 Feb 75.50 AC2 Mar 90.00 The Unpivot transformation in SSIS is perfectly capable of carrying out the operation defined in this example however in the case outlined in the aforementioned forum thread the problem was a little bit different. I interpreted it to mean that the number of columns could change and in that scenario the Unpivot transformation (and indeed the SSIS dataflow in general) is rendered useless because it expects that the number of columns will not change from what is specified at design-time. There is a workaround however. Assuming all of the columns that CAN exist will appear at the end of the rows, we can (1) import all of the columns in the file as just a single column, (2) use a script component to loop over all the values in that “column” and (3) output each one as a column all of its own. Let’s go over that in a bit more detail.   I’ve prepared a data file that shows some data that we want to unpivot which shows some customers and their mythical shopping lists (it has column names in the first row): We use a Flat File Connection Manager to specify the format of our data file to SSIS: and a Flat File Source Adapter to put it into the dataflow (no need a for a screenshot of that one – its very basic). Notice that the values that we want to unpivot all exist in a column called [Groceries]. Now onto the script component where the real work goes on, although the code is pretty simple: Here I show a screenshot of this executing along with some data viewers. As you can see we have successfully pulled out all of the values into a row all of their own thus accomplishing the Dynamic Unpivot that the forum poster was after. If you want to run the demo for yourself then I have uploaded the demo package and source file up to my SkyDrive: http://cid-550f681dad532637.skydrive.live.com/self.aspx/Public/BlogShare/20100529/Dynamic%20Unpivot.zip Simply extract the two files into a folder, make sure the Connection Manager is pointing to the file, and execute! Hope this is useful. @Jamiet Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!
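
    For comparison, the static form of this normalisation is easy in T-SQL too; the limitation, just like the SSIS Unpivot transformation, is that the column list must be known up front, which is exactly why the script component approach is needed for a changing file. A minimal sketch using the AccountCode example above:

        -- Static UNPIVOT of the Jan/Feb/Mar example; the column list is fixed at design time
        CREATE TABLE #MonthlyAmounts (AccountCode VARCHAR(10), Jan MONEY, Feb MONEY, Mar MONEY);

        INSERT INTO #MonthlyAmounts VALUES ('AC1', 100.00, 150.00, 125.00),
                                           ('AC2',  45.00,  75.50,  90.00);

        SELECT AccountCode, [Month], Amount
        FROM   #MonthlyAmounts
        UNPIVOT (Amount FOR [Month] IN (Jan, Feb, Mar)) AS u;

        DROP TABLE #MonthlyAmounts;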

    Read the article

  • Faceted search with Solr on Windows

    - by Dr.NETjes
    With over 10 million hits a day, funda.nl is probably the largest ASP.NET website which uses Solr on a Windows platform. While all our data (i.e. real estate properties) is stored in SQL Server, we're using Solr 1.4.1 to return the faceted search results as fast as we can.And yes, Solr is very fast. We did do some heavy stress testing on our Solr service, which allowed us to do over 1,000 req/sec on a single 64-bits Solr instance; and that's including converting search-url's to Solr http-queries and deserializing Solr's result-XML back to .NET objects! Let me tell you about faceted search and how to integrate Solr in a .NET/Windows environment. I'll bet it's easier than you think :-) What is faceted search? Faceted search is the clustering of search results into categories, allowing users to drill into search results. By showing the number of hits for each facet category, users can easily see how many results match that category. If you're still a bit confused, this example from CNET explains it all: The SQL solution for faceted search Our ("pre-Solr") solution for faceted search was done by adding a lot of redundant columns to our SQL tables and doing a COUNT(...) for each of those columns:   So if a user was searching for real estate properties in the city 'Amsterdam', our facet-query would be something like: SELECT COUNT(hasGarden), COUNT(has2Bathrooms), COUNT(has3Bathrooms), COUNT(etc...) FROM Houses WHERE city = 'Amsterdam' While this solution worked fine for a couple of years, it wasn't very easy for developers to add new facets. And also, performing COUNT's on all matched rows only performs well if you have a limited amount of rows in a table (i.e. less than a million). Enter Solr "Solr is an open source enterprise search server based on the Lucene Java search library, with XML/HTTP and JSON APIs, hit highlighting, faceted search, caching, replication, and a web administration interface." (quoted from Wikipedia's page on Solr) Solr isn't a database, it's more like a big index. Every time you upload data to Solr, it will analyze the data and create an inverted index from it (like the index-pages of a book). This way Solr can lookup data very quickly. To explain the inner workings of Solr is beyond the scope of this post, but if you want to learn more, please visit the Solr Wiki pages. Getting faceted search results from Solr is very easy; first let me show you how to send a http-query to Solr:    http://localhost:8983/solr/select?q=city:Amsterdam This will return an XML document containing the search results (in this example only three houses in the city of Amsterdam):    <response>     <result name="response" numFound="3" start="0">         <doc>            <long name="id">3203</long>            <str name="city">Amsterdam</str>            <str name="steet">Keizersgracht</str>            <int name="numberOfBathrooms">2</int>        </doc>         <doc>             <long name="id">3205</long>             <str name="city">Amsterdam</str>             <str name="steet">Vondelstraat</str>             <int name="numberOfBathrooms">3</int>          </doc>          <doc>             <long name="id">4293</long>             <str name="city">Amsterdam</str>             <str name="steet">Wibautstraat</str>             <int name="numberOfBathrooms">2</int>          </doc>       </result>   </response> By adding a facet-querypart for the field "numberOfBathrooms", Solr will return the facets for this particular field. 
We will see that there's one house in Amsterdam with three bathrooms and two houses with two bathrooms.    http://localhost:8983/solr/select?q=city:Amsterdam&facet=true&facet.field=numberOfBathrooms The complete XML response from Solr now looks like:    <response>      <result name="response" numFound="3" start="0">         <doc>            <long name="id">3203</long>            <str name="city">Amsterdam</str>            <str name="steet">Keizersgracht</str>            <int name="numberOfBathrooms">2</int>         </doc>         <doc>            <long name="id">3205</long>            <str name="city">Amsterdam</str>            <str name="steet">Vondelstraat</str>            <int name="numberOfBathrooms">3</int>         </doc>         <doc>            <long name="id">4293</long>            <str name="city">Amsterdam</str>            <str name="steet">Wibautstraat</str>            <int name="numberOfBathrooms">2</int>         </doc>      </result>      <lst name="facet_fields">         <lst name="numberOfBathrooms">            <int name="2">2</int>            <int name="3">1</int>         </lst>      </lst>   </response> Trying Solr for yourself To run Solr on your local machine and experiment with it, you should read the Solr tutorial. This tutorial really only takes 1 hour, in which you install Solr, upload sample data and get some query results. And yes, it works on Windows without a problem. Note that in the Solr tutorial, you're using Jetty as a Java Servlet Container (that's why you must start it using "java -jar start.jar"). In our environment we prefer to use Apache Tomcat to host Solr, which installs like a Windows service and works more like .NET developers expect. See the SolrTomcat page.Some best practices for running Solr on Windows: Use the 64-bits version of Tomcat. In our tests, this doubled the req/sec we were able to handle!Use a .NET XmlReader to convert Solr's XML output-stream to .NET objects. Don't use XPath; it won't scale well.Use filter queries ("fq" parameter) instead of the normal "q" parameter where possible. Filter queries are cached by Solr and will speed up Solr's response time (see FilterQueryGuidance)In my next post I’ll talk about how to keep Solr's indexed data in sync with the data in your SQL tables. Timestamps / rowversions will help you out here!
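
    For reference, the facet count Solr returns for numberOfBathrooms corresponds to a simple GROUP BY in T-SQL; the table and column names below are assumptions based on the examples in this post. The difference is that SQL recomputes the counts over every matching row on each request, which is the scaling problem Solr's inverted index and caching address:

        -- Facet counts for one field, equivalent to facet.field=numberOfBathrooms
        SELECT numberOfBathrooms, COUNT(*) AS facet_count
        FROM   Houses
        WHERE  city = 'Amsterdam'
        GROUP BY numberOfBathrooms
        ORDER BY numberOfBathrooms;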

    Read the article

  • Kill your temp tables using keyboard shortcuts : SSMS

    - by jamiet
    Here’s a nifty little SSMS trick that my colleague Tom Hunter educated me on the other day and I thought it was worth sharing. If you’re a keyboard shortcut junkie then you’ll love it. How often when working with code in SSMS that contains temp tables do you see the following message: Msg 2714, Level 16, State 6, Line 78 There is already an object named '#table' in the database. Quite often I would imagine, it happens to me all the time! Usually I write a bit of code at the top of the query window that goes and drops the table if it exists but there’s a much easier way of dealing with it. Remember that temp tables disappear as soon as your sessions ends hence wouldn’t it be nice if there were a quick way of recycling (i.e. stopping and restarting) your session? Well turns out there is and all it takes is a sequence of 4 keystrokes: Bring up the context menu using that mythically-named button that usually sits 3 to the right of the space bar ‘C’ for “Connection” ‘H’ for “Change Connection…” ‘Enter’ to select the same connection you had open last time (screenshots below) Once you’ve done it a few times you’ll probably have the whole sequence down to less than a second. Such a simple little trick, I’m annoyed with myself for it not occurring to me before! The only caveat is that you’ll need a “USE <database>” directive at the top of your query window but I don’t think that’s much of a bind! That is all other than to say if you like little SSMS titbits like this then Lee Everest’s blog is a good one to keep an eye on! @jamiet Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!
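
    For reference, the "bit of code at the top of the query window" mentioned above usually looks something like this (the temp table name is hypothetical):

        -- Drop the temp table if it is still hanging around in the current session
        IF OBJECT_ID('tempdb..#table') IS NOT NULL
            DROP TABLE #table;

    The connection-recycle shortcut simply makes even that unnecessary, because closing the session drops its temp tables for you.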

    Read the article

  • How to list column headers of a SQL Server table using sp_help perhaps?

    - by Hamish Grubijan
    Hi, I have a few tables with 70-80 columns in them. I would like to populate them with somewhat random data, unless I will not be able to do so due to key violation, etc. The first step would be simply to get the list of all headers. There seem to be two ways: A) Run select * from table_of_interest; in MSFT SQL Server Management Studio 2008. Now, right-click the result and click "Copy With headers". However, I get zero rows back, and when I try to copy nothing + headers, I get: TITLE: Microsoft SQL Server Management Studio ------------------------------ Value cannot be null. Parameter name: data (System.Windows.Forms) ------------------------------ BUTTONS: OK ------------------------------ This looks like a bug ... anyhow ... there is another way. B) I can run sp_help table_of_interest;. However, I end up getting too much back. I get 7 different tables back but I am only interested in the second one. The columns of the second table are: Column_name | Type | Computed | Length | Prec | Scale | Nullable | TrimTrailingBlanks | FixedLenNullInSource | Collation I might be interested in just a Column_name and Type, but maybe other columns. So ... since sp_help probably runs a bunch of queries ... how do I get under the hood? How can I run the second query AND filter down the number of columns that I am interested in? Many Thanks!
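
    A third option is to query the system catalog directly, which is effectively what sp_help does under the hood, and keep only the columns you care about. A minimal sketch; replace table_of_interest with your table name:

        -- Just the column names and types, nothing else
        SELECT c.name AS column_name,
               t.name AS data_type,
               c.max_length,
               c.is_nullable
        FROM   sys.columns c
               INNER JOIN sys.types t ON c.user_type_id = t.user_type_id
        WHERE  c.object_id = OBJECT_ID('dbo.table_of_interest')
        ORDER BY c.column_id;

        -- Or the portable flavour
        SELECT COLUMN_NAME, DATA_TYPE
        FROM   INFORMATION_SCHEMA.COLUMNS
        WHERE  TABLE_NAME = 'table_of_interest'
        ORDER BY ORDINAL_POSITION;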

    Read the article

  • How to: Check which table is the biggest, in SQL Server

    - by AngelEyes
    The company I work with had it's DB double its size lately, so I needed to find out which tables were the biggest. I found this on the web, and decided it's worth remembering! Taken from http://www.sqlteam.com/article/finding-the-biggest-tables-in-a-database, the code is from http://www.sqlteam.com/downloads/BigTables.sql   /************************************************************************************** * *  BigTables.sql *  Bill Graziano (SQLTeam.com) *  [email protected] *  v1.1 * **************************************************************************************/ DECLARE @id INT DECLARE @type CHARACTER(2) DECLARE @pages INT DECLARE @dbname SYSNAME DECLARE @dbsize DEC(15, 0) DECLARE @bytesperpage DEC(15, 0) DECLARE @pagesperMB DEC(15, 0) CREATE TABLE #spt_space   (      objid    INT NULL,      ROWS     INT NULL,      reserved DEC(15) NULL,      data     DEC(15) NULL,      indexp   DEC(15) NULL,      unused   DEC(15) NULL   ) SET nocount ON -- Create a cursor to loop through the user tables DECLARE c_tables CURSOR FOR   SELECT id   FROM   sysobjects   WHERE  xtype = 'U' OPEN c_tables FETCH NEXT FROM c_tables INTO @id WHILE @@FETCH_STATUS = 0   BEGIN       /* Code from sp_spaceused */       INSERT INTO #spt_space                   (objid,                    reserved)       SELECT objid = @id,              SUM(reserved)       FROM   sysindexes       WHERE  indid IN ( 0, 1, 255 )              AND id = @id       SELECT @pages = SUM(dpages)       FROM   sysindexes       WHERE  indid < 2              AND id = @id       SELECT @pages = @pages + Isnull(SUM(used), 0)       FROM   sysindexes       WHERE  indid = 255              AND id = @id       UPDATE #spt_space       SET    data = @pages       WHERE  objid = @id       /* index: sum(used) where indid in (0, 1, 255) - data */       UPDATE #spt_space       SET    indexp = (SELECT SUM(used)                        FROM   sysindexes                        WHERE  indid IN ( 0, 1, 255 )                               AND id = @id) - data       WHERE  objid = @id       /* unused: sum(reserved) - sum(used) where indid in (0, 1, 255) */       UPDATE #spt_space       SET    unused = reserved - (SELECT SUM(used)                                   FROM   sysindexes                                   WHERE  indid IN ( 0, 1, 255 )                                          AND id = @id)       WHERE  objid = @id       UPDATE #spt_space       SET    ROWS = i.ROWS       FROM   sysindexes i       WHERE  i.indid < 2              AND i.id = @id              AND objid = @id       FETCH NEXT FROM c_tables INTO @id   END SELECT TOP 25 table_name = (SELECT LEFT(name, 25)                             FROM   sysobjects                             WHERE  id = objid),               ROWS = CONVERT(CHAR(11), ROWS),               reserved_kb = Ltrim(Str(reserved * d.low / 1024., 15, 0) + ' ' + 'KB'),               data_kb = Ltrim(Str(data * d.low / 1024., 15, 0) + ' ' + 'KB'),               index_size_kb = Ltrim(Str(indexp * d.low / 1024., 15, 0) + ' ' + 'KB'),               unused_kb = Ltrim(Str(unused * d.low / 1024., 15, 0) + ' ' + 'KB') FROM   #spt_space,        MASTER.dbo.spt_values d WHERE  d.NUMBER = 1        AND d.TYPE = 'E' ORDER  BY reserved DESC DROP TABLE #spt_space CLOSE c_tables DEALLOCATE c_tables
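
    For reference, on more recent versions of SQL Server the same "top 25 by reserved space" list can be pulled from sys.dm_db_partition_stats without the cursor. A minimal sketch (counts are in 8 KB pages, hence the * 8 to get KB):

        SELECT TOP (25)
               OBJECT_SCHEMA_NAME(ps.object_id) + '.' + OBJECT_NAME(ps.object_id) AS table_name,
               SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END)  AS [rows],
               SUM(ps.reserved_page_count) * 8 AS reserved_kb,
               SUM(ps.used_page_count)     * 8 AS used_kb
        FROM   sys.dm_db_partition_stats AS ps
               INNER JOIN sys.objects AS o ON o.object_id = ps.object_id
        WHERE  o.is_ms_shipped = 0
        GROUP BY ps.object_id
        ORDER BY reserved_kb DESC;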

    Read the article

  • It’s nice to be important, but it’s more important to be nice

    - by BuckWoody
    I’ve been a little “preachy” lately, telling you that you should let people finish their sentences, and always check a problem out before you tell a user that their issue is “impossible”. Well, I’ll round that out with one more tip today. Keep in mind that all of these things are actions I’ve been guilty of, hopefully in the past. I’m kind of a “work in progress”. And yes, I know these tips are coming from someone who picks on people in presentations, but that is of course done in fun, and (hopefully) with the audience’s knowledge.   (No, this isn’t aimed at any one person or event in particular – I just see it happen a lot)   I’ve seen, unfortunately over and over, someone in authority react badly to someone who is incorrect, or at least perceived to be incorrect. This might manifest itself in a comment, post, question or whatever, but the point is that I’ve seen really intelligent people literally attack someone they view as getting something wrong. Don’t misunderstand me; if someone posts that you should always drop a production database in the middle of the day I think you should certainly speak up and mention that this might be a bad idea!  No, I’m talking about generalizations or even incorrect statements done in good faith. Let me explain with an example.   Suppose someone makes the statement: “If you don’t have enough space on your system, you can just use a DBCC command to shrink the database”. Let’s take two responses to this statement.   Response One: “That’s insane. Everyone knows that shrinking a database is a stupid idea, you’re just going to fragment your indexes all over the place.” Response Two: “That’s an interesting take – in my experience and from what I’ve read here (someurl.com) I think this might not be a universal best practice.”   Of course, both responses let the person making the statement and those reading it know that you don’t agree, and that it’s probably wrong. But the person you responded to and the general audience hearing you (or reading your response) might form two different opinions of you.   The first response says to me “this person really needs to be right, and takes arguments personally. They aren’t thinking of the other person at all, or the folks reading or hearing the exchange. They turned an incorrect technical statement into a personal attack. They haven’t left the other party any room to ‘save face’, and they have potentially turned what could be a positive learning experience for everyone into a negative. Also, they sound more than just a little arrogant.”   The second response says to me “this person has left room for everyone to save face, has presented evidence to the contrary and is thinking about moving the ball forward and getting it right rather than attacking someone for getting it wrong.” It’s the idea of questioning a statement rather than attacking a person.   Perhaps you have a different take. Maybe you think the “direct” approach is best – and maybe that’s worked for you. Something to consider is what you’ve really accomplished while using that first method. Sure, the info you provide is correct, and perhaps someone out there won’t shrink a database because of your response – but perhaps you’ve turned a lot more people off, and now they won’t listen to your other valuable information. You’ll be an expert, but another one of the nameless, arrogant jerks in technology. And I don’t think anyone likes to be thought of that way.   OK, I’ll get down off of the high-horse now. 
And I’ll keep the title of this entry (said to me by my grandmother when I was a little kid) in mind when I dismount.

    Read the article

  • How I use schemas.

    - by Alexander Kuznetsov
    I use schemas to simplify granting permissions. For tables and views, I have three schemas: Data, the actual data my customers need. Can only be modified via sprocs. Staging, only visible to data loaders and devs. Full privileges on INSERT/UPDATE/DELETE for those who see it. Config, the configuration data used in loads, only visible to data loaders and devs. Can only be modified via sprocs. For sprocs/UDFs I have the following schemas: Readers Writers ETL ConfigReaders ConfigWriters Also I have dbo...(read more)
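
    As a rough sketch of how this kind of schema-per-audience permission scheme can be wired up (not taken from the post above – the role names and the exact grants are illustrative assumptions), the T-SQL might look something like this:

    -- Hypothetical sketch: schemas as permission containers.
    -- Role names (Customers, DataLoaders) and the specific grants are assumptions.
    CREATE SCHEMA Data;      -- customer-facing tables/views, modified only via sprocs
    GO
    CREATE SCHEMA Staging;   -- working tables for loads
    GO
    CREATE SCHEMA Readers;   -- read-only sprocs/UDFs
    GO
    CREATE SCHEMA Writers;   -- sprocs that modify Data
    GO
    CREATE SCHEMA ETL;       -- load sprocs
    GO
    CREATE ROLE Customers;
    CREATE ROLE DataLoaders;
    GO
    -- Customers read the data and change it only through procedures.
    GRANT SELECT  ON SCHEMA::Data    TO Customers;
    GRANT EXECUTE ON SCHEMA::Readers TO Customers;
    GRANT EXECUTE ON SCHEMA::Writers TO Customers;
    -- Data loaders get full DML on Staging plus the ETL procedures.
    GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::Staging TO DataLoaders;
    GRANT EXECUTE ON SCHEMA::ETL TO DataLoaders;

    Granting at the schema level means a new sproc dropped into, say, Writers is immediately available to the right role without a fresh GRANT – which is largely the point of the pattern.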

    Read the article

  • FileNameColumnName property, Flat File Source Adapter : SSIS Nugget

    - by jamiet
    I saw a question on MSDN’s SSIS forum the other day that went something like this: I’m loading data into a table from a flat file but I want to be able to store the name of that file as well. Is there a way of doing that? I don’t want to come across as disrespecting those who took the time to reply but there were a few answers along the lines of “loop over the files using a For Each, store the file name in a variable yadda yadda yadda” when in fact there is a much much simpler way of accomplishing this; it just happens to be a little hidden away as I shall now explain! The Flat File Source Adapter has a property called FileNameColumnName which for some reason isn’t exposed through the Flat File Source editor; it is however exposed via the Advanced Properties: You’ll see in the screenshot above that I have set FileNameColumnName=“Filename” (it doesn’t matter what name you use, anything other than a zero-length string will work). What this will do is create a new column in our dataflow called “Filename” that contains, unsurprisingly, the name of the file from which the row was sourced. All very simple. This is particularly useful if you are extracting data from multiple files using the MultiFlatFile Connection Manager as it allows you to differentiate between data from each of the files as you can see in the following screenshot: So there you have it, the FileNameColumnName property; a little known secret of SSIS. I hope it proves to be useful to someone out there. @Jamiet
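
    Once the load has run, that extra column can be queried like any other. As a quick, hypothetical illustration (the destination table dbo.ImportedOrders is an assumption; “Filename” matches the property value used above):

    -- Hypothetical destination table; "Filename" is the column emitted because
    -- FileNameColumnName was set on the Flat File Source adapter.
    SELECT Filename,
           COUNT(*) AS RowsLoaded
    FROM   dbo.ImportedOrders
    GROUP  BY Filename
    ORDER  BY Filename;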

    Read the article

  • Windows and SQL Azure Best Practices: Affinity Groups

    - by BuckWoody
    When you create a Windows Azure application, you’ll pick a subscription to put it under. This is a billing container - underneath that, you’ll deploy a Hosted Service. That holds the Web and Worker Roles that you’ll deploy for your applications. Alongside that, you use the Storage Account to create storage for the application. (In some cases, you might choose to use only storage or Roles - the info here applies anyway) As you are setting up your environment, you’re asked to pick a “region” where your application will run. If you choose a Region, you’ll be asked where to put the Roles. You’re given choices like Asia, North America and so on. This is where the hardware that physically runs your code lives. We have lots of fault domains, power considerations and so on to keep that set of datacenters running, but keep in mind that this is where the application lives. You also get this selection for Storage Accounts. When you make new storage, it’s a best practice to put it where your computing is. This makes the shortest path from the code to the data, and then back out to the user. One of the selections for the location is “Anywhere U.S.”. This selection might be interpreted to mean that we will bias towards keeping the data and the code together, but that may not be the case. There is a specific abstraction we created for just that purpose: Affinity Groups. An Affinity Group is simply a name you can use to tie together resources. You can do this in two places - when you’re creating the Hosted Service (shown above) and on its own tree item on the left, called “Affinity Groups”. When you select either of those actions, you’re presented with a dialog box that allows you to specify a name, and then the Region that name ties the resources to. Now you can select that Affinity Group just as if it were a Region, and your code and data will stay together. That helps with keeping the performance high. Official Documentation: http://msdn.microsoft.com/en-us/library/windowsazure/hh531560.aspx

    Read the article

< Previous Page | 392 393 394 395 396 397 398 399 400 401 402 403  | Next Page >