Search Results

Search found 679 results on 28 pages for 'consistency dbcc'.

Page 6 of 28

  • Creating country specific twitter/facebook accounts

    - by user359650
    I see many companies with an international presence trying to localize their social media presence by creating country- or language-specific accounts. However, some seem to have done so without following a consistent pattern; one example is the World Wildlife Fund when you look at their Twitter accounts: World_Wildlife : verified account with 200K followers; WWF : main account with 800K followers; wwf_uk : lower case with an underscore between WWF and the country indicator; WWFCanada : upper case with the country indicator attached to WWF ... I am planning to build a website which I hope will grow global, and I would like to avoid this sort of inconsistency. Also, I compared what Twitter and Facebook allow in their usernames and found that they don't allow the same characters (for instance, the former doesn't allow . whereas the latter does), making it difficult to ensure consistency across social networks. Hence my questions: Are there known naming schemes for creating localized Twitter and Facebook accounts while maintaining a certain consistency between them (best effort)? Is there any research out there showing whether some schemes are better than others in terms of readability and/or SEO?

    Read the article

  • Recommendations for stable, reliable flash drives

    - by Josh Kelley
    We're looking to purchase some flash drives for use in some embedded devices. Most of our requirements aren't too different from the generic "good, fast" flash drive: reliability is very important, speed is good, and, so that the drive will fit, the case shouldn't be too large (so no OCZ Throttles). Consistency is also a major priority; we'd like to be able to buy more or less the same product a year or two from now without having to worry about the manufacturer swapping drive components for less reliable or slower parts. (We've already been burned by our previous manufacturer doing this.) Any recommendations, especially regarding consistency? I can read Ars Technica to get an overview of current models, but which models are consistently good?

    Read the article

  • Pushing complete notifications to client

    - by ton.yeung
    So with CQRS, we accept that consistency is eventual. However, that doesn't mean that the user has to continually poll, or that eventual means an update has to take more than 500ms to sync. For the sake of UX, we want to at least give the illusion of consistency, or if that's not possible, be as transparent as possible. With that in mind, I have this setup: an AngularJS web client consumes WebAPI RESTful services, which send commands to NServiceBus command handlers, which save to NEventStore, which dispatches events to NServiceBus event handlers, which send a message to a SignalR hub, which sends notifications to the AngularJS web client. So with that setup, theoretically: someone initiates a request; the server validates the request and sends out the necessary commands; in the meantime the client gets a 200 response and updates the view ("working on it"); some time later it gets a message: "done, here's the updated data". Here's where things get interesting: each command could spawn multiple events. Not sure if this is a serious no-no or not, but that's how it is currently. For example, a new customer spawns CustomerIDCreated, CustomerNameUpdated, CustomerAddressUpdated, etc... Which event handler needs to notify the client? Should all of them, in a progress-bar-style update?

    Read the article

  • When creating a GUI wizard, should all pages/tabs be of the same size? [closed]

    - by Job
    I understand that some libraries would force me to, but my question is general. If I have a set of buttons at the bottom: Back, Next, Cancel?, (other?), then should their location ever change? If the answer is no, then what do I do about pages with little content? Do I stretch things? Place them alone in the upper left corner? According to Steve Krug, it does not make sense to add anything to a GUI that does not need to be there. I understand that there are different approaches to wizards: some have tabs, others do not. Some tabs are lined up horizontally at the top; others vertically on the left. Some do not show pages/tabs at all, and are simply sequences of dialogs. This is probably a must when the wizard is "non-linear", e.g. when some earlier choices can result in branching. Either way the problem is the same: sacrifice the consistency of the "big picture" (outline of the page/tab plus location of buttons), or the consistency of details (some tabs might be somewhat packed; others have very little content). A third choice, I suppose, is putting extra effort into organizing the content so that it is more or less evenly distributed from page to page. However, this can be difficult to do (say, when the very first tab contains only a choice of three things and then branches off from there; there are probably other examples), and it is hard to maintain this balance if any of the content changes later. Can you recommend a good approach? A link to a relevant blog post or a chapter of a book is also welcome. Let me know if you have questions.

    Read the article

  • Should developers do their own software releases (if there is a prod support team in place)?

    - by leora
    I know there are always going to be differences depending on the particular size, staff, etc., but I wanted to get feedback in general: in an environment where you have a production support team doing first-line support and release management, is it better to simply have developers manage their own releases instead? In this case it's internal software at an insurance company, but I think the question is valid at any company of any size. Currently our production team does releases, but there is an argument that this is inefficient and that if you allowed developers to do it, they would focus more on making the process simple and efficient and avoid basically handing scripts off to another team to run. The counter-argument is that without a check and balance, you could get a software team (or an individual) that does a very hacky job of getting their software out there (making on-the-fly changes, not documenting the process, etc.), and that forcing the prod support team to do the actual release enforces consistency and proper checks and balances. I know this is not a black-or-white issue, but I wanted to see what folks think: how do you keep the discipline and consistency without the feeling that an inefficient process is in place?

    Read the article

  • What does SQL Server trace flag 253 do?

    - by kamens
    In another question I was trying to research how to control SQL Server's query plan caches: http://stackoverflow.com/questions/2593749/is-there-an-equivalent-of-optionrecompile-or-with-recompile-for-an-entire-c ...and I found trace flag 253 via this article: http://www.sqlservercentral.com/Forums/Topic837613-146-1.aspx The article is correct: if I run DBCC TRACEON(253) and then run a number of queries, I can manually check the query plan cache and see that the plans have not been inserted. If I run DBCC TRACEOFF(253), query plans are cached as normal. So my question is: what else does this flag do? Does anybody know the official story?
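
    A minimal sketch of how the behaviour described above could be reproduced on a disposable test instance; trace flag 253 is undocumented, so treat this as exploratory only, and note that the plan-cache query and its filter text are just one way to inspect the cache, not part of the original article:

        -- Hedged sketch: reproduce the plan-caching behaviour on a test instance only.
        DBCC TRACEON(253);        -- enable for the current session (add , -1 for server-wide)
        DBCC TRACESTATUS(253);    -- confirm the flag is active

        -- Run a few ad hoc queries here, then look for their plans in the cache:
        SELECT cp.usecounts, cp.objtype, st.text
        FROM sys.dm_exec_cached_plans AS cp
        CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
        WHERE st.text LIKE '%my test query%';   -- placeholder filter

        DBCC TRACEOFF(253);       -- restore normal plan caching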

    Read the article

  • Why a better isolation level means better performance in SQL Server

    - by Oleg Zhylin
    When measuring performance on my query I came across a dependency between isolation level and elapsed time that was surprising to me: READUNCOMMITTED - 409024; READCOMMITTED - 368021; REPEATABLEREAD - 358019; SERIALIZABLE - 348019. The left column is the table hint, and the right column is elapsed time in microseconds (sys.dm_exec_query_stats.total_elapsed_time). Why does a better isolation level give better performance? This is a development machine and no concurrency whatsoever is happening. I would expect READUNCOMMITTED to be the fastest due to less locking overhead. Update: I did measure this with DBCC DROPCLEANBUFFERS and DBCC FREEPROCCACHE issued first, and Profiler confirms there are no cache hits happening. Update 2: The query in question is an OLAP one and we need to run it as fast as possible. Closing the production server off from the outside world to get the computation done is not out of the question if it gives performance benefits.
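
    A minimal sketch of the kind of measurement harness described above, assuming a hypothetical table dbo.FactSales; the cache-clearing commands are server-wide, so run this on a dev box only:

        -- Clear data and plan caches so the run starts cold (repeat before each hint).
        DBCC DROPCLEANBUFFERS;
        DBCC FREEPROCCACHE;

        -- Run the query once per table hint; dbo.FactSales is a placeholder.
        SELECT COUNT(*) FROM dbo.FactSales WITH (READUNCOMMITTED);
        -- ...repeat with (READCOMMITTED), (REPEATABLEREAD), (SERIALIZABLE)

        -- Compare elapsed times (microseconds) from the cached query statistics.
        SELECT TOP (10) qs.total_elapsed_time, qs.total_logical_reads, qs.execution_count, st.text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.last_execution_time DESC;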

    Read the article

  • The Road to Professional Database Development: Database Normalization

    Not only is the process of normalization valuable for increasing data quality and simplifying the process of modifying data, but it actually makes the database perform much faster. To prove the point, Peter Larsson takes a large unnormalised database and subjects it to successive stages of normalisation.

    Read the article

  • Geek City: Clearing Plans for a Single Database

    - by Kalen Delaney
    I know Friday afternoon isn't the best time for blogging, as everyone is going home now, and by Monday morning this post will be old news. But I'm not shutting down just yet, and something came up this week that I just realized not everybody knew about, so I decided to blog it. Many (or most?) of you are aware that you can clear all cached plans using DBCC FREEPROCCACHE. In addition, there are certain configuration options for which changing their values will cause all plans in cache to be removed....(read more)
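
    For reference, clearing the whole cache is the documented route; the per-database variant below is an undocumented DBCC command that the post presumably goes on to discuss, so this is a hedged sketch only, with a placeholder database name:

        -- Documented: clear every cached plan on the instance.
        DBCC FREEPROCCACHE;

        -- Undocumented (and therefore unsupported) per-database alternative.
        DECLARE @dbid int = DB_ID(N'MyDatabase');   -- placeholder database name
        DBCC FLUSHPROCINDB(@dbid);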

    Read the article

  • SQL Server - Determine which query is taking a long time to complete

    - by Neil Smith
    A cool little trick to determine which SQL query is taking a long time to execute: first, while the offending query is running, from another machine run EXEC sp_who2. Locate the SPID responsible via the Login, DBName and ProgramName columns, then run DBCC INPUTBUFFER (<SPID>). The offending query will be in the EventInfo column.  This is a great little time saver for me; before I found out about this I used to split my concatenated query script into multiple SQL files until I located the problem query.
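
    The two steps above, run from a second connection while the slow query is still executing (the SPID value is just a placeholder):

        -- Step 1: list current sessions; find the culprit via Login, DBName and ProgramName.
        EXEC sp_who2;

        -- Step 2: show the last statement sent by that session (in the EventInfo column).
        DBCC INPUTBUFFER(57);   -- replace 57 with the SPID found above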

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
    This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject. Morning Coffee. When I was a DBA, the first thing I did when I sat down at my desk at work was to check that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one I was once bitten by the fact that database mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so, to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place like Microsoft System Center Operations Manager (SCOM) or similar 3rd party products that would track all these things for you. But at the time, we had no recourse but to write our own PowerShell scripts to do it. Now it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here. "But, we have a cluster...we don't need backups" Sadly I've heard this line more often than I would have liked to. You need to understand that a cluster is built on shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the Operating System level, and also from an outage of any SQL-related service or dependent device. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise. Backup, fine. How often do I take a backup? The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called the Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called the Recovery Time Objective.
    Again, if you go ask your customer how long an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced, and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes. A backup is nothing more than an untested restore. Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - which, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good, and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server, though, be sure to run DBCC CHECKDB WITH PHYSICAL_ONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable trace flags 2562 and/or 2549, which will speed up the PHYSICAL_ONLY checks further - you can read more about this enhancement here. Back to the "How Often" question for a second. If you have the disk, and the network latency, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner rather than later, lest you risk running out of space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time, knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs. Where to back up to? Network share? Locally? SAN volume? This is another topic where everybody has a favorite choice. So, I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files over the network (slow) or pull drives out of a dead server (been there, done that, it's also slow!). The key is to have a copy of those backup files made quickly, and, if at all possible, to a remote target in a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.
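
    As a rough illustration of the split described above (database and path names are placeholders, not taken from the original post):

        -- On the restore/offload server: full consistency check against the restored copy.
        DBCC CHECKDB (N'SalesDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;

        -- On the production server: the lighter physical-only pass, with the trace flags
        -- mentioned above (SQL Server 2008 R2 SP1 CU4+) to speed it up.
        DBCC TRACEON (2562, 2549, -1);
        DBCC CHECKDB (N'SalesDB') WITH PHYSICAL_ONLY, NO_INFOMSGS;

        -- Frequent log backups (e.g. every 5 minutes via an Agent job) to cap data loss.
        BACKUP LOG SalesDB TO DISK = N'X:\SQLBackups\SalesDB_log.trn';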

    Read the article

  • SQLServerCentral.com Users Survey 2012

    We want to make sure we're covering the things that are relevant to you, so we're asking for some feedback on what you use on SSC, where we need to improve, and what you'd like to see. It should only take a few minutes, and three randomly-selected people will win a $100 Amazon gift card for their efforts.

    Read the article

  • Implementing Foreach Looping Logic in SSIS

    With SSIS, it is possible to implement looping logic in SSIS's control flow in order to define a repeating workflow in a package for each member of a collection of objects. Rob Sheldon explains how to use this valuable feature of SSIS.

    Read the article

  • SQL in the City - Chicago 2012

    A free day of training in Chicago on Oct 5, 2012. Join Grant Fritchey, Steve Jones and more to discuss, debate, ask questions, and learn about how to better run your organization's SQL Servers.

    Read the article

  • SSIS Basics: Introducing Variables

    In the third of her SSIS Basics articles, Annette Allen shows you how to use Variables in your SSIS Packages, and explains the functions of the system-defined variables.

    Read the article

  • Hierarchies in SQL, Part II, the Sequel

    In a followup to his first article on Hierarchies, Gus Gwynn takes a look at the performance of a few different methods of querying a hierarchy. Learn how the HierarchyID stacks up.

    Read the article

  • SQL in the City - San Francisco 2012

    The city by the bay welcomes Steve Jones, Grant Fritchey and more for a day of debate, discussion and learning about SQL Server. It's free. Just register and join us.

    Read the article

  • SQL Server 2012 Integration Services - Package and Project Parameters

    In SQL Server 2012, Microsoft introduced SQL Server Data Tools to accommodate the dynamic nature of SSIS constructs in the form of package and project parameters. This approach lets you combine multi-package projects into a single unit, eliminating the possibility of breaking dependencies between parent and child packages during subsequent deployments.

    Read the article

  • SQL Relay 2012

    This May brings SQL Server experts to your locality: full-day seminars in Edinburgh, Manchester, Birmingham, Bristol and London. Overview sessions from Microsoft in the morning, deep-dive sessions in the afternoon from SQL Server MVPs, and evening community events.

    Read the article

  • SQL Server 2012 Integration Services - Package and Project Configurations

    Marcin Policht examines SSIS 2012 package and project configurations, which offer different ways of modifying values of variables and parameters without having to directly edit content of the packages and projects of which they are a part.

    Read the article

  • Reliable Storage Systems for SQL Server

    By validating the IO path before commissioning the production database system, and performing ongoing validation through page checksums and DBCC checks, you can hopefully avoid data corruption altogether, or at least nip it in the bud. If corruption occurs, then you have to take the right decisions fast to deal with it. Rod Colledge explains how a pessimistic mindset can be an advantage.
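
    A minimal sketch of the ongoing-validation side mentioned above (the database name is a placeholder):

        -- Have SQL Server checksum every page it writes, so corruption is caught on read.
        ALTER DATABASE SalesDB SET PAGE_VERIFY CHECKSUM;

        -- Scheduled consistency check, typically run from an Agent job.
        DBCC CHECKDB (N'SalesDB') WITH NO_INFOMSGS;

        -- See whether any damaged pages have already been recorded.
        SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
        FROM msdb.dbo.suspect_pages;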

    Read the article

  • Database Continuous Integration 101

    We talk a lot about continuous integration here on the Atlassian Dev Tools blog, and many readers are bona fide CI gurus. Now that you are integrating your application code, test code, config files and deploy scripts, are you ready to take it to the next level? An increasing number of engineering shops are starting to bring the continuous integration discipline into their database development.

    Read the article

  • Ten Things I Wish I'd Known When I Started Using tSQLt and SQL Test

    The tSQLt framework is a great way of writing unit tests in the same language as the one being tested, but there are some 'Gotchas' that can catch you out. Dave Green lists a few tips he wished he'd read beforehand.

    Read the article
