Search Results

Search found 10442 results on 418 pages for 'my blog'.


  • Powershell, SMO and Database Files

    - by dbaduck
    In response to some questions about renaming a physical file for a database, I have two versions of PowerShell scripts that do this for you, including taking the database offline and then online so that the physical change matches the metadata. First, there is an article about this at http://msdn.microsoft.com/en-us/library/ms345483.aspx. This explains that you start by setting the database offline, then alter the database to modify the filename, and then set it back online. This particular article does...(read more)
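    As a rough T-SQL sketch of the offline/rename/online sequence those scripts automate (the database, logical file and path names here are hypothetical, not taken from the original post):

        -- 1. Take the database offline so the file can be moved/renamed at the OS level
        ALTER DATABASE MyDb SET OFFLINE WITH ROLLBACK IMMEDIATE;

        -- 2. Point the metadata at the new physical file name
        ALTER DATABASE MyDb
            MODIFY FILE (NAME = MyDb_Data, FILENAME = N'D:\Data\MyDb_renamed.mdf');

        -- 3. Rename/move the physical file in the file system, then bring the database back
        ALTER DATABASE MyDb SET ONLINE;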

    Read the article

  • Slow in the Application but Fast in SQL Server Management Studio - from Erland

    - by Greg Low
    Our MVP buddy Erland Sommarskog doesn't post articles that often but when he does, you should read them. His latest post is here: http://www.sommarskog.se/query-plan-mysteries.html It talks about why a query might be slow when sent from an application but fast when you execute it in SSMS. But it covers way more than that. There is a great deal of good info on how queries are executed and query plans generated. Highly recommended!...(read more)

    Read the article

  • Real tortoises keep it slow and steady. How about the backups?

    - by Maria Zakourdaev
    … Four tortoises were playing in the backyard when they decided they needed hibiscus flower snacks. They pooled their money and sent the smallest tortoise out to fetch the snacks. Two days passed and there was no sign of the tortoise. "You know, she is taking a lot of time," said one of the tortoises. A little voice from just outside the fence said, "If you are going to talk that way about me, I won't go."

    Is it too much to ask of a quite expensive 3rd party backup tool that it be way faster than the SQL Server native backup? Or at least that it save a respectable amount of storage by producing a really smaller backup file? By saying "really smaller", I mean at least a file half the size.

    After Googling around to see what other "sql people" are using for database backups, I found that most are using one of three tools, the main players in the SQL backup area: LiteSpeed by Quest, SQL Backup by Red Gate, and SQL Safe by Idera. The feedback about those tools is truly emotional and happy. However, while reading the forums and blogs I wondered whether many people are simply accustomed to using these tools since SQL 2000 and 2005. That is easy to understand: a 300GB database backup, for instance, using a regular SQL 2005 backup statement would have run for about 3 hours and produced a ~150GB file (depending on the content, of course). Then you take a 3rd party tool which performs the same backup in 30 minutes, resulting in a 30GB file, and it leaves you speechless; you run to management to persuade them to buy it because it is definitely worth the price. In addition to the increased speed and disk space savings, you also get backup file encryption and virtual restore - features that are still missing from SQL Server. But if you, like me, don't need these additional features and only want a tool that performs a full backup MUCH faster AND produces a far smaller backup file (like the gain you observed back in the SQL 2005 days), you will be quite disappointed. The SQL Server backup compression feature has totally changed the market picture.

    Medium size database. Take a look at the table below and check how my SQL Server 2008 R2 compares to the other tools when backing up a 300GB database. It appears that, as far as backup speed goes, SQL 2008 R2 compresses and performs the backup in similar overall times to all three other tools; the 3rd party tools' maximum compression levels take twice as long. The backup file gain is not that impressive, except at the highest compression levels, but the price you pay for those is a very high CPU load and a much longer run time. Only SQL Safe by Idera was quite fast at its maximum compression level, but for most of the run time it used 95% CPU on the server. Note that I used two types of destination storage, SATA (11 disks) and FC (53 disks), and on the faster storage I obviously got my backup ready in half the time. Looking at these results, should we spend money and bother with another layer of complexity and a software middle-man for medium sized databases? I'm definitely not going to do so.

    Very large database. As the next phase of this benchmark, I moved to a 6 terabyte database, which was actually my main backup target. Note how using multiple files enables the SQL Server backup operation to use parallel I/O and remarkably increases its speed, especially when the backup device is heavily striped.

    SQL Server supports a maximum of 64 backup devices for a single backup operation, but the most speed is gained when using one file per CPU - in the case above, 8 files for a 2 quad-core CPU server. The impact of additional files is minimal. However, SQL Safe doesn't show any speed improvement between 4 files and 8 files. Of course, with such huge databases every half percent of compression turns into noticeable numbers. Saving almost 470GB of space may turn the backup tool into quite a valuable purchase. Still, backup speed and high CPU usage are variables that should be taken into consideration. As for us, backup speed is more critical than storage, and we cannot allow a production server to sustain 95% CPU for such a long time. Bottom line: 3rd party backup tool developers, we are waiting for a breakthrough release. There are a few unanswered questions, like the restore speed comparison between the different tools and the impact of multiple backup files on the restore operation. Stay tuned for the next benchmarks.

    Benchmark server: SQL Server 2008 R2 SP1, 2 quad-core CPUs. Database location: NetApp FC 15K aggregate, 53 discs.

    Backup statements: No matter how good a tool's UI is, we need to run the backup tasks from inside SQL Server Agent to make sure they are covered by our monitoring systems. I used the extended stored procedures (command line execution is also an option; I haven't noticed any impact on backup performance).

    SQL backup: backup database <DBNAME> to disk= '\\<networkpath>\par1.bak', disk= '\\<networkpath>\par2.bak', disk= '\\<networkpath>\par3.bak' with format, compression

    LiteSpeed: EXECUTE master.dbo.xp_backup_database @database = N'<DBName>', @backupname= N'<DBName> full backup', @desc = N'Test', @compressionlevel=8, @filename= N'\\<networkpath>\par1.bak', @filename= N'\\<networkpath>\par2.bak', @filename= N'\\<networkpath>\par3.bak', @init = 1

    SQL Backup: EXECUTE master.dbo.sqlbackup '-SQL "BACKUP DATABASE <DBNAME> TO DISK= ''\\<networkpath>\par1.sqb'', DISK= ''\\<networkpath>\par2.sqb'', DISK= ''\\<networkpath>\par3.sqb'' WITH DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 4, INIT"'

    SQL safe: EXECUTE master.dbo.xp_ss_backup @database = 'UCMSDB', @filename = '\\<networkpath>\par1.bak', @backuptype = 'Full', @compressionlevel = 4, @backupfile = '\\<networkpath>\par2.bak', @backupfile = '\\<networkpath>\par3.bak'

    If you still insist on using 3rd party tools for the backups in your production environment at maximum compression level, you will definitely need to consider limiting CPU usage, which will increase the backup operation time even more: Red Gate: use the THREADPRIORITY option (values 0 - 6); LiteSpeed: use @throttle (a percentage, like 70%); SQL safe: the only thing I have found was the @Threads option.

    Yours, Maria
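    As an aside to the benchmark above: if you want to see what compression ratio your own native backups achieve, the msdb backup history exposes both the raw and the compressed size. A minimal sketch (column names as documented for SQL Server 2008 and later):

        -- Compare raw vs. compressed size of recent full backups
        SELECT TOP (10)
               database_name,
               backup_start_date,
               backup_size / 1048576.0            AS backup_size_mb,
               compressed_backup_size / 1048576.0 AS compressed_size_mb,
               backup_size * 1.0 / NULLIF(compressed_backup_size, 0) AS compression_ratio
        FROM msdb.dbo.backupset
        WHERE type = 'D'                          -- full database backups
        ORDER BY backup_start_date DESC;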

    Read the article

  • SQL Down Under Show 51 - Guest Conor Cunningham - Now online

    - by Greg Low
    Late last night I got to record an interview with Conor Cunningham. Most people that know Conor have come across him as the product team wizard that knows so much about query processing and optimization in SQL Server. Conor is currently spending quite a lot of time working on Windows Azure SQL Database, which we used to know as SQL Azure. I'm still trying to think of a good way to say "WASD". I suppose I'll pronounce it like "wassid". Windows Azure SQL Reporting is easier. I think it just needs to be pronounced like "wazza" with a very Australian accent. In the show, we've spent time on the current state of the platform, on dispelling a number of common misbeliefs about the product, and hopefully on answering most of the common questions that seem to get asked about it. We then ventured into Federations, Data Sync, and Reporting. You'll find the show (and previous shows) here: http://www.sqldownunder.com/Resources/Podcast.aspx Enjoy! PS: For those that like transcripts, we've got the process for producing them much improved now and the transcript should also be up within a few days.

    Read the article

  • Three Buckets of Knowledge

    - by BuckWoody
    As I learn more and more about SQL Server every day, I divide up my information into three "buckets":

    Concepts. In the first bucket are the general concepts about the topic. What is it? What does it do (or sometimes, what is it supposed to do)? How does one operation flow to another? For this information I use books, magazine articles and, believe it or not, Wikipedia. I don't always trust that last source, but I do use it to see how others lay out their thoughts around a concept. I really like graphical charts that show me the process flow if I can get them, and this is an ideal place for a good presentation. In fact, this may be the only real use for a presentation - I'll explain what I mean in a moment.

    Reference. The references for a topic include things like Transact-SQL (T-SQL) syntax, or the screen layout on a panel, things like that. Think dictionary. The only reference I trust for this information is Books Online - presentations are fine, but we're talking about a dictionary. Ever go to a movie that just reads through a dictionary? Me neither. But I have gone to presentations where people try to include tons of reference material in their slides. Even if you give me the presentation material later, it's not really a searchable, readable medium.

    How To. A how-to for me is an example, or even better, a tutorial about an example. Whatever it is, it shows me a practical use for the concepts and of course involves the syntax. The important thing here is that you need to be able to separate the example the person is showing you from the stuff you need to know. I can't tell you how many times folks have told me, "well, sure, if yours is red then that works. But mine is blue." And I have to explain, "then use 'blue' for the search word here." You get the idea. No one will do your work for you - the examples are meant as a teaching tool only. I accept that, learn what I can, and then run off to create my own thing. You might think a How To works well in a presentation, and it does, for the most part. For a complex example or tutorial, I still prefer the printed word (electronic if possible) so that I can go over the example multiple times, skip around and so on.

    The order here isn't actually that important. Most of the time I start with a concept, look at an example, and then read the reference material. But sometimes I look up an example, read a little of the concepts and then check the reference. The only thing I try to enforce is to read something from each of them. It's dangerous to base your work on any single example, reference or concept.

    Read the article

  • Auditing database source code changes

    - by John Paul Cook
    Auditing changes to database source code can be easily implemented with a database trigger. Here's a simple implementation of stored procedure auditing using an audit table and a database trigger. It assumes that a schema named Audit already exists. CREATE TABLE Audit.AuditStoredProcedures (DatabaseName sysname, ObjectName sysname, LoginName sysname, ChangeDate datetime, EventType sysname, EventDataXml xml); Notice the EventDataXml column. Using an nvarchar column to store the source text...(read more)
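    The excerpt is cut off before the trigger itself, so as a minimal sketch of the kind of database-level DDL trigger the post describes (assuming the Audit.AuditStoredProcedures table above; this is an illustration, not necessarily the author's exact code):

        -- Capture stored procedure DDL changes into the audit table
        CREATE TRIGGER AuditStoredProcedureChanges
        ON DATABASE
        FOR CREATE_PROCEDURE, ALTER_PROCEDURE, DROP_PROCEDURE
        AS
        BEGIN
            DECLARE @EventData xml = EVENTDATA();
            INSERT INTO Audit.AuditStoredProcedures
                (DatabaseName, ObjectName, LoginName, ChangeDate, EventType, EventDataXml)
            VALUES
                (DB_NAME(),
                 @EventData.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(128)'),
                 ORIGINAL_LOGIN(),
                 GETDATE(),
                 @EventData.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(128)'),
                 @EventData);
        END;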

    Read the article

  • SQL Community – stronger than ever

    - by Rob Farley
    I posted a reflection on the Summit a few hours ago, but I wanted to write another one for this month's T-SQL Tuesday, hosted by Chris Yates.

    In January of this year, Adam Jorgensen and I joked around in a video that was used for the SQL Server 2012 launch. We were asked about SQLFamily, and we said how we were like brothers - how we could drive each other crazy (the look he gave me as I patted his stomach was priceless), but that we'd still look out for each other, just like in a real family. And this is really true. Last week at the PASS Summit, there was a lot going on. I was busy as always, as were many others. People told me their good news, their awful news, and some whinged to me about other people who were driving them crazy. But throughout this, people in the SQL Server community genuinely want the best for each other. I'm sure there are exceptions, but I don't see much of this.

    Australians aren't big on cheering for each other. Neither are the English. I think we see it as an American thing. It could be easy for me to consider that the SQL Community that I see at the PASS Summit is mainly there because it's a primarily American organisation. But when you speak to people like sponsors, or people involved in several types of communities, you quickly hear that it's not just about that - that PASS has something special. It goes beyond cheering; it's a strong desire to see each other succeed. I see MVPs feel disappointed for those people who don't get awarded. I see Summit speakers concerned for those who missed out on the chance to speak. I see chapter leaders excited about the opportunity to help other chapters. And throughout, I see a gentleness and love for people that you rarely see outside the church (and sadly, many churches don't have it either).

    Chris points out that the M-W dictionary defined community as "a unified body of individuals", and I feel like this is true of the SQL Server community. It goes deeper though. It's not just unity - and we're most definitely different to each other - it's more than that. We all want to see each other grow. We all want to pull ourselves up, to serve each other, and to grow PASS into something more than it is today.

    In that other post of mine I wrote a bit about Paul White's experience at his first Summit. His missus wrote to me on Facebook saying that she welled up over it. But that emotion was nothing about what I wrote - it was about the reaction that the SQL Community had had to Paul. Be proud of it, my SQL brothers and sisters, and never lose it.

    Read the article

  • ArchBeat Link-o-Rama for 2012-10-11

    - by Bob Rhubart
    Whiteboards, not red carpets. OTN Architect Day Los Angeles. Oct 25. Free event. Yes, it's TinselTown, but the stars at this event are experts in the use of Oracle technologies in today's architectures. This free event includes a full slate of technical sessions and peer interaction covering cloud computing, SOA, and engineered systems - and lunch is on us. Register now. Thursday October 25, 2012, 8:00 a.m. - 5:00 p.m. Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048

    JDeveloper extensions where? | Peter Paul van de Beek - "Where does the downloaded stuff go after you installed JDeveloper extensions, like SOA Composite Editor, Oracle BPM Studio, or AIA Service Constructor?" Peter Paul van de Beek has the answer.

    Using Apache Derby Database with WebLogic (the express way) | Frank Munz - Another technical how-to video from Dr. Frank Munz.

    Compensation Hello World | Ronald van Luttikhuizen - Oracle ACE Director Ronald van Luttikhuizen's post addresses several questions that came up during the "Effective Fault Handling in SOA Suite 11g" session that he and fellow Oracle ACE Guido Schmutz presented at Oracle OpenWorld.

    Oracle Fusion Middleware Security: OAM and OIM 11g Academies - Looking for technical how-to content covering Oracle Access Manager and Oracle Identity Manager? The people behind the Oracle Middleware Security blog have indexed relevant blog posts into what they call "Academies." "These indexes," the blog explains, "contain the articles we've written that we believe provide long lasting guidance on OAM and OIM. Posts covered in these series include articles on key aspects of OAM and OIM 11g, best practice architectural guidance, integrations, and customizations."

    Maximum Availability Whitepaper for IDM 11gR2 | Oracle Fusion Middleware Security - The Oracle Fusion Middleware A-Team shares an overview of and a link to a new white paper: "Identity Management 11.1.2 Enterprise Deployment Blueprint."

    Thought for the Day: "The trouble with the world is that the stupid are sure and the intelligent are full of doubt." - Bertrand Russell (May 18, 1872 - February 2, 1970). Source: SoftwareQuotes.com

    Read the article

  • C++ Accelerated Massive Parallelism

    - by Daniel Moth
    At AMD's Fusion conference, Herb Sutter announced in his keynote session a technology that our team has been working on that we call C++ Accelerated Massive Parallelism (C++ AMP), and during the keynote I showed a brief demo of an app built with our technology. After the keynote, I go deeper into the technology in my breakout session. If you read both those abstracts, you'll get some information about what C++ AMP is, without being too explicit, since we published the abstracts before the technology was announced. You can find the official online announcement at Soma's blog post. Here, I just wanted to capture the key points about C++ AMP that can serve as an introduction and an FAQ. So, in no particular order, C++ AMP:

    - lowers the barrier to entry for heterogeneous hardware programmability and brings performance to the mainstream, without sacrificing developer productivity or solution portability.
    - is designed not only to help you address today's massively parallel hardware (i.e. GPUs and APUs), but also future-proofs your code investments with a forward looking design.
    - is part of Visual C++. You don't need to use a different compiler or learn different syntax.
    - is modern C++. Not C or some other derivative.
    - is integrated and supported fully in Visual Studio vNext. Editing, building, debugging, profiling and all the other goodness of Visual Studio work well with C++ AMP.
    - provides an STL-like library as part of the existing concurrency namespace, delivered in the new amp.h header file.
    - makes it extremely easy to work with large multi-dimensional data on heterogeneous hardware, in a manner that exposes parallelization.
    - introduces only one core C++ language extension.
    - builds on DirectX (and DirectCompute in particular), which offers a great hardware abstraction layer that is ubiquitous and reliable. The architecture is such that this point can be thought of as an implementation detail that does not surface to the API layer.

    Stay tuned on my blog for more over the coming months, where I will switch from just talking about C++ AMP to showing you how to use the API with code examples. Comments about this post are welcome at the original blog.

    Read the article

  • Book Review: Fast Track to MDX

    - by Greg Low
    Another book that I re-read while travelling last week was Fast Track to MDX . I still think that it's the best book that I've seen for introducing the core concepts of MDX. SolidQ colleague Mark Whitehorn, along with Mosha Pasumansky and Robert Zare do an amazing job of building MDX knowledge throughout the book. I had dinner with Mark in London a few years back and I was pestering him to update this book. The biggest limitation of the book is that it was written for SQL Server 2000 Analysis Services,...(read more)

    Read the article

  • SQL Server v.Next (Denali) : Breaking change to fn_virtualfilestats

    - by AaronBertrand
    Yesterday I posted a general warning about changes to Denali that will potentially break your existing code base, with a strong suggestion to grab the summer CTP as soon as it is available and start testing. I posted an example of a breaking change that will not be documented since it affects a commonly-used but undocumented DBCC command (DBCC LOGINFO), and also mentioned a couple of other changes in passing. Today it occurred to me that it may be more useful if, when I come across a potential...(read more)

    Read the article

  • When the obvious answer is obviously wrong

    - by John Paul Cook
    This post is about how simple math in T-SQL can produce undesirable results, but first we begin with a math quiz. Answer the following as quickly as possible: You just read pages 100-300 of a book. How many pages did you read? QUICKLY NOW! For those of you who answered 200 pages, I have a new question: Which page did you not read? There were 201 pages to read. If you read 200 pages, you skipped a page! What would your answer be if I asked you how many pages you read if you read pages 1-3? Three pages!...(read more)
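    A minimal T-SQL illustration of the same inclusive-range arithmetic (the variable names are made up for this sketch):

        -- Counting pages 100 through 300 inclusively
        DECLARE @FirstPage int = 100, @LastPage int = 300;

        SELECT @LastPage - @FirstPage     AS naive_subtraction,     -- 200, off by one
               @LastPage - @FirstPage + 1 AS pages_actually_read;   -- 201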

    Read the article

  • MDW Reports–New Source Code ZIP File Available

    - by billramo
    In my MDW Reports series, I attached V1 of the RDL files in my post - May the source be with you! MDW Report Series Part 6 - The Final Edition. Since that post, Rachna Agarwal from MSIT in India updated the RDL files, and they are ready to go in a single ZIP. The reports assume that they will be uploaded to the Report Manager's root folder and use a shared data source named MDW. The reports also integrate with the new Query Hash Statistics reports. You can download them from my SQLBlog.com download site.

    Read the article

  • What's the best platform for blogging about coding?

    - by timday
    I'm toying with starting an occasional blog for posting odd bits of coding-related stuff (mainly C++, probably). Are there any platforms which can be recommended as providing exceptionally good support (e.g. syntax highlighting) for posting snippets of code? (Or any to avoid, because posting mono-spaced font blocks of text is a pain.) Outcome: I accepted Josh K's answer because what I actually ended up doing was realizing I was more interested in articles than a blog style, getting back into LaTeX (after almost 20 years away from it), using the "listings" package for code, and pushing the HTML/PDF results to my ISP's static-hosting pages (HTML generated using tex4ht). Kudos to the answers mentioning Wordpress, Tumblr and Jekyll; I spent some time looking into all of them.

    Read the article

  • What makes a great place to work

    - by Rob Farley
    Coincidentally, I've been looking for office space for LobsterPot Solutions during the same few days that Luke Hayler ( @lukehayler ) has asked for my thoughts (okay, he 'tagged' me) on what makes a great place to work. He lists People and Environment, and I'm inclined to agree, but with a couple of other things too. I have three children. Two of them (both boys) are in school, but my daughter is only two. For the boys' schools, we quickly realised that what they need most is a feeling of safety...(read more)

    Read the article

  • Believeland

    - by AllenMWhite
    My daughter sent me this link to an article on ESPN called Believeland , which is a bit of a Tolstoy, but gives you a bit of the feel of what it's like to be from Cleveland. We love our city, even as many of us leave it for greener (and warmer) pastures elsewhere. One of the things I hope you'll find when you come here for SQL Saturday 60 on February 5, is what a great place it is to live here. Our call for speakers is open until Sunday. I hope to see you here. Allen...(read more)

    Read the article

  • Did You Know? More online seminars!

    - by Kalen Delaney
    I am in Tucson again, having just recorded two more online workshops to be broadcast by SSWUG. We haven't set the dates yet, but we are thinking about offering a special package deal for the two of them. The topics really are related and I think they would work well together. They are both on aspects of Query Processing. The first was on how to interpret Query Plans and is an introduction to the topic. However, it only includes a discussion of how SQL Server actually processes your queries. For example,...(read more)

    Read the article

  • SQLSaturday #69 - Philly Love

    - by Mike C
    Thanks to the Philly SQL Server User Group (PSSUG) and to everyone who attended SQLSaturday #69 in the City of Brotherly Love yesterday. It was a great event with a lot of great people. My presentations are available for download at the links below: http://www.sqlsaturday.com/viewsession.aspx?sat=69&sessionid=3333 http://www.sqlsaturday.com/viewsession.aspx?sat=69&sessionid=3334 I just went through my speaker evaluations, and I'm happy to report the response was pretty positive across the...(read more)

    Read the article

  • Disaster, or Migration?

    - by Rob Farley
    This post is in two parts - technical and personal. And I should point out that it's prompted in part by this month's T-SQL Tuesday, hosted by Allen Kinsel.

    First, the technical: I've had a few conversations with people recently about migration - moving a SQL Server database from one box to another (sometimes, but not primarily, involving an upgrade). One question that tends to come up is that of downtime. Obviously there will be some period of time between the old server going offline and the new one becoming available. The way that most people seem to think of migration is this:

    1. Build a new server.
    2. Stop people from using the old server.
    3. Take a backup of the old server.
    4. Restore it on the new server.
    5. Reconfigure the client applications (or alternatively, configure the new server to use the same address as the old).
    6. Bring the new server online.

    There are other things involved, such as testing, of course. But this is essentially the process that people tell me they're planning to follow. The bit that I want to look at today (as you've probably guessed from my title) is the "backup and restore" section.

    If a SQL database is using the Simple Recovery Model, then the only restore option is the last database backup. This backup could be full or differential. The transaction log never gets backed up in the Simple Recovery Model; instead, it truncates regularly to stay small. A database using the Full Recovery Model (or Bulk-Logged) won't truncate its log - the log must be backed up regularly. This provides the benefit of having a lot more options available for restores. It's a requirement for most High Availability systems, because if you're making sure that a spare box is up and running, ready to take over, then you have to be interested in the logs that are happening on the current box, rather than truncating them all the time. A High Availability system such as Mirroring, Replication or Log Shipping will initialise the spare machine by restoring a full database backup (and maybe a differential backup if available), and then any subsequent log backups. Once the secondary copy is close, transactions can be applied to keep the two in sync.

    The main aspect of any High Availability system is to have a redundant system that is ready to take over. So the similarity for migration should be obvious. If you need to move a database from one box to another, then introducing a High Availability mechanism can help. By turning on the Full Recovery Model and then taking a backup (so that the now-interesting logs have some context), logs start being kept, and are therefore available for getting the new box ready (even if it's an upgraded version). When the migration is ready to occur, a failover can be done, letting the new server take over the responsibility of the old, just as if a disaster had happened. Except that this is a planned failover, not a disaster at all. (A minimal T-SQL sketch of this approach appears at the end of this post.)

    There's a fine line between a disaster and a migration. Failovers can be useful in patching, upgrading, maintenance, and more. Hopefully, even an unexpected disaster can be seen as just another failover, and there can be an opportunity there - perhaps to get some work done on the principal server to increase robustness. And if I've just set up a High Availability system for even the simplest of databases, it's not necessarily a bad thing. :)

    So now the personal: It's been an interesting time recently... June has been somewhat odd. A court case with which I was involved got resolved (through mediation).
    I can't go into details, but my lawyers tell me that I'm allowed to say how I feel about it. The answer is 'lousy'. I don't regret pursuing it as long as I did - but in the end I had to make a decision regarding the commerciality of letting it continue, and I'm going to look forward to the days when the kind of money I spent on my lawyers is small change. Mind you, if I had a similar situation with an employer, I'd do the same again, but that doesn't really stop me feeling frustrated about it.

    The following day I had to fly to country Victoria to see my grandmother, who wasn't expected to last the weekend. She's still around a week later as I write this, but her 92-year-old body has basically given up on her. She's been a Christian all her life, and is looking forward to eternity. We'll all miss her though, and it's hard to see my family grieving.

    Then on Tuesday, I was driving back to the airport with my family to come home, when something really bizarre happened. We were travelling down the freeway, and had just pulled out to go past a truck (farm-truck sized, not a semi-trailer), when a car-sized mass of metal fell off it. It was something like an industrial air-conditioner, but from where I was sitting, it was just a mass of spinning metal, like something out of a movie (one friend described it as "holidays by Michael Bay"). Somehow, and I really don't know how, the part of it nearest us bounced high enough to clear the car, and there wasn't even a scratch. We pulled over to check, and I was just thanking God that we'd changed lanes when we had, and that we remained unharmed. I had all kinds of thoughts about what could've happened if we'd had something that size land on the windscreen...

    All this has drilled home that while I feel that I haven't provided as well for the family as I could've done (like by pursuing an expensive legal case), I shouldn't even consider that I have proper control over things. I get to live life, and make decisions based on what I feel is right at the time. But I'm not going to get everything right, and there will be things that feel like disasters, some which could've been in my control and some which are very much beyond my control. The case feels like something I could've pursued differently, a disaster that could've been avoided in some way. Gran dying is lousy of course. An accident on the freeway would have been awful. I need to recognise that the worst disasters are ones that I can't affect, and that I need to look at things in context - perhaps seeing everything that happens as a migration instead.

    Life is never the same from one day to the next. Every event has a before and an after - sometimes it's clearly positive, sometimes it's not. I remember good events in my life (such as my wedding), and bad (such as the loss of my father when I was ten, or the back injury I had eight years ago). I'm not suggesting that I know how to view everything from the "God works all things for good" perspective, but I am trying to look at last week as a migration of sorts. Those things are behind me now, and the future is in God's hands. Hopefully I've learned things, and will be able to live accordingly. I've come through this time now, and even though I'll miss Gran, I'll see her again one day, and the future is bright.
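    As referenced in the technical half above, a minimal T-SQL sketch of the failover-style migration (server, database and share names here are hypothetical; this illustrates the general approach rather than any specific script from the post):

        -- On the old server: switch to full recovery and start a log chain
        ALTER DATABASE SalesDb SET RECOVERY FULL;
        BACKUP DATABASE SalesDb TO DISK = N'\\backupshare\SalesDb_full.bak';
        BACKUP LOG SalesDb TO DISK = N'\\backupshare\SalesDb_log1.trn';

        -- On the new server: restore everything WITH NORECOVERY so more logs can follow
        RESTORE DATABASE SalesDb FROM DISK = N'\\backupshare\SalesDb_full.bak' WITH NORECOVERY;
        RESTORE LOG SalesDb FROM DISK = N'\\backupshare\SalesDb_log1.trn' WITH NORECOVERY;

        -- At cutover: take the tail of the log on the old server (leaving it in RESTORING state),
        -- apply it on the new server, and recover the database there
        BACKUP LOG SalesDb TO DISK = N'\\backupshare\SalesDb_tail.trn' WITH NORECOVERY;
        RESTORE LOG SalesDb FROM DISK = N'\\backupshare\SalesDb_tail.trn' WITH RECOVERY;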

    Read the article

  • Blogging from the PASS Summit : WIT Luncheon

    - by AaronBertrand
    SQL Sentry is very proud to sponsor the 10th annual Women in Technology Luncheon at the PASS Summit. Probably 700 people in here - pretty crowded house. This luncheon is growing year over year and is always a refreshing and interesting event to attend. Bill Graziano kicks things off and introduces our moderator, Wendy Pastrick. The panel is made up of Stefanie Higgins (actually the founder of the WIT Luncheon event), Denise McInerney, Kevin Kline, Jen Stirrup and Kendra Little. Stefanie talked about...(read more)

    Read the article

  • What permissions are required for SET IDENTITY_INSERT ON?

    - by AaronBertrand
    SQL Server 2000's SET IDENTITY_INSERT ON topic says: Execute permissions default to the sysadmin fixed server role, and the db_owner and db_ddladmin fixed database roles, and the object owner. While the SET IDENTITY_INSERT topic for SQL Server 2005 (and up) says: User must own the object, or be a member of the sysadmin fixed server role, or the db_owner and db_ddladmin fixed database roles. This was clearly adapted from the 2000 books online and re-written by someone who misinterpreted "db_owner...(read more)
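    For context, a minimal example of the statement those permissions apply to (table name and values here are hypothetical):

        -- Hypothetical table with an identity column
        CREATE TABLE dbo.Orders (OrderID int IDENTITY(1,1) PRIMARY KEY, OrderDate datetime NOT NULL);

        -- Requires ownership of the table, or membership in sysadmin, db_owner or db_ddladmin
        SET IDENTITY_INSERT dbo.Orders ON;
        INSERT INTO dbo.Orders (OrderID, OrderDate) VALUES (42, '20110101');  -- explicit identity value
        SET IDENTITY_INSERT dbo.Orders OFF;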

    Read the article

  • SQL Server 2008 R2: StreamInsight - User-defined aggregates

    - by Greg Low
    I'd briefly played around with user-defined aggregates in StreamInsight with CTP3, but when I started working with the new Count Windows, I found I had to have one working. I learned a few things along the way that I hope will help someone. The first thing you have to do is define a class: public class IntegerAverage : CepAggregate<int, int> { public override int GenerateOutput(IEnumerable<int> eventData) { if (eventData.Count() == 0) { return 0; } else { return eventData.Sum()...(read more)

    Read the article

  • The 2013 PASS Summit - Day 2

    - by AllenMWhite
    Good morning! It's Day 2 of the PASS Summit 2013 and it should be a busy one. Douglas McDowell, EVP Finance of PASS opened up the keynote to welcome people and talked about the financial status of the organization. Last year's Business Analytics Conference left the organization $100,000 ahead, and he went on to show the overall financial health, which is very good at this point. Bill Graziano came out to thank Doug, Rob Farley and Rushabh Mehta for their service on the board, as they step down from...(read more)

    Read the article

  • Write DAX queries in Report Builder #ssrs #dax #ssas #tabular

    - by Marco Russo (SQLBI)
    If you use Report Builder with Reporting Services, you can use DAX queries even though the editor for the Analysis Services provider does not support DAX syntax. In fact, the DMX editor that you can use in the Visual Studio editor of Reporting Services (see a previous post on that) is not available in Report Builder. However, as Sagar Salvi commented in this Microsoft Connect entry, you can use DAX query text in the query of a Dataset by using the OLE DB provider instead of the Analysis Services one. I think it's a good idea to show the steps required. First, create a Data Source using the OLE DB connection type, and in the connection string provide the provider (Provider), the server name (Data Source) and the database name (Initial Catalog), such as: Provider=MSOLAP;Data Source=SERVERNAME\TABULAR;Initial Catalog=AdventureWorks Tabular Model SQL 2012 Then, create a Dataset using the data source previously defined, select the Text query type, and write the DAX code in the Query pane. You can also use the Query Designer window, which doesn't provide any particular help in writing the DAX query, but at least can show a preview of the result of the query execution. I hope DAX will get better editors in the future... in the meantime, remember you can use DAX Studio to write and test your DAX queries, and DAX Formatter to improve their readability! If you want to learn the DAX Query Language, I suggest watching my video Data Analysis Expressions as a Query Language on Project Botticelli!

    Read the article

  • More Tables or More Databases?

    - by BuckWoody
    I got an e-mail from someone with an interesting situation. He has 15,000 customers, and he asks if he should have a separate database per customer for their data. Without a LOT more data it's impossible to say, of course, but there are some general concepts to keep in mind.

    Whenever you're segmenting data, it's all about boundary choices. You have not only boundaries around how big the data will get, but things like how many objects (tables, stored procedures and so on) will be involved, whether there are any cross-sections of data (do they share location or product information) and - very important - what are the security requirements? From the answers to these types of questions, you now have the choice of making multiple tables in a single database, or using multiple databases.

    A database carries some overhead - it needs a certain amount of memory for locking and so on. But it has a very clean boundary - everything from objects to security can be kept apart. Having multiple users in the same database is possible as well, using things like a Schema. But keeping 15,000 schemas can be challenging as well.

    My recommendation in complex situations like this is similar to a post on decisions that I did earlier - I lay out the choices on a spreadsheet in rows, and then my requirements at the top in the columns. I give each choice a number based on how well it meets each requirement. At the end, the highest number wins. And many times it's a mix - perhaps this person could segment customers into larger regions or districts or products, in a database. Within that database might be multiple schemas for the customers. Of course, if he needs to query across all customers, that becomes another requirement.
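    As a rough sketch of the schema-per-customer option described above (all object and login names here are hypothetical):

        -- One schema per customer inside a shared database
        CREATE SCHEMA Customer0001;
        GO
        CREATE TABLE Customer0001.Orders (OrderID int IDENTITY(1,1) PRIMARY KEY, OrderDate datetime NOT NULL);
        GO
        -- Scope a customer's database user to its own schema
        CREATE USER Customer0001User FOR LOGIN Customer0001Login WITH DEFAULT_SCHEMA = Customer0001;
        GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::Customer0001 TO Customer0001User;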

    Read the article
