Search Results

Search found 10442 results on 418 pages for 'my blog'.


  • Memories about Tadeusz Golonka

    - by Damian
    Today at 10:55 AM, Tadeusz Golonka - my greatest Mentor and Teacher - passed away. I had the opportunity to meet Tadek in person several times in recent years. It was always a great experience to see how he shared his energy and passion. I was always impressed, and came away with a lot of new ideas after each such meeting or lecture. I can remember the meeting in early 2009 and the brilliant speech he gave for us, the MVP community in Poland. We spent two days together and he talked to us the whole time. He gave us examples of how to share a passion for IT with other people and how to be a better person for others. He was the greatest Mentor I have ever met - I realized this during that meeting. My greatest dream was, and still is, to be "like Tadek". Many times I went to events just to see and hear him on stage ("in action"). I always wanted to have his energy, empathy and passion. Now I have to live without his good words and advice... Let me put here the words that Adam Cogan wrote on Tadek's profile on Facebook. I just can't write about that fatal accident. "The circumstances of Tadeusz Golonka's death are too tragic. Tad stood up to offer his seat to an elderly lady; he lost his balance, slipped and hit the tram door hard. He then fell out of the tram and hit the metal barriers that separate the tram rails from the street. It was a severe accident... So horrible. At first it seemed a miracle that he survived... he fought for several days. My thoughts are with his lovely family. The family have asked for blood donations as a symbolic gift; Tad received a lot of blood. Thank you Tad, you were a wonderful person. I will remember you as a kind man, a gentleman." RIP Tadeusz - you will never ever be forgotten. You are with us all the time.

    Read the article

  • Solving security issue in PowerPivot for SharePoint and Power View

    - by Marco Russo (SQLBI)
    I just installed a brand new server (well, a virtual machine) with SharePoint 2010 SP1 and SQL Server 2012 RC0, including PowerPivot and Reporting Services / Power View. The server is joined to the domain I use in our development environment. I published a workbook in the PowerPivot Gallery and my user was immediately able to connect to, browse and navigate the data of the Excel workbook published by SharePoint. Moreover, I was able to open it in Power View. However, other users' connections failed. After...(read more)

    Read the article

  • Big Data Learning Resources

    - by Lara Rubbelke
    I have recently had several requests from people asking for resources to learn about Big Data and Hadoop. Below is a list of resources that I typically recommend. I'll update this list as I find more resources. Let's crowdsource this... Tell me your favorite resources and I'll get them on the list!

    Books and Whitepapers

    Planning for Big Data (free e-book) - a great primer on the general Big Data space. This is always my recommendation for people who are new to Big Data and are trying to understand it....(read more)

    Read the article

  • Tracking first visit date in Google Analytics

    - by dusan
    I want to track a site's visitor retention using Google Analytics, to see whether unregistered users are returning to it within a time frame of 2+ months from now. This blog post seems to be on the right track, but I want to track unregistered users, so I don't have a "join date" or similar variable at my disposal. This other blog post suggests using all 5 GA custom variables, using the first variable slot on the first week, variable 2 on week 2, and so on. That method only lets me track 5 weeks of visitors. I want to track more than 5 weeks of visitors, so I was thinking of using two custom variables in GA: the visitor's first visit date and the visitor's last visit date. How can I save the first visit date? If I save another value in the same slot, the old value will be overwritten, and I don't know how reliable it is to save that variable conditionally (reading the __utmv cookie to check whether the "first visit date" is set, and saving the current date only if it isn't).

    Read the article

  • Hilarious

    - by James Luetkehoelter
    I don't know how many of you know about this site, but it raises my spirits on a daily basis. I found today's entry oddly familiar... http://thedailywtf.com/Articles/sp_getNothing.aspx ...(read more)

    Read the article

  • Updatable columnstore index, sp_spaceused and sys.partitions

    - by Michael Zilberstein
    The columnstore index in SQL Server 2014 adds two important new features: it can be clustered, and it is updatable. So I decided to play with both. As a "control group" I've taken my old columnstore index demo from one of the ISUG (Israeli SQL Server Usergroup) sessions. The script itself isn't important - it creates a partition function with 7 partitions (actually 8, but one remains empty), a table on it, and populates the table with 63 million rows - 9 million in each partition. So I used the same script...(read more)
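
    For anyone who wants to follow along, here is a minimal sketch of both new features; the table, index and column names are mine, not from the ISUG demo script:

        -- Minimal sketch (assumed names; not the demo script from the post)
        CREATE TABLE dbo.FactSales
        (
            SaleID   INT   NOT NULL,
            SaleDate DATE  NOT NULL,
            Amount   MONEY NOT NULL
        );

        -- New in SQL Server 2014: the columnstore index can be clustered...
        CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales;

        -- ...and the table remains updatable
        INSERT dbo.FactSales VALUES (1, '20140101', 100.00);

        -- Inspect space usage and per-partition row counts
        EXEC sp_spaceused N'dbo.FactSales';

        SELECT partition_number, rows
        FROM sys.partitions
        WHERE object_id = OBJECT_ID(N'dbo.FactSales');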

    Read the article

  • Where's the source code?

    - by Kyle Burns
    I've been contacted by several people through this blog asking about the missing source code for the "Beginning Windows 8 Application Development - XAML Edition" book (the book is available at http://www.amazon.com/gp/product/1430245662/) and wanted to share this with others who may have come to this blog looking for it but may not have communicated with me. The publisher (Apress) does know that the source code is not posted on the book's product page and will be correcting it. Apress is located in New York City and things were slowed down a little bit last week due to the storm, but I've been assured they will be correcting the product page as soon as they can. Thanks to everyone who has bought the book, and I especially appreciate your patience.

    Read the article

  • Upcoming Database Design Pre-Cons

    - by drsql
    In July and October, I will be doing my "How To Design a Relational Database" full-day pre-conference session in two places: first on July 26 for the East Iowa SQL Saturday, and then for the big daddy, the SQLPASS Summit in Charlotte, NC, on October 14. You can see the entire abstract here on the SQL PASS site. It is essentially the same concept as last year, but this year I am making a few big changes to really give the people what they have desired (and am truly glad to have a swing at it several months...(read more)

    Read the article

  • Holiday Stress

    - by andyleonard
    Photo by Brian J. Matis. Ever have one of these days? I have. According to studies like this one, I am not alone. This is a time of year when vacations loom right alongside project deadlines. There are parties to attend, additional expenses, work around the house, decisions about what to do for whom, and more. If you celebrate by decorating a house, tree, or lawn with lights, you may find yourself fighting them like the young lady pictured here! Stress at work, stress at home - stress everywhere!...(read more)

    Read the article

  • What types of objects are useful in SQL CLR?

    - by Greg Low
    I've had a number of people over the years ask whether a particular type of object is a good candidate for SQL CLR integration. The rules that I normally apply are as follows:

        Database Object   | Transact-SQL                | Managed Code
        Scalar UDF        | Generally poor performance  | Good option when limited or no data access
        Table-valued UDF  | Good option if data-related | Good option when limited or no data access
        Stored Procedure  | Good option                 | Good option when external access is required or limited data access
        DML...(read more)
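
    As a concrete illustration of the managed-code column (my sketch, not from Greg's post), this is the T-SQL side of wiring up a CLR scalar UDF; the assembly, class and method names here are hypothetical:

        -- Sketch only: assumes a compiled assembly MyClrLib.dll exposing a static
        -- method Utils.Reverse(SqlString) -- all names are hypothetical.
        -- (sp_configure 'clr enabled', 1 must be set first.)
        CREATE ASSEMBLY MyClrLib FROM N'C:\clr\MyClrLib.dll'
        WITH PERMISSION_SET = SAFE;
        GO
        CREATE FUNCTION dbo.ReverseString (@s NVARCHAR(4000))
        RETURNS NVARCHAR(4000)
        AS EXTERNAL NAME MyClrLib.Utils.Reverse;
        GO
        -- A scalar CLR UDF like this does no data access -- the "good option" cell above
        SELECT dbo.ReverseString(N'SQL CLR');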

    Read the article

  • Deletes that Split Pages and Forwarded Ghosts

    - by Paul White
    Can DELETE operations cause pages to split? Yes. It sounds counter-intuitive on the face of it; deleting rows frees up space on a page, and page splitting occurs when a page needs additional space. Nevertheless, there are circumstances when deleting rows causes them to expand before they can be deleted. The mechanism at work here is row versioning (see the extract from Books Online on isolation levels below). The relationship between row-versioning isolation levels (the first bullet point) and page splits is...(read more)
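
    A minimal repro sketch of the mechanism (my own, with assumed table and column names - not Paul's demo): rows created before row versioning is enabled carry no 14-byte version tag, so ghosting them on delete has to grow them, and an already-full page then splits:

        -- Rows sized so two fill a page almost exactly; the pad length may
        -- need tuning on a given version/build.
        CREATE TABLE dbo.SplitDemo
        (
            id  INT IDENTITY PRIMARY KEY,
            pad CHAR(4030) NOT NULL DEFAULT 'x'
        );
        INSERT dbo.SplitDemo DEFAULT VALUES;
        GO 4

        -- Enable row versioning *after* the rows exist
        ALTER DATABASE CURRENT SET ALLOW_SNAPSHOT_ISOLATION ON;

        SELECT index_level, page_count
        FROM sys.dm_db_index_physical_stats
            (DB_ID(), OBJECT_ID(N'dbo.SplitDemo'), NULL, NULL, 'DETAILED');

        -- The ghosted row must gain a 14-byte version tag, which can split a full page
        DELETE dbo.SplitDemo WHERE id = 1;

        SELECT index_level, page_count
        FROM sys.dm_db_index_physical_stats
            (DB_ID(), OBJECT_ID(N'dbo.SplitDemo'), NULL, NULL, 'DETAILED');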

    Read the article

  • 302 Redirect causes garbage at end of Wordpress link in Facebook

    - by Joao
    When I try to link my Wordpress blog to Facebook, the URL doesn't resolve properly. There's garbage appended at the end, and Facebook is not able to retrieve information from the site. It happens with every page, post, or the main entry. Here's what happens: http://clarissarezende.com.br/ shows up in Facebook as http://clarissarezende.com.br/UPLcS/ (when I copy/paste the link), and no information about the site shows up in FB. I'm using Wordpress 3.3.1 with ProPhoto 4. Recently I moved the DNS entry at my ISP. The blog is hosted at clarissarezende.com.br/public_html/blog2; before, the DNS pointed to public_html, and then I changed it to public_html/blog2. Note that I did not move any Wordpress files. I made the (I think) necessary changes all over Facebook, but still no dice... Any ideas on what could be happening?

    Read the article

  • Run database checks but omit large tables or filegroups - New option in Ola Hallengren's Scripts

    - by Greg Low
    One of the things I've always wanted in DBCC CHECKDB is the option to omit particular tables from the check. The situation that I often see is that companies with large databases often have only one or two very large tables. They want to run DBCC CHECKDB on the database to check everything except those couple of tables, due to time constraints. I posted a request on the Connect site about this some time ago: https://connect.microsoft.com/SQLServer/feedback/details/611164/dbcc-checkdb-omit-tables-option The workaround from the product team was that you could script out the checks that you did want to carry out, rather than omitting the ones that you didn't. I didn't overly like this as a workaround, as clients often had a very large number of objects that they did want to check and only one or two that they didn't. I've always been impressed with the work that our buddy Ola Hallengren has done on his maintenance scripts. He pinged me recently about my old Connect item and said he was going to implement something similar. The good news is that it's available now. Here are some examples he provided of the newly-supported syntax:

        EXECUTE dbo.DatabaseIntegrityCheck
            @Databases = 'AdventureWorks',
            @CheckCommands = 'CHECKDB'

        EXECUTE dbo.DatabaseIntegrityCheck
            @Databases = 'AdventureWorks',
            @CheckCommands = 'CHECKALLOC,CHECKTABLE,CHECKCATALOG',
            @Objects = 'AdventureWorks.Person.Address'

        EXECUTE dbo.DatabaseIntegrityCheck
            @Databases = 'AdventureWorks',
            @CheckCommands = 'CHECKALLOC,CHECKTABLE,CHECKCATALOG',
            @Objects = 'ALL_OBJECTS,-AdventureWorks.Person.Address'

        EXECUTE dbo.DatabaseIntegrityCheck
            @Databases = 'AdventureWorks',
            @CheckCommands = 'CHECKFILEGROUP,CHECKCATALOG',
            @FileGroups = 'AdventureWorks.PRIMARY'

        EXECUTE dbo.DatabaseIntegrityCheck
            @Databases = 'AdventureWorks',
            @CheckCommands = 'CHECKFILEGROUP,CHECKCATALOG',
            @FileGroups = 'ALL_FILEGROUPS,-AdventureWorks.PRIMARY'

    Note the syntax to omit an object from the list of objects, and the option to omit one filegroup. Nice! Thanks Ola! You'll find details here: http://ola.hallengren.com/

    Read the article

  • Rounding functions in DAX

    - by Marco Russo (SQLBI)
    Today I prepared a table of the many rounding functions available in DAX (yes, it's part of the book we're writing), so that I have a complete picture of the best function to use, depending on the rounding operation I need to do. Here is the list of functions used, and then the results shown for a relevant set of values:

        FLOOR     = FLOOR( Tests[Value], 0.01 )
        TRUNC     = TRUNC( Tests[Value], 2 )
        ROUNDDOWN = ROUNDDOWN( Tests[Value], 2 )
        MROUND    = MROUND( Tests[Value], 0.01 )
        ROUND     = ROUND( Tests[Value], 2 )
        ...(read more)
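
    To see where these functions differ, take Tests[Value] = 1.239 (my worked example, not one of the values from Marco's table): FLOOR( 1.239, 0.01 ), TRUNC( 1.239, 2 ) and ROUNDDOWN( 1.239, 2 ) all round down and return 1.23, while MROUND( 1.239, 0.01 ) and ROUND( 1.239, 2 ) round to the nearest multiple or digit and return 1.24.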

    Read the article

  • High Availability for IaaS, PaaS and SaaS in the Cloud

    - by BuckWoody
    Outages, natural disasters and unforeseen events have proved that even in a distributed architecture, you need to plan for High Availability (HA). In this entry I'll explain a few considerations for HA within Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). In a separate post I'll talk more about Disaster Recovery (DR), since each paradigm has a different way to handle that.

    Planning for HA in IaaS

    IaaS involves Virtual Machines - so in effect, an HA strategy here takes on many of the same characteristics as it would on-premises. The primary difference is that the vendor controls the hardware, so you need to verify what they do for things like local redundancy and so on from the hardware perspective. As far as what you can control and plan for, the primary factors fall into three areas: multiple instances, geographical dispersion and task-switching. In almost every cloud vendor I've studied, to ensure your application will be protected by any level of HA, you need to have at least two of the Instances (VMs) running. This makes sense, but you might assume that the vendor just takes care of that for you - they don't. If a single VM goes down (for whatever reason), then access to it is lost. Depending on multiple factors, you might be able to recover the data, but you should assume that you can't. You should keep a sync to another location (perhaps the vendor's storage system in another geographic datacenter, or to a local location) to ensure you can continue to serve your clients.

    You'll also need to host the same VMs in another geographical location. Everything from a vendor outage to a network path problem could prevent your users from reaching the system, so you need to have multiple locations to handle this. This means that you'll have to figure out how to manage state between the geos. If the system goes down in the middle of a transaction, you need to figure out what part of the process the system was in, and then re-create or transfer that state to the second set of systems. If you didn't write the software yourself, this is non-trivial. You'll also need a manual or automatic process to detect the failure and re-route the traffic to your secondary location. You could flip a DNS entry (if your application can tolerate that) or invoke another process to alias the first system to the second, such as load-balancing and so on. There are many options, but all of them involve coding the state into the application layer. If you've simply moved a stateful application to VMs, you may not be able to easily implement an HA solution.

    Planning for HA in PaaS

    Implementing HA in PaaS is a bit simpler, since it's built on the concept of stateless application deployment. Once again, you need at least two copies of each element in the solution (web roles, worker roles, etc.) to remain available in a single datacenter. Also, you need to deploy the application again in a separate geo, but the advantage here is that you can work out a "shared storage" model such that state is auto-balanced across the world. In fact, you don't have to maintain a "DR" site; the alternate location can be live and serving clients, and only take on extra load if the other site is not available. In Windows Azure, you can use the Traffic Manager service to route the requests as a type of auto-balancer. Even with these benefits, I recommend a second backup of storage in another geographic location. Storage is inexpensive, and that second copy can be used for not only HA but DR.

    Planning for HA in SaaS

    In Software-as-a-Service (such as Office 365, or Hadoop in Windows Azure), you have far less control over the HA solution, although you still have the responsibility to ensure you have it. Since each SaaS is different, check with the vendor on the solution for HA - and make sure you understand what they do and what you are responsible for. They may have no HA for that solution, or pin it to a particular geo, or perhaps they have massive HA built in with automatic load balancing (which is often the case).

    All of these options (with the exception of SaaS) involve higher costs for the design. Do not sacrifice reliability for cost - that will always cost you more in the end. Build in the redundancy and HA at the very outset of the project - if you try to tack it on later in the process, the business will push back and potentially not implement HA.

    References: http://www.bing.com/search?q=windows+azure+High+Availability (each type of implementation is different, so I'm routing you to a search on the topic - look for the "Patterns and Practices" results for the area in Azure you're interested in)

    Read the article

  • A bacon- (and module-) saving PowerShell incident

    - by AaronBertrand
    Earlier today I made a big goof. I opened a module in Notepad, intending to use it as the basis for a new module. I was in the process of using "File > Save As" when my phone rang and, at that precise instant, for some reason I clicked "File > Save" by mistake. After hitting Ctrl+Z 30 times to try to get the old version of the module back, I remembered that Notepad has never had more than one level of Undo. Back when I was coding ASP by hand, I was very well aware of this, but I...(read more)

    Read the article

  • I am not speaking at SQL Connections February 2011 meeting in Chicago suburbs

    - by Alexander Kuznetsov
    Usually it is an honor when we get to present to a user group, but not this time, so let me explain. I have no idea how my presentation came to be mentioned, without my consent, in the invitation which went out today. I have never asked or agreed to speak at the SQL Connections February 2011 meeting in the Chicago suburbs, yet I apologize for any inconvenience it might have caused. I was going to speak at the December 2010 meeting, which was agreed by email with the person in charge. I had spent some...(read more)

    Read the article

  • Approaching events #mstc11 #ppws #sqlbits

    - by Marco Russo (SQLBI)
    The spring season is always full of events and I'm just preparing for a number of them. First of all, we are seeing very good interest in the PowerPivot Workshop in Copenhagen on 21-22 March 2011. Tomorrow (Friday, March 4) will be the last day to take advantage of the Early Bird rate for this date. We will also participate in an evening meeting of local user groups on March 21 in Copenhagen; more news about this in the next few days. Other scheduled dates are in Dublin (28-29 March 2011) and in...(read more)

    Read the article

  • Avoiding connection timeouts on first connection to LocalDB edition of SQL Server Express

    - by Greg Low
    When you first make a connection to the new LocalDB edition of SQL Server Express, the system files, etc. that are required for a new version are spun up. (The system files such as the master database files, etc. end up in C:\Users\<username>\AppData\Local\Microsoft\Microsoft SQL Server Local DB\Instances\LocalDBApp1) That can take a while on a slower machine, so this means that the default connection timeout of 30 seconds (in most client libraries) could be exceeded. To avoid this hit on the...(read more)
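
    A likely workaround (my example - the post is truncated before the fix) is to raise the timeout in the client connection string, for instance:

        Server=(localdb)\v11.0;Integrated Security=true;Connect Timeout=60

    Connect Timeout is the standard SqlClient connection-string keyword; 60 seconds is an arbitrary value chosen to comfortably cover the first spin-up on a slow machine.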

    Read the article

  • Using Alt-select in SSMS, Word, and elsewhere

    - by John Paul Cook
    A surprising number of database people, and Windows users in general, don't know about Alt+select. This is a Windows technique, not unique to SSMS, that allows a user to select an arbitrary rectangular region of text and delete it, cut it, or copy it. Where I find Alt+select particularly useful in SSMS is when I have a bunch of inline comments that are too far to the right. I want to delete much of the whitespace in front of them to move them to the left, without disturbing any of the rest of the T-SQL....(read more)
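
    For instance (my illustration, not from John's post), Alt+dragging a rectangle over the run of spaces in front of these comments lets you delete it in one keystroke, so that

        SELECT OrderID,                              -- surrogate key
               OrderDate                             -- in UTC

    becomes

        SELECT OrderID,    -- surrogate key
               OrderDate   -- in UTC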

    Read the article

  • Book Review: Expert Cube Development with Microsoft SQL Server 2008 Analysis Services

    - by Greg Low
    I spent last week on campus in Redmond with the SQL Server Analysis Services Maestro program. It was great to have a chance to focus on SSAS for a week. As part of that, I did quite a bit of reading as I had quite a bit of travelling time. Ironically, I re-read a few books. The first was Marco Russo, Alberto Ferrari and Chris Webb's book Expert Cube Development with Microsoft SQL Server 2008 Analysis Services . I've often told BI classes that I've been teaching that this is a really good book and...(read more)

    Read the article

  • SQLIO Writes

    - by Grant Fritchey
    SQLIO is a fantastic utility for testing the abilities of the disks in your system. It has a very unfortunate name though, since it's not really a SQL Server testing utility at all. It really is a disk utility. They ought to call it DiskIO - they'd get more people using it, I think. Anyway, branding is not the point of this blog post. Writes are the point of this blog post.

    SQLIO works by slamming your disk. It performs as many reads as it can, or it performs as many writes as it can, depending on how you've configured your tests. There are much smarter people than me who will get into all the various types of tests you should run. I'd suggest reading a bit of what Jonathan Kehayias (blog|twitter) has to say, or wading into Denny Cherry's (blog|twitter) work. They're going to do a better job than I can of describing all the benefits and mechanisms around using this excellent piece of software.

    My concerns are very focused. I needed to set up a series of tests to see how well our product SQL Storage Compress worked. I wanted to know the effects it would have on a system - the disk for sure, but also memory and CPU. How to stress the system? SQLIO, of course. But when I set it up and ran it, following the documentation that comes with it, I was seeing better than 99% compression on the files. Don't get me wrong. Our product is magnificent, wonderful, all things great and beautiful, gets you coffee in the morning and is made mostly from bacon. But 99% compression? No, it's not that good. So what's up?

    Well, it's the configuration. The default mechanism is to load up a file, something large that will overwhelm your disk cache. You're instructed to load the file with the character 0x0. I never got a computer science degree; I went to film school. Because of this, I didn't memorize ASCII tables, so when I saw this, I thought it was zeroes or something. Nope. It's NULL. That's right: you're making a very large file, but you're filling it with NULL values. That's actually OK when all you're testing is the disk sub-system. But when you want to test compression and decompression, that can be an issue.

    I got around this fairly quickly. Instead of generating a file filled with NULL values, I just copied a database file for my tests. And to test it with SQL Storage Compress, I used a database file that had already been run through compression (about 40% compression on that file, if you're interested). Now the reads were taken care of. I am seeing very realistic performance from decompressing the information for reads through SQLIO. But what about writes?

    Well, the issue is: what does SQLIO write? I don't have access to the code, but I do have access to the results. I did two different tests, just to be sure of what I was seeing. First test: use the .DAT file as described in the documentation. I opened the .DAT file after I was done with SQLIO, using WordPad. Guess what? It's a giant file full of air. SQLIO writes NULL values. What does that do to compression? I did the test again on a copy of an uncompressed database file. Then I ran the original and the SQLIO-modified copy through ZIP to see what happened. I got better than 99% compression out of the SQLIO-modified file (the original file of 624,896 KB went to 275,871 KB compressed; after SQLIO it went to 608 KB compressed). So, what does SQLIO write? It writes air. If you're trying to test it with compression, or maybe some other type of file storage mechanism like dedupe, you need to know this, because your tests really won't be valid.

    Should I find some other mechanism for testing? If all I'm interested in is establishing performance to my own satisfaction, yes. But I want to be able to compare my results with other people's results, and we all need to be using the same tool in order for that to happen. SQLIO is the common mechanism that most people I know use to establish disk performance behavior. It'd be better if we could get SQLIO to do writes in some other fashion.

    Oh, and before I go, I get to brag a bit: measuring IOPS, SQL Storage Compress outperforms my disk alone by about 30%.

    Read the article

  • But what version is the database now?

    - by BuckWoody
    When you upgrade your system to SQL Server 2008 R2, you'll know that the instance is at that version by using the standard commands like SELECT @@VERSION or EXEC xp_msver. My system came back with this info when I typed those:

        Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (Intel X86)
        Apr  2 2010 15:53:02
        Copyright (c) Microsoft Corporation
        Developer Edition on Windows NT 6.0 <X86> (Build 6002: Service Pack 2) (Hypervisor)

        Index  Name                 Internal_Value  Character_Value
        1      ProductName          NULL            Microsoft SQL Server
        2      ProductVersion       655410          10.50.1600.1
        3      Language             1033            English (United States)
        4      Platform             NULL            NT INTEL X86
        5      Comments             NULL            SQL
        6      CompanyName          NULL            Microsoft Corporation
        7      FileDescription      NULL            SQL Server Windows NT
        8      FileVersion          NULL            2009.0100.1600.01 ((KJ_RTM).100402-1540 )
        9      InternalName         NULL            SQLSERVR
        10     LegalCopyright       NULL            Microsoft Corp. All rights reserved.
        11     LegalTrademarks      NULL            Microsoft SQL Server is a registered trademark of Microsoft Corporation.
        12     OriginalFilename     NULL            SQLSERVR.EXE
        13     PrivateBuild         NULL            NULL
        14     SpecialBuild         104857601       NULL
        15     WindowsVersion       393347078       6.0 (6002)
        16     ProcessorCount       1               1
        17     ProcessorActiveMask  1               1
        18     ProcessorType        586             PROCESSOR_INTEL_PENTIUM
        19     PhysicalMemory       2047            2047 (2146934784)
        20     Product ID           NULL            NULL

    But database properties are separate from the Instance. After an upgrade, you always want to make sure that the compatibility options (which have much to do with how NULLs and other objects are treated) are at what you expect. For the most part, as long as the application can handle it, I set my compatibility levels to the latest version. For SQL Server 2008, that was "10.0" or "10". You can do this with the ALTER DATABASE command, or you can just right-click the database and select "Properties" and then "Database Options" in SQL Server Management Studio. To check the database compatibility level, I use this query:

        SELECT name, cmptlevel FROM sys.sysdatabases

    When I did that this morning I saw that the databases (all of them) were at 10.0 - not 10.5 like the Instance. That's expected - we didn't revise the database format up with the Instance for this particular release. Didn't want to catch you by surprise on that. While your databases should be at the "proper" level for your situation, you can't rely on the compatibility level to indicate the Instance level. More info on the ALTER DATABASE command in SQL Server 2008 R2 is here: http://technet.microsoft.com/en-us/library/bb510680(SQL.105).aspx
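
    For reference, the ALTER DATABASE form mentioned above looks like this (a sketch against a hypothetical database name; SQL Server 2008 R2 keeps the 2008-era level of 100):

        ALTER DATABASE AdventureWorks SET COMPATIBILITY_LEVEL = 100;

        -- Verify the change (sys.databases is the newer alternative to sys.sysdatabases)
        SELECT name, compatibility_level FROM sys.databases;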

    Read the article

  • T-SQL User-Defined Functions: the good, the bad, and the ugly (part 3)

    - by Hugo Kornelis
    I showed why T-SQL scalar user-defined functions are bad for performance in two previous posts. In this post, I will show that CLR scalar user-defined functions are bad as well (though not always quite as bad as T-SQL scalar user-defined functions). I will admit that I had not really planned to cover CLR in this series. But shortly after publishing the first part, I received an email from Adam Machanic, which basically said that I should make clear that the information in that post does not apply...(read more)

    Read the article
