Search Results

Search found 18534 results on 742 pages for 'dave long'.


  • Files in /home deleted

    - by long-time user....2006
    In the most specific, unemotional terms: I reinstalled the OS, using 11.10 (one month after release, to skip the initial issues that usually crop up), and configured the system to my specifications (just my way of organizing config files, etc.). I logged out, logged back in an hour or so later... to find my home directory obliterated, with just a few skeleton files left. I thought, oh well, try again (this has happened before with an install, for reasons I've never been able to pinpoint, usually around install time with some sort of update, but it's never been a major recurring issue). The same thing happened. I thought something was awry, so I reinstalled again (another 20 minutes, meh), set up the system, and arranged my home directory a bit differently, thinking maybe I had trodden on something I shouldn't have. I logged out, came back --- the same thing. Most of the directories I added were deleted (e.g. .xmonad, which links to xmonad.hs in my portable config directory). tl;dr: every change I make in my home directory gets deleted. The emotional part: UNACCEPTABLE. I need to configure my system the way I want, not get punched in the face every time I make a change. I'll willingly fill in details as needed; this was just a start to see if anyone can help, as I've found no trace of this issue in a search.

  • SQL SERVER – Learn SQL Server 2014 Online in a Day – My Latest Pluralsight Course

    - by Pinal Dave
    Click here to watch SQL Server 2014 Administration New Features. SQL Server 2014 was released earlier this year and has been extremely popular in the Microsoft world. Here is an announcement for everyone who has been asking me to build a tutorial around SQL Server 2014: I have authored the latest Pluralsight course on the subject. The course is 4 hours and 17 minutes long, and the best part is that it covers all the latest features of SQL Server 2014. I built the course on the assumption that a DBA is familiar with earlier versions of SQL Server and wants to explore and learn the new features of SQL Server 2014.

    The Challenge I Faced: The biggest challenge was coming up with the outline for the course, because so many different features were introduced in SQL Server 2014 that it would be difficult to cover each of them in a single course. I wanted to cover the topics which are the most relevant and useful to developers, but I also wanted to cover the topics which may be useful for developers simply to know exist in the product. I finally decided to depend on blog readers and a few SQL experts. I reached out to 20 selected people via email and gave them a list of the topics I might cover in the course. They all work in different organizations and have a good understanding of the needs of DBAs and developers. Based on their feedback, I was able to come up with a very good outline which is currently very popular in the Pluralsight library. Lots of people have asked me how I was able to come up with a course outline so accurately. The credit goes to the developers and DBAs who voted on the topics and helped me build a very solid outline for the course.

    Outline of the Course:
    - Introduction
    - Backup Enhancements
    - Security Enhancements
    - Columnstore Enhancements
    - Online Data Operations Enhancements
    - Enhancements with Microsoft Azure
    - SSD Buffer Pool Extensions
    - Resource Governor IO
    - Miscellaneous Features
    - Online Index Rebuilding
    - Live Plans for Long Running Queries
    - Transaction Durability
    - Cardinality Estimation
    - In-Memory OLTP Optimization

    Well, I had great fun working on the topics mentioned in the outline. I am very confident that once you start the course, you will understand how each topic builds on the previous one and how it is presented. I have made sure each topic has a vivid and clear story to begin with: I first tell the story, and right after that I explain the concept.

    Who Should Attend This Course: Everyone who has basic knowledge of SQL Server and wants to bring themselves up to date with SQL Server 2014. I have made sure the course is easy to understand, and I have divided complex subjects into multiple parts. This way the learning is progressive, and anyone with little knowledge of the subject has enough time to understand the concepts presented.

    Screenshots of the Course: Here are a few screenshots from the course.

    How to Watch the Video Course: This course is available at Pluralsight, and you will need a valid Pluralsight login. If you do not have one, you can quickly sign up for the FREE trial. Click here to watch SQL Server 2014 Administration New Features.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority News, T SQL, Video

  • Silverlight Cream for May 27, 2010 -- #871

    - by Dave Campbell
    In this Issue: Phil Middlemiss, Max Paulousky, Jeff Wilcox, David Anson, René Schulte, Xianzhong Zhu, Jeff Handley, John Papa, Jeremy Likness, and Marlon Grech.

    Shoutouts: SilverLaw has a great demo at the Expression Gallery, and we're all going to look forward to the blog post explaining it: Flexible Surface Effect. SilverLaw has another use for the above in this text morphing effect: Morphing Text Effect. Matthias Shapiro contributed a chapter for a book on Visualization and it's available as a free download: Free Chapter From Beautiful Visualization. Andy Beaulieu has a demo up as almost a spoiler for a future Coding4Fun app... and how cool is this: Shuffleboard: A Windows Phone 7 Sample Game.

    From SilverlightCream.com:
    - Separating Content and Presentation with the ContentControl: Phil Middlemiss' latest is out on SilverlightShow and is all about the ContentControl and separating layout and content... demo project source included.
    - Search Engine Optimization (SEO) for Silverlight Applications, Part 1: Max Paulousky has part one of a long series on a demo project explaining a bunch of MEF, MVVM, and WCF RIA concepts. This first one contains the overview and also discusses SEO. There is a link to the app and material in the post if you read Russian :)
    - Updated Silverlight Unit Test Framework bits for Windows Phone and Silverlight 3: Jeff Wilcox has updated Unit Test bits available for Silverlight 3 -- read that as WP7... read the rest of the information in his post.
    - Easily animate orientation changes for any Windows Phone application with this handy source code: David Anson has some code up that you're going to want if you're programming WP7... just watch the video... you'll be downloading the code just like I did :)
    - SilverShader – Introduction to Silverlight and WPF Pixel Shaders: René Schulte has a post up at Coding4Fun about pixel shaders... how to write them and an application that uses them... this is a great long tutorial... a must read.
    - Developing Freecell Game Using Silverlight 3, Part 2: Xianzhong Zhu has part 2 of his FreeCell game development posted... lots of detailed descriptions and code, plus all the code of course!
    - Async Validation with RIA Services: Jeff Handley has a post up that is sort of a follow-on to a year-old post on async validation with RIA Services and DataForm, and how it's all much easier now in SL4.
    - Learning Blend with .toolbox (Silverlight TV #29): John Papa and Arturo Toledo discuss .toolbox in Silverlight TV #29 -- have you made yourself an avatar yet? ... well go get on board with this great learning tool!
    - Silverlight Out of Browser Dynamic Modules in Offline Mode: OOB isn't difficult, and dynamic modules are only a bit more, but what if you're OOB... ok, what if you're OOB and offline? Jeremy Likness has a possible solution for this with an OfflineCatalog.
    - MEFedMVVM v1.0 Explained: Marlon Grech has a great intro to MEFedMVVM in this post. If you're trying to get your head around MEF and MVVM in either WPF or Silverlight, here's a good starting point.

    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream. Join me @ SilverlightCream | Phoenix Silverlight User Group. Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

  • Creating an Objective-C++ Static Library in Xcode

    - by helixed
    So I've developed an engine for the iPhone with which I'd like to build a couple of different games. Rather than copy and paste the engine files into each game's project directory, I'd like a way to link to the engine from each game, so if I need to make a change to it I only have to do so once. After reading around a little bit, it seems like static libraries are the best way to do this on the iPhone. I created a new project called Skeleton and copied all of my engine files over to it. I used this guide to create a static library, and I imported the library into a project called Chooser. However, when I tried to compile the project, Xcode started complaining about some C++ data structures I included in a file called ControlScene.mm. Here are my build errors:

    ```
    "operator delete(void*)", referenced from:
        -[ControlScene dealloc] in libSkeleton.a(ControlScene.o)
        -[ControlScene init] in libSkeleton.a(ControlScene.o)
        __gnu_cxx::new_allocator<operation_t>::deallocate(operation_t*, unsigned long) in libSkeleton.a(ControlScene.o)
        __gnu_cxx::new_allocator<operation_t*>::deallocate(operation_t**, unsigned long) in libSkeleton.a(ControlScene.o)
    "operator new(unsigned long)", referenced from:
        -[ControlScene init] in libSkeleton.a(ControlScene.o)
        __gnu_cxx::new_allocator<operation_t*>::allocate(unsigned long, void const*) in libSkeleton.a(ControlScene.o)
        __gnu_cxx::new_allocator<operation_t>::allocate(unsigned long, void const*) in libSkeleton.a(ControlScene.o)
    "std::__throw_bad_alloc()", referenced from:
        __gnu_cxx::new_allocator<operation_t*>::allocate(unsigned long, void const*) in libSkeleton.a(ControlScene.o)
        __gnu_cxx::new_allocator<operation_t>::allocate(unsigned long, void const*) in libSkeleton.a(ControlScene.o)
    "___cxa_rethrow", referenced from:
        std::_Deque_base<operation_t, std::allocator<operation_t> >::_M_create_nodes(operation_t**, operation_t**) in libSkeleton.a(ControlScene.o)
        std::_Deque_base<operation_t, std::allocator<operation_t> >::_M_initialize_map(unsigned long) in libSkeleton.a(ControlScene.o)
    "___cxa_end_catch", referenced from:
        std::_Deque_base<operation_t, std::allocator<operation_t> >::_M_create_nodes(operation_t**, operation_t**) in libSkeleton.a(ControlScene.o)
        std::_Deque_base<operation_t, std::allocator<operation_t> >::_M_initialize_map(unsigned long) in libSkeleton.a(ControlScene.o)
    "___gxx_personality_v0", referenced from:
        ___gxx_personality_v0$non_lazy_ptr in libSkeleton.a(ControlScene.o)
        ___gxx_personality_v0$non_lazy_ptr in libSkeleton.a(MenuLayer.o)
    "___cxa_begin_catch", referenced from:
        std::_Deque_base<operation_t, std::allocator<operation_t> >::_M_create_nodes(operation_t**, operation_t**) in libSkeleton.a(ControlScene.o)
        std::_Deque_base<operation_t, std::allocator<operation_t> >::_M_initialize_map(unsigned long) in libSkeleton.a(ControlScene.o)
    ld: symbol(s) not found
    collect2: ld returned 1 exit status
    ```

    If anybody could offer some insight as to why these problems are occurring, I'd appreciate it. Thanks, helixed
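    For what it's worth, every missing symbol above (operator new/delete, std::__throw_bad_alloc, the ___cxa_* calls, ___gxx_personality_v0) lives in the C++ runtime, which suggests the consuming app is being linked as plain C/Objective-C. A common workaround (my assumption about the fix, not something stated in the post; the file name is hypothetical) is to force the app target to link with the C++ driver:

    ```cpp
    // dummy.mm -- an empty Objective-C++ file added to the Chooser target.
    // Its mere presence makes Xcode drive the final link step with the C++
    // compiler driver, pulling in the C++ runtime (operator new/delete,
    // exception handling, __gxx_personality_v0) that libSkeleton.a needs.
    // Alternatively, adding -lstdc++ to the target's "Other Linker Flags"
    // is often suggested for the same symptom.
    ```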

  • Adding a valuetype to IDL, compile and it fails with "No factory found"

    - by jim
    I can't figure out why the client keeps complaining about not finding the factory. I've tried the IDL with and without the "factory" keyword and that didn't change the behavior. The SDMGeoVT IDL matches other objects used (which run successfully). The SDMGeoVT classes generated match other generated classes in regards to inheritance and methods. The idlj compiler runs without error. I implement the function on the server, and I see the server code run and serialize the object over the wire (the server code runs fine). The client bombs with the following stack trace (the first couple of lines are from the jacORB library). I've created a small app just to compile and test the code (ArrayClient & ArrayServer); the base app (from the jacORB demo) works fine. I've tried using the base class OFBaseVT and a single object (SDMGeoVT vs the list return) and have the same issue.

    ```
    2010-05-27 15:37:11.813 FINE read GIOP message of size 100 from ClientGIOPConnection to 127.0.0.1:47030 (1e4853f)
    2010-05-27 15:37:11.813 FINE read GIOP message of size 100 from ClientGIOPConnection to 127.0.0.1:47030 (1e4853f)
    org.omg.CORBA.MARSHAL: No factory found for: IDL:pl/SDMGeoVT:1.0
        at org.jacorb.orb.CDRInputStream.read_untyped_value(CDRInputStream.java:2906)
        at org.jacorb.orb.CDRInputStream.read_typed_value(CDRInputStream.java:3082)
        at org.jacorb.orb.CDRInputStream.read_value(CDRInputStream.java:2679)
        at com.helloworld.pl.SDMGeoVTHelper.read(SDMGeoVTHelper.java:106)
        at com.helloworld.pl.SDMGeoVTListHelper.read(SDMGeoVTListHelper.java:51)
        at com.helloworld.pl._PLManagerIFStub.getSDMGeos(_PLManagerIFStub.java:28)
        at com.helloworld.ArrayClient.<init>(ArrayClient.java:40)
        at com.helloworld.ArrayClient.main(ArrayClient.java:125)
    ```

    The IDL is as follows:

    ```
    valuetype SDMGeoVT : common::OFBaseVT {
        private string sdmName;
        private string zip;
        private string atz;
        private long long primaryDeptId;
        private string deptName;

        factory instance(in string name, in string ZIP, in string ATZ, in long long primaryDeptId, in string deptName, in string name);

        string getZIP();
        void setZIP(in string ZIP);
        string getATZ();
        void setATZ(in string ATZ);
        long long getPrimaryDeptId();
        void setPrimaryDeptId(in long long primaryDeptId);
        string getDeptName();
        void setDeptName(in string deptName);
    };

    typedef sequence<SDMGeoVT> SDMGeoVTList;

    interface PLManagerIF : PublicManagerIF {
        pl::SDMGeoVTList getSDMGeos(in framework::ITransactionHandle tHandle, in long long productionLocationId);
    };
    ```

  • How to return a single variable from a CUDA kernel function?

    - by Pooya
    I have a CUDA search function which calculates one single variable. How can I return it back?

    ```cuda
    __global__ void G_SearchByNameID(node* Node, long nodeCount, long start, char* dest, long answer)
    {
        answer = 2;
    }
    ```

    ```cuda
    cudaMemcpy(h_answer, d_answer, sizeof(long), cudaMemcpyDeviceToHost);
    cudaFree(d_answer);
    ```

    For both of these lines I get this error: error: argument of type "long" is incompatible with parameter of type "const void *"
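    For context, here is the usual pattern for getting a single value out of a kernel (a sketch of the standard approach, not a fix quoted from the post; the error above suggests h_answer and d_answer were declared as plain long rather than pointers). The kernel takes a device pointer, writes the result through it, and the host copies the value back:

    ```cuda
    #include <cuda_runtime.h>

    // Minimal sketch: the kernel writes its single result through a device pointer.
    __global__ void G_SearchByNameID(long *answer)
    {
        *answer = 2;
    }

    void run(void)
    {
        long  h_answer = 0;      // host-side copy of the result
        long *d_answer = NULL;   // device-side result buffer (a pointer!)

        cudaMalloc((void **)&d_answer, sizeof(long));
        G_SearchByNameID<<<1, 1>>>(d_answer);
        cudaMemcpy(&h_answer, d_answer, sizeof(long), cudaMemcpyDeviceToHost);
        cudaFree(d_answer);
    }
    ```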

  • Call/Ret in x86 assembly embedded in C++

    - by SP658
    This is probably trivial, but for some reason I can't get it to work. It's supposed to be a simple function that changes the last byte of a dword to 'AA' (10101010), but nothing happens when I call the function; it just returns my original dword.

    ```cpp
    __declspec(naked) long function(unsigned long inputDWord, unsigned long *outputDWord)
    {
        _asm{
            mov ebx, dword ptr[esp+4]
            push ebx
            call SET_AA
            pop ebx
            mov eax, dword ptr[esp+8]
            mov dword ptr[eax], ebx
        }
    }

    __declspec(naked) unsigned long SET_AA( unsigned long inputDWord )
    {
        __asm{
            mov eax, [esp+4]
            mov al, 0xAA
            ret
        }
    }
    ```
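    As a point of reference, the operation being attempted here, forcing the low byte of a dword to 0xAA, can be expressed without assembly at all (a sketch of the equivalent logic in plain C, not the poster's code):

    ```c
    /* Set the least-significant byte of a 32-bit value to 0xAA. */
    unsigned long set_aa(unsigned long input)
    {
        return (input & 0xFFFFFF00UL) | 0xAAUL;
    }
    ```

    Comparing against a reference implementation like this makes it easier to see whether the assembly version actually stores its result through outputDWord.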

  • Problem with Precision floating point operation in C

    - by Microkernel
    Hi Guys, For one of my course projects I started implementing a "Naive Bayesian classifier" in C. My project is to implement a document classifier application (especially for spam) using huge training data. Now I have a problem implementing the algorithm because of the limitations of C's datatypes. (The algorithm I am using is given here: http://en.wikipedia.org/wiki/Bayesian_spam_filtering)

    PROBLEM STATEMENT: The algorithm involves taking each word in a document and calculating the probability of it being a spam word. If p1, p2, p3 .... pn are the probabilities of word-1, 2, 3 ... n, the probability of the doc being spam is calculated using

    p = (p1*p2*...*pn) / (p1*p2*...*pn + (1-p1)*(1-p2)*...*(1-pn))

    Here, a probability value can very easily be around 0.01, so even if I use the datatype "double" my calculation will go for a toss. To confirm this I wrote the sample code given below.

    ```c
    #define PROBABILITY_OF_UNLIKELY_SPAM_WORD (0.01)
    #define PROBABILITY_OF_MOSTLY_SPAM_WORD   (0.99)

    int main()
    {
        int index;
        long double numerator = 1.0;
        long double denom1 = 1.0, denom2 = 1.0;
        long double doc_spam_prob;

        /* Simulating FEW unlikely spam words */
        for (index = 0; index < 162; index++)
        {
            numerator = numerator * (long double)PROBABILITY_OF_UNLIKELY_SPAM_WORD;
            denom2 = denom2 * (long double)PROBABILITY_OF_UNLIKELY_SPAM_WORD;
            denom1 = denom1 * (long double)(1 - PROBABILITY_OF_UNLIKELY_SPAM_WORD);
        }

        /* Simulating lots of mostly definite spam words */
        for (index = 0; index < 1000; index++)
        {
            numerator = numerator * (long double)PROBABILITY_OF_MOSTLY_SPAM_WORD;
            denom2 = denom2 * (long double)PROBABILITY_OF_MOSTLY_SPAM_WORD;
            denom1 = denom1 * (long double)(1 - PROBABILITY_OF_MOSTLY_SPAM_WORD);
        }

        doc_spam_prob = (numerator / (denom1 + denom2));
        return 0;
    }
    ```

    I tried float, double and even long double datatypes but still have the same problem. Hence, say in a 100K-word document I am analyzing, if just 162 words have a 1% spam probability and the remaining 99838 are conspicuously spam words, my app will still say it is a non-spam doc because of the precision error (the numerator easily goes to ZERO)!!! This is the first time I am hitting such an issue. So how exactly should this problem be tackled?
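    Not part of the question, but for readers: the standard remedy for this underflow (my suggestion, not the poster's) is to do the arithmetic in log space, where the long products become sums. With eta = sum(ln(1-p_i) - ln(p_i)), the formula above rearranges to p = 1 / (1 + e^eta), and no intermediate value ever underflows:

    ```c
    #include <math.h>

    /* Combine per-word spam probabilities in log space.
     * p = prod(p_i) / (prod(p_i) + prod(1-p_i)) = 1 / (1 + exp(eta))
     * with eta = sum(log(1-p_i) - log(p_i)). */
    double spam_probability(const double *p, int n)
    {
        double eta = 0.0;
        for (int i = 0; i < n; i++)
            eta += log(1.0 - p[i]) - log(p[i]);
        return 1.0 / (1.0 + exp(eta));
    }
    ```

    With the numbers from the post (162 words at 0.01 and 1000 words at 0.99), eta comes out strongly negative, exp(eta) is effectively zero, and the document correctly gets a spam probability of about 1.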


  • VirtualQuery gives illegal result. Is it a bug?

    - by Shimon Newman
    My code:

    ```cpp
    MEMORY_BASIC_INFORMATION meminf;
    ::VirtualQuery(box.pBits, &meminf, sizeof(meminf));
    ```

    The results:

    ```
    meminf:
        BaseAddress        0x40001000    void *
        AllocationBase     0x00000000    void *
        AllocationProtect  0x00000000    unsigned long
        RegionSize         0x0de0f000    unsigned long
        State              0x00010000    unsigned long
        Protect            0x00000001    unsigned long
        Type               0x00000000    unsigned long
    ```

    Notes: (1) AllocationBase is NULL while BaseAddress is not NULL. (2) AllocationProtect is 0 (not a protection value). Is it a bug in VirtualQuery?
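    For readers hitting the same thing, a likely explanation (my reading of the dump, not confirmed in the post): State 0x00010000 is MEM_FREE, and MSDN documents that for free regions the AllocationBase, AllocationProtect and Type members are undefined, so the zeros are expected rather than a bug. A minimal sketch of how one might distinguish that case:

    ```c
    #include <windows.h>
    #include <stdio.h>

    /* Query an address and report whether it falls in a free region,
     * in which case the allocation-related fields carry no information. */
    void query_region(void *p)
    {
        MEMORY_BASIC_INFORMATION mi;

        if (VirtualQuery(p, &mi, sizeof(mi)) == 0) {
            printf("VirtualQuery failed: %lu\n", GetLastError());
            return;
        }
        if (mi.State == MEM_FREE)
            printf("%p lies in a FREE region; AllocationBase et al. are undefined.\n", p);
        else
            printf("%p: allocation base %p, protect 0x%lx\n", p, mi.AllocationBase, mi.Protect);
    }
    ```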

  • Parsing command line options in Perl

    - by Jay Gridley
    Hi guys, I am parsing command line options in Perl using Getopt::Long. I am forced to use the prefix - (one dash) for short options (-s) and -- (double dash) for long options (e.g. --input=file). The problem is that there is one special option (-r=) which is long-option-like in that it requires an argument, but it has to take a one-dash (-) prefix, not a double-dash (--) prefix like the other long options. Is it possible to set up Getopt::Long to accept this?

  • Developer’s Life – Every Developer is a Batman

    - by Pinal Dave
    Batman is one of the darkest superheroes in the fantasy canon. He does not come to his powers through any sort of magical coincidence or radioactive insect, but through a lot of psychological scarring caused by witnessing the death of his parents. Despite his dark back story, he possesses a lot of admirable abilities that I feel bear comparison to developers. Batman has the distinct advantage that his alter ego, Bruce Wayne, is a millionaire (or billionaire in today's reboots). This means that he can spend his time working on his athletic abilities, building a secret lair, and investing his money in cool tools. This might not be true for developers (well, most developers), but I still think there are many parallels. So how are developers like Batman? Well, read on for my list of reasons.

    Develop Skills: Batman works on his skills. He didn't get the strength to scale Gotham's skyscrapers by inheriting his powers or suffering an industrial accident. Developers also hone their skills daily. They might not be doing pull-ups and scaling buildings, but I think their skills are just as impressive.

    Clear Goals: Batman is driven to build a better Gotham. He knows that the criminal who killed his parents was a small-time thief, not a super villain, so he has larger goals in mind than simply chasing one villain. He wants his city as a whole to be better. Developers are also driven to make things better. It can be easy to get hung up on one problem, but in the end it is best to focus on the well-being of the system as a whole.

    Ultimate Team Players: Batman is the hero Gotham needs, even when that means appearing to be the bad guy. Developers probably know that feeling well. Batman takes the fall for a crime he didn't commit, and developers often have to deliver bad news about the limitations of their networks and servers. It's not always a job filled with glory and thanks, but someone has to do it.

    Always Ready: Batman and the Boy Scouts have this in common: they are always prepared. Let's add developers to this list. Batman has an amazing tool belt with gadgets and gizmos, and let's not even get into all the functions of the Batmobile! Developers' tools might be the knowledge and skills they have developed, not things they can carry in a utility belt, but that doesn't make them any less impressive.

    100% Dedication: Bruce Wayne cultivates the personality of a playboy, never keeping the same girlfriend for long and spending his time partying. Even though he hides it, his driving force is his deep concern and love for his friends and the city as a whole. Developers also care a lot about their company and its employees, even when it is driving them crazy. You do your best work when you care about your job on a personal level.

    Quality Output: Batman believes the city deserves to be saved. The citizens might have a love-hate relationship with both Batman and Bruce Wayne, and employees might not always appreciate developers. Batman and developers, though, keep working for the best of everyone.

    I hope you are all enjoying reading about developers-as-superheroes as much as I am enjoying writing about them. Please tell me how else developers are like superheroes in the comments, especially if you know any developers who are faster than a speeding bullet and can leap tall buildings in a single bound.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Developer, Superhero

  • Developer’s Life – Every Developer is a Superman

    - by Pinal Dave
    I enjoyed comparing developers to Spiderman so much that I have decided to continue the trend and encourage some of my favorite people (developers) with another favorite superhero: Superman. Superman is probably the most famous superhero, and one of the most inspiring. Everyone has their own favorite, but Superman has been the longest-enduring of all comic book characters. Clark Kent has inspired multiple movie series, TV shows, books, cartoons, and costumes. Superman's enduring popularity has been attributed to his superhuman strength, integrity, dedication to good, and his humility in keeping his identity a secret.

    So how are developers like Superman? Well, read on for my list of reasons.

    Secret Identities: They have secret identities. I'm not saying that all developers wear thick glasses and go by an alias like "Clark Kent." But developers certainly work in the background, making sure everything runs smoothly, often without recognition. Like Superman, when they have done their job right, no one knows they were there.

    Working Alone: You don't have to work alone. Superman doesn't have a sidekick like Robin or Batgirl, but he is a major player in the Justice League. Developers have amazing skills, and they shouldn't be afraid to unite those skills to solve some of the world's major problems (like slow networks).

    Daily Inspiration: Developers are inspiring. Clark Kent works at The Daily Planet, Metropolis' newspaper, which is lucky because he can keep some of the publicity Superman inspires under wraps. Developers might go unnoticed sometimes, but when people hear about some of the tasks they accomplish on a daily basis, it inspires awe.

    Discover Your Superpowers: You have to discover your superpowers. Clark Kent didn't just wake up one morning with the full understanding that he could fly, leap tall buildings in a single bound, and was stronger than a speeding locomotive. He slowly discovered these powers (after a few comic-book-worthy misunderstandings!). Developers are always learning and growing as well. You probably won't wake up with superpowers either, but years of practice and continuing education can get you close.

    Every Day is a New Day: The story continues. The Superman comic books are still being printed, and have been in print since 1938. There have been two TV series (one, Smallville, was on TV for ten seasons) and multiple cartoon adaptations. There have been multiple movies, with many different actors. A new reboot came out last year, and another is set to premiere in 2016. So, developers, when you are having a bad day or a problem seems unsolvable, remember: the story will continue! There is always tomorrow.

    I hope you are all enjoying reading about developers-as-superheroes as much as I am enjoying writing about them. Please tell me how else developers are like superheroes in the comments, especially if you know any developers who are faster than a speeding bullet and can leap tall buildings in a single bound.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Developer, Superhero

  • Challenge 19 – An Explanation of a Query

    - by Dave Ballantyne
    I have received a number of requests for an explanation of my winning query in TSQL Challenge 19. This involved traversing a hierarchy of employees and rolling a count of orders from subordinates up to superiors. The first concept I shall address is the hierarchyId, which is constructed within the CTE called cteTree. cteTree is a recursive CTE that will expand the parent-child hierarchy of the personnel in the table @emp. One useful feature of a recursive CTE is that data can be 'passed' from the parent to the child rows. The hierarchyId column is similar to the hierarchyId data type that was introduced in SQL Server 2008 and represents the position of the person within the organisation.

    Let us start with a simplistic example. Albert manages Bob and Eddie. Bob manages Carl and Dave. The hierarchyId will represent each person's position in this relationship in a single field. In this simple example we could append the user IDs together into a varchar field. This would enable us to select a branch of the tree by filtering with WHERE hierarchyId LIKE '1,2%' to select Bob and all his subordinates. Naturally, this is not comprehensive enough to provide a full solution, but as opposed to concatenating the IDs together into a varchar column, we can apply the same theory to a varbinary. By CASTing the IDs into a datatype of varbinary(4) (4 bytes being what is used to store an integer) and building a hierarchyId from those, we get the same effect. The important point to bear in mind for later in the query is that the binary data generated is 'byte order comparable', i.e. we can ORDER a dataset by it and the resulting data will be in the order required.

    Now would probably be a good time to download the example file and, after the CTE 'cteTree', uncomment the line 'select * from cteTree'. Mark this and all prior code and execute. This will show you how this theory directly relates to the actual challenge data. The only deviation from the above is that instead of using the ID of an employee, I have used the row_number() ranking function to order each level by LastName, FirstName. This enables me to order by the hierarchyId in the final result set so that the result set is in the required order. Your output should be something like the below. Notice also the 'Level' column that contains the depth of the employee within the tree. I would encourage you to 'play' with the query; change the order in the row_number() or the length of the cast in the hierarchyId to see how that affects the outcome.

    The next CTE, 'cteTreeWithOrderCount', is a join between cteTree and the @ord table, and COUNTs the number of orders per employee. A LEFT JOIN is employed here to account for the occasion where an employee has made no sales. Executing 'select * from cteTreeWithOrderCount' will return the result set as below. The order here is unimportant, as this is only a staging point of the data; only the final result set in a CTE chain needs an ORDER BY clause, unless TOP is utilised.

    cteExplode joins the above result set to the tally table (Nums) for Level occurrences. So, if Level is 2 then 2 rows are required. This is done to expand the dataset and to create a new column (PathInc), which contains the (n+1) integers contained within the hierarchyId. For example, with the data for Robert King as given above, the below 3 rows will be returned. From this you can see that the PathInc column now contains the values for Andrew Fuller and Steven Buchanan, who are Robert King's superiors within the tree. Finally, cteSumUp sums the orders for each person and their subordinates using the PathInc generated above, and the final select does the final simple mathematics and filters to restrict the result set to only the 'original' row per employee.

  • Big Data – Buzz Words: What is MapReduce – Day 7 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned what Hadoop is. In this article we will take a quick look at one of the four most important buzz words that go around Big Data: MapReduce.

    What is MapReduce? MapReduce was designed by Google as a programming model for processing large data sets with a parallel, distributed algorithm on a cluster. Though MapReduce was originally Google proprietary technology, it has become quite a generalized term in recent times. MapReduce comprises a Map() and a Reduce() procedure. The Map() procedure performs a filtering and sorting operation on the data, whereas the Reduce() procedure performs a summary operation on the data. The model is based on modified concepts of the map and reduce functions commonly available in functional programming. Libraries implementing the Map() and Reduce() procedures have been written in many different languages. The most popular free implementation of MapReduce is Apache Hadoop, which we will explore tomorrow.

    Advantages of MapReduce Procedures: The MapReduce framework usually contains distributed servers and runs various tasks in parallel. There are various components which manage the communication between the various data nodes and provide high availability and fault tolerance. Programs written in the MapReduce functional style are automatically parallelized and executed on commodity machines. The MapReduce framework takes care of the details of partitioning the data and executing the processes on distributed servers at run time. During this process, if there is any disaster, the framework provides high availability and other available nodes take over the responsibility of the failed node. As you can clearly see, the MapReduce framework provides much more than just the Map() and Reduce() procedures; it provides scalability and fault tolerance as well. A typical implementation of the MapReduce framework processes many petabytes of data across thousands of processing machines.

    How Does the MapReduce Framework Work? A typical MapReduce deployment contains petabytes of data and thousands of nodes. Here is a basic explanation of the MapReduce procedures over this massive commodity of servers.

    Map() Procedure: There is always a master node in this infrastructure which takes an input. Right after taking the input, the master node divides it into smaller sub-inputs, or sub-problems. These sub-problems are distributed to worker nodes. A worker node processes them and does the necessary analysis. Once a worker node completes the process on its sub-problem, it returns the result back to the master node.

    Reduce() Procedure: All the worker nodes return the answers to the sub-problems assigned to them to the master node. The master node collects the answers and once again aggregates them into the answer to the original big problem which was assigned to the master node.

    The MapReduce framework performs the above Map() and Reduce() procedures in parallel and independently of each other. All the Map() procedures can run in parallel, and once each worker node has completed its task it sends the result back to the master node to compile into a single answer. This particular procedure can be very effective when it is implemented on a very large amount of data (Big Data).
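    To make the model concrete, here is a toy single-process word count in plain C (purely illustrative, not Hadoop API code): map() emits a (word, 1) pair per word, a sort stands in for the shuffle step that groups equal keys, and the reduce pass sums each group.

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define MAX_PAIRS 1024

    typedef struct { char key[32]; int value; } pair_t;

    static pair_t pairs[MAX_PAIRS];
    static int npairs = 0;

    /* Map(): emit one (word, 1) pair per input word. */
    static void map(const char *word)
    {
        strncpy(pairs[npairs].key, word, sizeof(pairs[npairs].key) - 1);
        pairs[npairs].key[sizeof(pairs[npairs].key) - 1] = '\0';
        pairs[npairs].value = 1;
        npairs++;
    }

    /* Shuffle: group identical keys together (here, a simple sort). */
    static int cmp(const void *a, const void *b)
    {
        return strcmp(((const pair_t *)a)->key, ((const pair_t *)b)->key);
    }

    int main(void)
    {
        const char *input[] = { "big", "data", "big", "table", "data", "big" };

        for (int i = 0; i < 6; i++)
            map(input[i]);

        qsort(pairs, npairs, sizeof(pair_t), cmp);

        /* Reduce(): sum the values over each run of identical keys. */
        for (int i = 0; i < npairs; ) {
            int j = i, sum = 0;
            while (j < npairs && strcmp(pairs[j].key, pairs[i].key) == 0)
                sum += pairs[j++].value;
            printf("%s\t%d\n", pairs[i].key, sum);
            i = j;
        }
        return 0;
    }
    ```

    The output is big 3, data 2, table 1; in a real framework the map and reduce calls would run on different machines, with the shuffle moving data between them.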
    The MapReduce framework has five different steps:
    - Preparing the Map() input
    - Executing the user-provided Map() code
    - Shuffling the Map output to the Reduce processors
    - Executing the user-provided Reduce code
    - Producing the final output

    Here is the dataflow of the MapReduce framework:
    - Input Reader
    - Map Function
    - Partition Function
    - Compare Function
    - Reduce Function
    - Output Writer

    In a future blog post of this 31-day series we will explore the various components of MapReduce in detail.

    MapReduce in a Single Statement: MapReduce is equivalent to the SELECT and GROUP BY of a relational database, applied to a very large database.

    Tomorrow: In tomorrow's blog post we will discuss the buzz word HDFS.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

  • Samba: share home directories when home directories are symbolic links

    - by Owen
    I have set up a new Ubuntu 9.10 system for five users. In the system is a large LVM volume where all the data is to be kept; the main system disk is not for this purpose, so I attempted to move the home directories using

    ```
    usermod -d /var/data/username -m
    ```

    and started creating my shares for these new home locations. But then I thought: hey, Samba has built-in home directory sharing! So I enabled that, and it didn't work: the shares were not published to the network. Only the share for user 'owen' was published; his folder hadn't been moved. So I thought: maybe Samba home sharing only works for default home locations, so how about I move the home directories back to where they were, and then make them symlinks?

    ```
    root@boxenmkiv:/home# ls -l
    total 4
    lrwxrwxrwx 1 brett brett   25 2010-04-03 08:48 brett -> /var/data/brett/
    lrwxrwxrwx 1 carly carly   23 2010-04-03 08:48 carly -> /var/data/carly/
    lrwxrwxrwx 1 dave  dave    21 2010-04-03 08:48 dave -> /var/data/dave/
    lrwxrwxrwx 1 kate  kate    23 2010-04-03 08:47 kate -> /var/data/kate/
    drwxr-xr-x 4 owen  owen  4096 2010-04-03 08:44 owen
    ```

    Like so. Still no go. The only user share which is published to the network is 'owen', who as you can see above has not had his home directory moved. I have also added the following to my smb.conf:

    ```
    [global]
    follow symlinks = yes
    wide symlinks = yes
    unix extensions = no
    ```

    With no luck. Am I going about doing this the entirely wrong way? Should I just give up and manually create shares for the users? Thanks in advance.
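    A passing observation on the config above (my assumption, not something from the post): the smb.conf manpage documents the relevant parameter as "wide links", so a spelling like "wide symlinks" would be silently ignored. A minimal sketch of the global section as Samba expects it:

    ```
    # smb.conf sketch, assuming the parameter names from the Samba manpage
    [global]
        follow symlinks = yes
        wide links = yes        ; note: "wide links", not "wide symlinks"
        unix extensions = no
    ```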

  • SQL SERVER – List of All the Samples Database Available to Download for FREE

    - by Pinal Dave
    It is very common to have a sample database for any database product. Different companies keep improving their products and keep coming up with innovations, and to demonstrate the capability of their new enhancements they need sample databases. Microsoft has various sample databases available for free download for its SQL Server product. I have collected them here in a single blog post.

    - Download an AdventureWorks Database: The AdventureWorks OLTP database supports standard online transaction processing scenarios for a fictitious bicycle manufacturer (Adventure Works Cycles). Scenarios include Manufacturing, Sales, Purchasing, Product Management, Contact Management, and Human Resources.
    - Coconut Dal: Coconut Dal is a lightweight data access layer, for use in projects where the Entity Framework cannot be used or Microsoft's Enterprise Library Data Block is unsuitable. Anyone who is handwriting ADO.NET should use a library instead, and Coconut Dal might be the answer.
    - DataBooster – Extension to ADO.NET Data Provider: The dbParallel DataBooster library is a high-performance extension to the ADO.NET Data Provider, and includes two aspects: 1) a slimmed-down API encapsulation which simplifies the most common data access operations (DbConnection -> DbCommand -> DbParameter -> DbDataReader) into a single class, DbAccess, to help applications with a clean DAL and avoid over-packing and redundant copying of transferred data; 2) a booster for writing mass data to a database, based on a rational utilization of database concurrency and an effective utilization of network bandwidth.
    - Tabular AMO 2012: The sample is made of two parts. The first part is a library of functions to manage tabular models, AMO2Tabular V2. The second part is a sample to build a tabular model, AdventureWorks Tabular AMO 2012, using the AMO2Tabular library; the created model is similar to the AdventureWorks Tabular Model 2012.
    - SQL Server Analysis Services Product Samples: SQL Server Analysis Services provides a unified and integrated view of all your business data as the foundation for all of your traditional reporting, online analytical processing (OLAP) analysis, Key Performance Indicator (KPI) scorecards, and data mining.
    - Analysis Services Samples for SQL Server 2008 R2: This release is dedicated to the samples that ship for Microsoft SQL Server 2008 R2. For many of these samples you will also need to download the AdventureWorks family of databases.
    - SQL Server Reporting Services Product Samples: This project contains Reporting Services samples released with the Microsoft SQL Server product, in five categories: Application Samples, Extension Samples, Model Samples, Report Samples, and Script Samples. If you are interested in contributing Reporting Services samples, please let us know by posting in the developers' forum.
    - Reporting Services Samples for SQL Server 2008 R2: This release is dedicated to the samples that ship for Microsoft SQL Server 2008 R2 PCU1. For many of these samples you will also need to download the AdventureWorks family of databases.
    - SQL Server Integration Services Product Samples: This project contains Integration Services samples released with the Microsoft SQL Server product, in two categories: Package Samples and Programming Samples. If you are interested in contributing Integration Services samples, please let us know by posting in the developers' forum.
    - Integration Services Samples for SQL Server 2008 R2: This release is dedicated to the samples that ship for Microsoft SQL Server 2008 R2. For many of these samples you will also need to download the AdventureWorks family of databases.
    - Windows Azure SQL Reporting Admin Sample: The SQLReportingAdmin sample for Windows Azure SQL Reporting demonstrates the usage of the SQL Reporting APIs, and manages (adds/updates/deletes) permissions of SQL Reporting users.
    - Windows Azure SQL Reporting ReportViewer-SOAP API usage sample: These sample projects demonstrate how to embed a Microsoft ReportViewer control that points to reports hosted on SQL Reporting report servers, and how to use the SQL Reporting SOAP APIs in your Windows Azure web application.
    - Enterprise Library 5.0 – Integration Pack for Windows Azure: This NuGet package contains a zip file with the source code for the Enterprise Library Integration Pack for Windows Azure.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Sample Database

  • SQL SERVER – 3 Challenges for DBA and Smart Solutions

    - by Pinal Dave
    Developer’s life is never easy. DBA’s life is even crazier.

    DBA's Life: When developers wake up in the morning, most of the time they have no idea what different challenges they are going to face that day. Of course, most developers know the project and roadmap they are working on; however, they have no clue what coding challenges they are going to face that day. A DBA's life is even crazier. When DBAs wake up in the morning, they are often thankful that they were not disturbed during the night due to server issues. The very next thing they wish is that they will not face a challenge they can't solve that day. The problems DBAs face every single day are mostly unpredictable, and they just have to solve them as they come. Though the life of a DBA is not always bad: there are always ways and methods to overcome various challenges. Let us see three of the challenges and how a DBA can use various tools to overcome them.

    Challenge #1, Synchronize Data Across Servers: A very common challenge DBAs receive is that they have to synchronize data across servers. If you try to write that up manually, it may take forever to accomplish the task; it is nearly impossible to do with T-SQL alone. However, thankfully there are tools like dbForge Studio which can save the day and synchronize data across servers. Read my detailed blog post about the same over here: SQL SERVER – Synchronize Data Exclusively with T-SQL.

    Challenge #2, SQL Report Builder: DBAs are often asked to build reports on the go. It really annoys DBAs, but hardly anyone cares about it. No matter how busy a DBA is, they are just called upon to build reports on things at very short notice. I personally like to avoid any task which is handed to me out of the blue, and building reports can be boring. I would rather spend time on high availability, disaster recovery, or performance tuning than on building reports. I use a third-party tool when I have to work with SQL reports; some tools have extended reporting capabilities, a group of products which includes the SQL report builder built into dbForge Studio for SQL Server. I have blogged about this earlier over here: SQL SERVER – SQL Report Builder in dbForge Studio for SQL Server.

    Challenge #3, Work with the OTHER Database: The manager does not understand that MySQL is different from SQL Server and SQL Server is different from Oracle; for them everything is the same. Hundreds of times in my career I have faced a situation where I am given a database to manage, or some task to do, when the regular DBA is on vacation or leave. When I try to explain that I do not understand the underlying technology, I am usually told that my manager trusts me and I can do anything. Honestly, I can't, but I hardly dare to argue. I fall back on a third-party tool to manage a database when it is not in my comfort zone. For example, I was once given a MySQL performance tuning task (at that time I did not know MySQL so well). To simplify the search for a problem query we can use the MySQL Profiler in dbForge Studio for MySQL; it provides such commands as a Query Profiling Mode and Generate Execution Plan. Here is the blog post discussing the same: MySQL – Profiler: A Simple and Convenient Tool for Profiling SQL Queries.

    Well, that's it! There were many such occasions when I have been saved by a tool. Maybe some other day I will write part 2 of this blog post.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL Tagged: Devart, SQL Tool

  • SQL SERVER – What is SSRS and Why is SSRS asked for in many Job Openings?

    - by Pinal Dave
    This example is from Beginning SSRS by Kathi Kellenberger. Supporting files are available with a free download from the www.Joes2Pros.com web site. This will be a 5-day blog series on getting started with SSRS. Today I will show the importance of SSRS to the business.

    Why is SSRS asked for in so many job openings? If you talk to an SSRS expert, it's very clear to them exactly why companies really need this invention and how it saves time and adds business value. You don't have to be an SSRS expert to know its value or to start using it; for example, you don't have to be an airline pilot to know the usefulness of modern transportation. Even the people who can't run SSRS themselves, but need the reports, can tell you why it is needed. This blog post will go into why SSRS is an important invention by showing how it improves the usage of information in your company.

    Before SSRS, there was always a need for a company to benefit from the use of its own information, and Excel spreadsheets have been a popular way to do this for a long time. With SSRS you can still use this solution and gain many other options too. A friend of mine told me a story about doing database work in the 90s for a major company and how he wished SSRS had been available back then. The Vice President of the marketing channel would often come to him just before an important meeting with the board of directors. He often needed to show how certain product sales were performing over time. All this information was in the database, so it was my friend's job to get the information out and organized into a medium the VP could use. This medium was usually Excel. The VP often had meetings all over the world where he showcased this Excel report. The solution for getting the report to the VP, wherever he was in the world, was an Excel file attached to an e-mail. This worked pretty well, but with some drawbacks. One time my friend sent the wrong file in the e-mail. A few minutes later he realized his mistake and sent another frantic e-mail to the VP, this one saying to ignore the last e-mail and use the newer one. Would the VP see the correct e-mail in time?

    If SSRS had been available, my friend could have created a solution that let the VP run the report any time he wished. The report could have been published to the company intranet, where the VP could run it from any of the offices he happened to be traveling to that month. There is a fair amount of work up front to develop and publish the report, but once that work is completed, the report can be reused as many times as needed. My friend could even be on vacation on the first day of the month and the VP could still get his real-time report. Not only could the report show the most recent data, the VP could choose to view reports of previous months with just a few clicks. A deployed SSRS report is user friendly, and can also be configured to protect reports from being run by the wrong people.

    Tomorrow's Post: Tomorrow's blog post will show how to know if you already have SSRS installed. If you want to learn SSRS in easy and simple words, I strongly recommend you get the Beginning SSRS book from Joes 2 Pros.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS

  • SQL – Quick Start with Admin Sections of NuoDB – Manage NuoDB Database

    - by Pinal Dave
    In yesterday's blog post we saw that it is extremely easy to install the NuoDB database on your local machine. Now that the application is properly set up, let us explore NuoDB a bit more and get familiar with how it works and the important areas of NuoDB that you should learn. As we have already installed NuoDB, we will quickly start with two of the important areas in NuoDB: 1) Admin and 2) Explorer. In this blog post I will explore how the Admin section of the NuoDB Console works; in the next blog post we will learn how the Explorer section works.

    Let us go to the NuoDB Console by typing the following URL in your browser: http://localhost:8080/

    It will bring you to a screen with a big Start QuickStart button. Click on the button and it will bring you to the next screen, where you will find very important information about Domain and Database Settings. It is a common habit not to read what is written on the screen and to keep clicking continue; while we are familiar with most wizards, we can easily miss a very important message on the screen. Please note the following Domain Settings and Database Settings before clicking Create Database.

    Domain Settings:
    - User: quickstart
    - Password: quickstart

    Database Settings:
    - User: dba
    - Password: goalie
    - Database: test
    - Schema: HOCKEY

    Once you click on the Create Database button it will immediately start creating the sample database. First it will start a Storage Manager, and right after that it will start a Transaction Engine. Once the engine is up, it will create a schema and sample data. On successful creation of the sample database it will show a confirmation screen.

    Now is the time to explore the NuoDB Admin or NuoDB Explorer sections. If you click on Admin, it will first show a login screen. Enter "domain" for the username and "bird" for the password. Alternatively, you can enter "quickstart" twice, for username and password; that works too. Once you enter the Admin section, on the left side you can see information about NuoDB and the Admin Console, and on the right side you can see the domain overview area. From this administrative section you can do any of the following tasks:
    - Create a view of the entire domain
    - Add and remove databases
    - Start and stop NuoDB Transaction Engines and Storage Managers
    - Monitor transactions across all the NuoDB databases

    On the right side of the Admin section we can see various information about a particular NuoDB domain. You can quickly view various alerts, find information about the number of host machines that are provisioned for the domain, and see the number of databases and processes that are running in the domain. If you click on the "1 host" link you will be able to see the various processes, CPU usage and other information. In the Processes section you can see two different types of processes: the first (with the floppy drive icon) represents a running Storage Manager process, and the second represents a running Transaction Engine process. You can click on the links for the Storage Manager and Transaction Engine to see further statistical details, right down to the last byte of the data. There are various charts available for analysis as well. I think the product is quite mature, and the user can add different monitoring charts to the Admin section. Additionally, the Admin section is the place where you can create and manage new databases.

    I hope today's tutorial gives you enough confidence to try out NuoDB and check out the various administrative activities of the database. I am personally impressed with their dashboard of various counters. For more information about how the NuoDB architecture works and what a Storage Manager or Transaction Engine does, check out this short video with NuoDB CTO Seth Proctor. In the next blog post, we will try out the Explorer section of NuoDB, which allows us to run SQL queries and write SQL code. Meanwhile, I strongly suggest you download and install NuoDB and get yourself familiar with the product.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB

  • SQLAuthority News – Top 5 Latest Microsoft Certifications of 2013 – Guest Post

    - by Pinal Dave
    With the IT job market getting more competitive by the day, certifications are a must for anyone who wishes to get a strong foothold in the industry. The Microsoft community comes up with regular updates and enhancements to its existing products to keep up with the rapidly evolving requirements of the ICT industry. We bring you a list of the latest Microsoft certifications that you must consider acquiring this year.

    MCSE: SharePoint. Learn all about Windows Server 2012 and Microsoft SharePoint 2013, which brings an advanced set of features to the fore in this latest version. It introduces new capabilities for business intelligence, social media, branding, search, identity management and mobile devices, among other features. Enjoy a great user experience with sharing and collaboration in a community forum, within a pixel-perfect SharePoint website. Data connectivity and business intelligence tools allow users to process and access data, analyze reports, and share and collaborate with each other more conveniently.

    Microsoft Specialist: Microsoft Project 2013. The only project management system that works seamlessly with Microsoft's other applications and cloud solutions, MS Project 2013 offers more than meets the eye. It provides for easier management and monitoring of projects, so that users can ensure timely delivery while improving productivity significantly. So keep all your projects on track and collaborate with your team like never before with this enhanced release! This one's a must for all project managers.

    MCSE: Messaging. Another of Microsoft's gems is its messaging environment, which has also seen a new release: Microsoft Exchange Server 2013. Messaging administrators can take up this training and validate their expertise in Unified Messaging, Exchange Online, PowerShell and virtualization strategies through the MCSE Messaging certification in Exchange Server. If you wish to enhance the productivity and data security of your organization while being flexible and extremely efficient, this is the right certification for you.

    MCSE: Communication. An enterprise functions optimally on the strength of its information flow and communication systems. With Lync Server 2013, you can introduce a whole new world of unified communications, consisting of audio/video conferencing, dial-in, Persistent Chat, instant chat, and EDGE services, in your organization. Utilize IT to serve and support business objectives by mastering this UC technology with this latest MCSE Communication course on using Microsoft Lync Server 2013.

    MCSE: SQL Server 2012 BI Platform. The decision-making process is largely influenced by the underlying enterprise information used by management for business intelligence. Therefore, a robust business intelligence platform that anchors enterprise IT and transforms it into operational efficiencies is the need of the hour. The SQL Server 2012 BI Platform certification helps professionals implement, manage and maintain a BI database infrastructure effectively. IT professionals with BI skills are highly sought after these days.

    MCSD: Windows Store Apps. A Microsoft Certified Solutions Developer certification in Windows Store Apps validates your potential in designing interactive apps. Learn the essentials of developing Windows Store apps using HTML5 and JavaScript, and establish yourself as an ace developer capable of creating fast and fluid Metro-style apps for Windows 8 that are accessible on a variety of devices. You can also go ahead and learn the essentials of developing Windows Store apps using C# if you're already familiar with and working in the C# programming language. Developers are thus free to choose their own favorite development stream, which opens doors for them to get ready for the latest and most exciting application development platform called Windows Store apps. Software developers with these skills are in great demand in the industry today.

    In order to continue being competitive in their respective fields, it is imperative that IT personnel update their knowledge on a regular basis. Certifications are a means to achieve this goal. No longer considered an optional pre-requisite, major IT certifications such as these are now essential to stay afloat in a cut-throat industry where technologies change on a daily basis.

    This blog is written by Aruneet Anand of Koenig Solutions. Koenig Solutions does training for all of the above courses. For more information, visit the website.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: Microsoft Certifications

    Read the article

  • SQL SERVER – How to Compare the Schema of Two Databases with Schema Compare

    - by Pinal Dave
    Earlier I wrote about An Efficiency Tool to Compare and Synchronize SQL Server Databases, and it was very well received. Since that blog post I have received quite a few questions asking how, just as we compare data, we can also compare schemas and synchronize them. If you think about comparing schemas manually, it is almost impossible to do. Table schema is just one piece of the picture; if you really want to compare all the schema objects of a database (triggers, views, stored procedures, and everything else), it is simply an impossible task to do by hand. If you are a developer or database administrator who works in a production environment, then you know there are many different occasions when we have to compare database schemas. Before deploying any changes to the production server, I personally like to make a note of every single schema change and document it, so that in case of any issue I can always go back and refer to my documentation. As discussed earlier, it is practically impossible to do this without the help of third-party tools. I personally use Devart Schema Compare for this task, and it is an extremely easy tool. Let us see how it works.

    First, I have two different databases – a) AdventureWorks2012 and b) AdventureWorks2012-V1. There are a total of three changes between these databases: one table has an additional column, one table has a new index, and one stored procedure has been changed. Now let us see how dbForge Schema Compare works in this scenario. First, open dbForge Schema Compare Studio and click on New Schema Comparison. This brings you to a screen where we configure the databases to compare; I have selected the AdventureWorks2012 and AdventureWorks2012-V1 databases. On the next screen we can verify various options, but for this demonstration we will keep the defaults. We will not change anything on the schema mapping screen either, as it is not required in our case; generally, though, if you are comparing across schemas you may need it. The next screen is the most important one, as this is where we select which kinds of objects we want to compare. You can see the options available for selection; the screen lets you select objects from SQL Server 2000 through SQL Server 2012.

    Once you click Compare on the previous screen, it brings you to the results screen, which displays the differences between the two databases selected earlier. As mentioned above, there are three changes between the databases, and all three are listed here: two of the changes belong to tables and one belongs to a procedure. Let us click each of them one by one to see the differences. In the first case, we can see that one database has an additional column which did not exist before. In the second example, we can see that the AdventureWorks2012 database has an additional index. The third example is very interesting: in this case we changed the definition of the stored procedure, and the result pane shows exactly that change. dbForge Schema Compare very effectively identifies schema changes and lists them neatly for developers. Here is one more screen: this software not only compares schemas but also provides options to update or drop objects as you choose, which I think is a brilliant option. Well, I have been using Schema Compare for quite a while and have found it very useful.
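    If you only need a quick, tool-free sanity check, a hand-rolled catalog query can catch some of this drift. Here is a minimal T-SQL sketch of my own (it is not part of dbForge) that lists table columns present in AdventureWorks2012 but missing from AdventureWorks2012-V1, assuming both databases live on the same instance:

        -- Table/column pairs that exist in AdventureWorks2012
        -- but not in AdventureWorks2012-V1
        SELECT t.name AS TableName, c.name AS ColumnName
        FROM AdventureWorks2012.sys.tables AS t
        JOIN AdventureWorks2012.sys.columns AS c
            ON c.object_id = t.object_id
        EXCEPT
        SELECT t.name, c.name
        FROM [AdventureWorks2012-V1].sys.tables AS t
        JOIN [AdventureWorks2012-V1].sys.columns AS c
            ON c.object_id = t.object_id;

    A query like this covers column-level drift only; indexes, stored procedure definitions, and everything else are exactly why a dedicated schema compare tool is worth having.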
    Here are a few of the things dbForge Schema Compare can do for developers and DBAs:
      • Compare and synchronize SQL Server database schemas
      • Compare schemas of a live database and a SQL Server backup
      • Generate comparison reports in Excel and HTML formats
      • Eliminate mistakes in schema change propagation across environments
      • Track production database changes and customizations
      • Automate migration of schema changes using the command line interface
    I suggest that you try out dbForge Schema Compare and let me know what you think of this product. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL

    Read the article

  • SQL SERVER – Integrate Your Data with Skyvia – Cloud ETL Solution

    - by Pinal Dave
    In our time, data integration often becomes a key aspect of business success. For business analysts it is very important to get integrated data from various sources, such as relational databases, cloud CRMs, etc., in order to make correct and successful decisions. There are various data integration solutions on the market, and today I will tell you about one of them – Skyvia. Skyvia is a cloud data integration service which allows you to integrate data across cloud CRMs and different relational databases. It is a completely online solution and does not require anything except a browser. Skyvia provides powerful ETL tools for data import, export, replication, and synchronization for SQL Server, other databases, and cloud CRMs.

    You can use the Skyvia data import tools to load data from various sources into SQL Server (and SQL Azure). Skyvia supports cloud CRMs such as Salesforce and Microsoft Dynamics CRM, and databases such as MySQL and PostgreSQL. You can even migrate data from SQL Server to SQL Server, or from SQL Server to other databases and cloud CRMs. Additionally, Skyvia supports import of CSV files, either uploaded manually or stored on cloud file storage services such as Dropbox, Box, Google Drive, or FTP servers.

    When data import is not enough, Skyvia offers bidirectional data synchronization. With this tool, you can synchronize SQL Server data with other databases and cloud CRMs. After performing the first synchronization, Skyvia tracks data changes in the synchronized data storages; in SQL Server databases (and other relational databases) it creates additional tracking tables and triggers, which allows it to synchronize only the changed data.
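    For readers curious what trigger-based change tracking looks like in general, here is a simplified, hypothetical sketch of the pattern (the table, trigger, and column names below are my own illustration and are not Skyvia’s actual objects):

        -- Hypothetical log table recording which rows changed and how
        CREATE TABLE dbo.Customer_ChangeLog (
            LogID      INT IDENTITY(1,1) PRIMARY KEY,
            CustomerID INT       NOT NULL,  -- primary key of the tracked row
            Operation  CHAR(1)   NOT NULL,  -- 'I' = insert, 'U' = update, 'D' = delete
            ChangedAt  DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
        );
        GO
        -- Hypothetical trigger: logs one row per changed row in dbo.Customer
        CREATE TRIGGER dbo.trg_Customer_Track
        ON dbo.Customer
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            SET NOCOUNT ON;
            INSERT INTO dbo.Customer_ChangeLog (CustomerID, Operation)
            SELECT COALESCE(i.CustomerID, d.CustomerID),
                   CASE WHEN i.CustomerID IS NOT NULL
                             AND d.CustomerID IS NOT NULL THEN 'U'
                        WHEN i.CustomerID IS NOT NULL THEN 'I'
                        ELSE 'D'
                   END
            FROM inserted AS i
            FULL OUTER JOIN deleted AS d
                ON d.CustomerID = i.CustomerID;
        END;

    A synchronization run then only needs to read the log entries written since the last run instead of re-scanning entire tables.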
    Skyvia also maps records to each other by their primary key values, so it does not require different sources to have the same primary key structure; it can still match the corresponding records without adding any extra columns or changing the data structure. The only requirement for synchronization is that primary keys must be auto-generated. With Skyvia it is not necessary for data to have the same structure in the integrated data storages. Skyvia supports powerful mapping mechanisms that allow synchronizing data with completely different structures, with support for complex mathematical and string expressions, lookups, and more when mapping data. You may use data splitting – loading data from a single CSV file or source table into multiple related target tables – or you may load data from several source CSV files or tables into several related target tables. In each case Skyvia preserves the data relations, building the corresponding relations between the target data automatically.

    When you work with cloud CRM data often, the native CRM reporting and analysis tools may not be enough for you, while a vast set of professional data analysis and reporting tools is available for SQL Server. With Skyvia you can quickly copy your cloud CRM data to a SQL Server database and apply the corresponding SQL Server tools to the data. In such cases you can use the Skyvia data replication tools, which allow you to quickly copy cloud CRM data to SQL Server or other databases without customizing any mapping. You just need to specify the columns to copy data from; the target database tables are created automatically. Skyvia also offers powerful filtering settings so you replicate only the records you need.

    In addition, Skyvia provides the capability to export data from SQL Server (including SQL Azure) and other databases and cloud CRMs to CSV files. These files can either be downloaded manually or loaded to cloud file storages or an FTP server. You can use export, for example, to back up SQL Azure data to Dropbox. Any data integration operation can be scheduled for automatic execution. Thus, you can automate your SQL Azure data backup or data synchronization – just configure it once, schedule it, and benefit from automatic data integration with Skyvia. Currently, registration and use of Skyvia are completely free, so you can try it yourself and find out whether its data migration and integration tools suit you. Visit this link to register on Skyvia: https://app.skyvia.com/register Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Cloud Computing

    Read the article

  • Big Data – Beginning Big Data – Day 1 of 21

    - by Pinal Dave
    What is Big Data? I want to learn Big Data. I have no clue where and how to start learning about it. Does Big Data really mean the data is big? What are the tools and software I need to know to learn Big Data? I often receive questions like the ones mentioned above. They are good questions, and honestly, when we search online it is hard to find authoritative and authentic answers. I have been working with Big Data and NoSQL for a while, and I have decided to discuss this subject here on the blog. Over the next 21 days we will understand what is so big about Big Data.

    Big Data – Big Thing! Big Data is becoming one of the most talked about technology trends nowadays. The real challenge for big organizations is to get the maximum out of the data already available and to predict what kind of data to collect in the future. How to take the existing data and make it meaningful, so that it provides accurate insight into the past, is one of the key discussion points in many executive meetings in organizations. With the explosion of data, the challenge has gone to the next level, and now Big Data is becoming a reality in many organizations.

    Big Data – A Rubik’s Cube. I like to compare Big Data with the Rubik’s cube; I believe they have many similarities. Just like a Rubik’s cube, it has many different solutions. Let us visualize a Rubik’s cube solving challenge in which many experts are participating. If you take five Rubik’s cubes, scramble them all the same way, and give them to five different experts to solve, it is quite possible that all five people will solve them within seconds of each other, but if you pay close attention, you will notice that even though the final outcome is the same, the route taken to solve the cube is not. Every expert will start at a different place and will try to solve it with different methods. Some will solve one color first and others will solve another color first. Even though they follow the same kind of algorithm to solve the puzzle, they will start and end at different places, and their moves will differ on many occasions. It is nearly impossible for two experts to take the exact same route.

    Big Market and Multiple Solutions. Big Data is exactly like a Rubik’s cube – even though the goal of every organization and expert is the same, to get the maximum out of the data, the route and the starting point are different for each organization and expert. As organizations evaluate and architect Big Data solutions, they are also learning the ways and opportunities related to Big Data. There is no single solution to Big Data, and there is no single vendor that can claim to know all about Big Data. Honestly, Big Data is too big a concept, and there are many players – different architectures, different vendors, and different technologies.

    What is Next? In this 21-day series we will be exploring many essential topics related to Big Data. I do not claim that you will be a master of the subject after 21 days, but I do claim that I will cover the following topics in easy-to-understand language:
      • Architecture of Big Data
      • Big Data Management and Implementation
      • Different Technologies – Hadoop, MapReduce
      • Real World Conversations
      • Best Practices

    Tomorrow: In tomorrow’s blog post we will try to answer one of the most essential questions – What is Big Data?
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • SQL SERVER – Installing SQL Server Data Tools and SSRS

    - by Pinal Dave
    This example is from Beginning SSRS by Kathi Kellenberger. Supporting files are available as a free download from the www.Joes2Pros.com web site. If you have installed SQL Server but are missing the Data Tools or Reporting Services, add them as follows:
    1. Double-click the SQL Server 2012 installation media and click the Installation link on the left to view the installation options.
    2. Click the top link, New SQL Server stand-alone installation or add features to an existing installation.
    3. Follow the SQL Server Setup wizard until you get to the Installation Type screen, and on that screen select Add features to an existing instance of SQL Server 2012.
    4. Click Next to move to the Feature Selection page and select Reporting Services – Native and SQL Server Data Tools. If the Management Tools have not been installed, go ahead and choose them as well.
    5. Continue through the wizard and reboot the computer at the end of the installation if instructed to do so.

    Configure Reporting Services: If you installed Reporting Services during the installation of the SQL Server instance, SSRS is configured automatically for you. If you install SSRS later, you will have to configure it as a subsequent step:
    1. Click Start > All Programs > Microsoft SQL Server 2012 > Configuration Tools > Reporting Services Configuration Manager, and click Connect on the Reporting Services Configuration Connection dialog box.
    2. On the left-hand side of the Reporting Services Configuration Manager, click Database, then click the Change Database button on the right side of the screen.
    3. Select Create a new report server database and click Next, then click through the rest of the wizard accepting the defaults. This wizard creates two databases: ReportServer, used to store report definitions and security, and ReportServerTempDB, used as scratch space when preparing reports for user requests.
    4. Click Web Service URL on the left-hand side of the Reporting Services Configuration Manager and click Apply to accept the defaults. If the Apply button is grayed out, move on to the next step. This step sets up the SSRS web service, the program that runs in the background and communicates between the web page (which you will set up next) and the databases.
    5. Finally, select the Report Manager URL link on the left, accept the default settings, and click Apply. If the Apply button was already grayed out, this means SSRS was already configured. This step sets up the Report Manager web site where you will publish reports.
    You may be wondering whether you must also install a web server on your computer. SQL Server does not require Internet Information Services (IIS), the Microsoft web server, to be installed to run Report Manager. Click Exit to dismiss the Reporting Services Configuration Manager dialog box.
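    As an optional sanity check after the configuration (my own addition, not one of the book’s steps), a quick catalog query confirms that both report server databases were created; the names below are the wizard’s defaults:

        -- Verify the two SSRS databases exist (default names)
        SELECT name, create_date
        FROM sys.databases
        WHERE name IN ('ReportServer', 'ReportServerTempDB');

    If two rows come back, the database step completed successfully.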
    Tomorrow’s Post: Tomorrow’s blog post will show how to create your first report using the Report Wizard. If you want to learn SSRS in simple, easy-to-understand language, I strongly recommend that you get the Beginning SSRS book from Joes 2 Pros. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Reporting Services, SSRS

    Read the article
