Search Results

Search found 10572 results on 423 pages for 'learning plan'.

Page 80/423 | < Previous Page | 76 77 78 79 80 81 82 83 84 85 86 87  | Next Page >

  • Database Management: Metadata is more important than you think!

    Whether it's data warehousing, MDM or business intelligence, metadata gets added to the project plan, then downgraded, and eventually dropped from the plan altogether. Failing to include metadata and metadata management in the project has far-reaching and costly repercussions throughout the organization. Read on to learn more...

    Read the article

  • Is a startup revolving around a web app still viable?

    - by NRUTA
    I have an idea for a startup that revolves around a web app. I plan to incorporate a mobile app in the future but, unlike something like Instagram, users will be more likely to access it with a desktop computer. I am currently learning Ruby on Rails but wonder if, starting in the 4th quarter of 2012, I am focusing on a technology (web development) that is becoming obsolete. Should I dump the idea for something that revolves around mobile technologies and spend my time learning something like iOS development instead?

    Read the article

  • SEO - folder or file [closed]

    - by ErmSo
    Possible Duplicate: Should I use a file extension or not? I'm creating a website with a number of pricing options. Each price plan has its own page and there is also a comparison page. As far as SEO is concerned, which of the following is better, or does it not make a difference? Option one, folders: /pricing/plans and /pricing/plans/free. Option two, files: /pricing/plans.php and /pricing/free-plan.php

    Read the article

  • Where do you earn more money (Autonomous Systems vs Distributed Systems)? [closed]

    - by Puckl
    I am interested in both topics and I can choose between them for my computer science master's. I think the distributed systems master's focuses more on software technologies and the autonomous systems master's is focused on robotics and machine learning. Do you get good jobs in the field of machine learning without a Ph.D.? I guess there are more jobs available in the software-tech world, is this right? Where do you earn more money? (It is not the only criterion, but it matters)

    Read the article

  • Avoiding Parameter Sniffing in SQL Server

    Parameter sniffing is when SQL Server compiles a stored procedure's execution plan using the first parameter values it is called with, and then reuses this plan for subsequent executions regardless of the parameters passed.
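
    A minimal sketch of the usual workarounds (the procedure and parameter below are hypothetical, not taken from the article): recompile per execution, or optimize for average density rather than the first sniffed value.

        CREATE PROCEDURE dbo.GetOrdersForCustomer  -- hypothetical procedure
            @CustomerID int
        AS
        BEGIN
            SELECT SalesOrderID, OrderDate
            FROM Sales.SalesOrderHeader
            WHERE CustomerID = @CustomerID
            -- Recompile on every call, so no sniffed plan is ever reused:
            OPTION (RECOMPILE);
            -- Alternative: OPTION (OPTIMIZE FOR (@CustomerID UNKNOWN)),
            -- which builds one plan from average column density instead of
            -- the first caller's value.
        END;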

    Read the article

  • Dental SEO - What Exactly Are Discount Dental Plans?

    A dental plan is a type of club you join, made up of providers and consumers. In a discounted plan, the providers have agreed to supply their services at a discounted rate. As a consumer, one simply has to display his or her membership card when visiting the service provider. These discounted plans do not purport to be insurance in any way; they're just discount programs.

    Read the article

  • Joins in single-table queries

    - by Rob Farley
    Tables are only metadata. They don't store data. I've written something about this before, but I want to take a viewpoint of this idea around the topic of joins, especially since it's the topic for T-SQL Tuesday this month, hosted this time by Sebastian Meine (@sqlity), who has a whole series on joins this month. Good for him – it's a great topic.

    In that last post I discussed the fact that we write queries against tables, but that the engine turns them into plans against indexes. My point wasn't simply that a table is actually just a Clustered Index (or heap, which I consider just a special type of index), but that data access always happens against indexes – never tables – and we should be thinking about the indexes (specifically the non-clustered ones) when we write our queries. I described the scenario of looking up phone numbers, and how it never really occurs to us that there is a master list of phone numbers, because we think in terms of the useful non-clustered indexes that the phone companies provide us, but anyway – that's not the point of this post.

    So a table is metadata. It stores information about the names of columns and their data types. Nullability, default values, constraints, triggers – these are all things that define the table, but the data isn't stored in the table. The data that a table describes is stored in a heap or clustered index, but it goes further than this. All the useful data is going to live in non-clustered indexes. Remember this. It's important. Stop thinking about tables, and start thinking about indexes.

    So let's think about tables as indexes. This applies even in a world created by someone else, who doesn't have the best indexes in mind for you. I'm sure you don't need me to explain the Covering Index bit – the fact that if you don't have sufficient columns "included" in your index, your query plan will either have to do a Lookup, or else it'll give up using your index and use one that does have everything it needs (even if that means scanning it). If you haven't seen that before, drop me a line and I'll run through it with you. Or go and read a post I did a long while ago about the maths involved in that decision.

    So – what I'm going to tell you is that a Lookup is a join. When I run SELECT CustomerID FROM Sales.SalesOrderHeader WHERE SalesPersonID = 285; against the AdventureWorks2012 database, I get the following plan: I'm sure you can see the join. Don't look in the query, it's not there. But you should be able to see the join in the plan. It's an Inner Join, implemented by a Nested Loop. It's pulling data in from the Index Seek, and joining that to the results of a Key Lookup. It clearly is – the QO wouldn't call it that if it wasn't really one. It behaves exactly like any other Nested Loop (Inner Join) operator, pulling rows from one side and putting a request in from the other.

    You wouldn't have a problem accepting it as a join if the query were slightly different, such as SELECT sod.OrderQty FROM Sales.SalesOrderHeader AS soh JOIN Sales.SalesOrderDetail AS sod ON sod.SalesOrderID = soh.SalesOrderID WHERE soh.SalesPersonID = 285; Amazingly similar, of course. This one is an explicit join, but the first example was just as much a join, even though you didn't actually ask for one. You need to consider this when you're thinking about your queries.

    But it gets more interesting. Consider this query: SELECT SalesOrderID FROM Sales.SalesOrderHeader WHERE SalesPersonID = 276 AND CustomerID = 29522; It doesn't look like there's a join here either, but look at the plan.
    That's not some Lookup in action – that's a proper Merge Join. The Query Optimizer has worked out that it can get the data it needs by looking in two separate indexes and then doing a Merge Join on the data that it gets. Both indexes used are ordered by the column that's indexed (one on SalesPersonID, one on CustomerID), and then by the CIX key SalesOrderID. Just like when you seek in the phone book to Farley, the Farleys you have are ordered by FirstName; these seek operations return the data ordered by the next field. This order is SalesOrderID, even though you didn't explicitly put that column in the index definition. The result is two datasets that are ordered by SalesOrderID, making them very mergeable.

    Another example is the simple query SELECT CustomerID FROM Sales.SalesOrderHeader WHERE SalesPersonID = 276; This one even prefers a Hash Match to a standard lookup! This isn't just ordinary index intersection, this is something else again! Just like before, we could imagine it better with two whole tables, but we shouldn't try to distinguish between joining two tables and joining two indexes.

    The Query Optimizer can see (using basic maths) that it's worth doing these particular operations using these two less-than-ideal indexes (because of course, the best indexes would be on both columns – a composite such as (SalesPersonID, CustomerID) – and it would still have the SalesOrderID column as part of it as the CIX key). You need to think like this too. Not in terms of excusing single-column indexes like the ones in AdventureWorks2012, but in terms of having a picture of how you'd like your queries to run. If you start to think about what data you need, where it's coming from, and how it's going to be used, then you will almost certainly write better queries.

    ...and yes, this would include when you're dealing with regular joins across multiple tables, not just joins within single-table queries.
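
    For readers who want to see the hidden join vanish, here is a hedged sketch of the covering-index fix the post alludes to (the index name is invented; AdventureWorks2012 is assumed):

        -- The Key Lookup (the hidden join) exists because CustomerID isn't in the
        -- index the seek uses. Including it makes the index covering:
        CREATE NONCLUSTERED INDEX IX_SalesOrderHeader_SalesPersonID_Covering  -- illustrative name
            ON Sales.SalesOrderHeader (SalesPersonID)
            INCLUDE (CustomerID);

        -- The first query from the post should now compile to a single Index Seek,
        -- with no Nested Loop joining to a Key Lookup:
        SELECT CustomerID
        FROM Sales.SalesOrderHeader
        WHERE SalesPersonID = 285;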

    Read the article

  • Microsoft TechEd 2010 - Day 2 @ Bangalore

    - by sathya
    Today is day 2 @ Microsoft TechEd 2010. We had a lot of technical sessions; as usual there were many tracks going on side by side, and I was attending the Web Simplified track, which comprised the following sessions:

    Developing a Scalable Media Application using ASP.NET MVC - This was kind of advanced stuff. Anyways, I couldn't understand much because this was not my cup of tea and I haven't worked on this before. ASP.NET MVC Unplugged - This was really great because this session covered MVC from the basics, showing what Model, View and Controller are and how they work, and the speaker went into the details of the same. Building RESTful Applications with the Open Data Protocol - This explained from the basics how to build RESTful services, and went on to some advanced configurations of the same. Developing Scalable Web Applications with AppFabric Caching - This session showed the integration of AppFabric with .NET web applications. Instead of using in-proc sessions, we can use AppFabric as a substitute for caching and out-of-proc session storage without writing code, just doing a little bit of configuration, which brings high scalability and performance to our applications. (But unfortunately there were no demos for this session.) Deep Dive: WCF RIA Services - This session was also an interactive one; the speaker presented from the basics of WCF, took a Book Store application as a sample and explained all the detailed concepts of linking with RIA Services.

    Apart from these sessions, some small events happened in the breaks, like discussions about technology and innovations, music, jokes, mimicry, etc. And for doing all these things, the developers were given some kool gifts / goodies like USBs, T-Shirts, etc.

    And today I got a chance to do the following certification: (70-562) Microsoft Certified Technology Specialist in .NET 3.5 Web Applications. Since I already have an MCTS in .NET 2.0, I wanted to do an MCPD, and to do that I was required to update my MCTS to the .NET 3.5 framework. I did the same, cleared it, and now I am an MCTS in .NET 3.5 Web Apps. On doing this I got a T-Shirt, and they gave me something called Learning $, worth 30 $. In various stalls, for attending each quiz or game or for referrals, we got more Learning $, which we could redeem later based on our total Learning $. I got 105 $, which I was able to redeem for a Microsoft Learning backpack, 1 free Microsoft certification offer, a laptop light and e-learning content.

    And after all these sessions and small events, we had something called Demo Extravaganza, like I mentioned yesterday. This was a great funfilled event with lots of goodies for the attendees. There were lucky draws which enabled 2 attendees to get netbooks (sponsored by Intel) and 1 attendee to get an Xbox (sponsored by Citrix). After choosing a raffle ticket in the lucky draw, they placed it on a device called Microsoft Surface, a kind of big touch-screen device; on putting the ticket on it, it detected the code of the attendee and intelligently said how many sessions that person had attended, and if he had attended more than 5 he got a netbook. This was coded by a guy called Imran.
    Apart from that, they showed demos on:

    Research by 2 Tamil Nadu students from Krishna Arts and Science College, who had taken 1200 photographs of their college from different angles and put them up in Bing Maps using Silverlight, linked with Photosynth, which showed a 3D view of their college based on the photos they uploaded. Research by Microsoft on panoramic HD views of images; one young guy from Microsoft Research showed a demo of this on Srivilliputhur Andal Temple in Tamil Nadu, with a panoramic view of the temple and the nearby places, narration of the historical information on the same, videos embedded in it, and high-definition images which we can zoom to a very detailed level. A demo of a business app with Silverlight, Business Intelligence (BI) and maps integrated, showing the sales of a particular product across locations. Some kool demos by 2 geeks who used robots to show their development talents: 2 robots fought with each other, and 2 robots danced in sync to the A.R. Rehman song Humma Humma...

    A dream home project by Raman, who is currently using it in his own home; robots are controlling his home. They showed a video on this. Here is the list of activities the robot does for him: when he reads a book, the robot automatically scans it and shows the image of the person he is reading about on the screen (TV or computer) in front of him, shows a Wikipedia page about that person, and says that the person is not in LinkedIn, asking if he wants to add him. If he sees IPL match news in the book and smiles, it understands he is interested, opens a related website and shows the current game and the scorecard. It cooks for him. It cleans the room for him whenever he leaves the house. When he is doing something and some intruder comes inside his house, his computer automatically switches his screen to show video of the person coming inside. When he wakes up, it automatically opens up the system and loads his mails with the news by the side, etc.

    Some demos of Microsoft Pivot. This was in Live Labs but is now available at getpivot.com; it's a pivoting of pictorial data based on categories and filters on the searches that we do. And finally, on filling up some feedback forms, we got T-Shirts and Microsoft Visual Studio 2010 Training Kit CDs. What's more on TechEd??? Stay tuned!!! Will update you soon on the other happenings!!

    PS: I typed a lot of content for more than an hour, but I pressed backspace, the browser went to the previous page, and all my content was lost; I was not able to retrieve it and had to type everything again.

    Read the article

  • error LNK2019 when compiling ZLib sample code

    - by Nano HE
    Hello. I created a Win32 console application in VS2010 (without selecting the precompiled header option) and inserted the code below, but the *.obj failed to link. Could you provide me more information about the error? I searched MSDN, but still can't understand it.

        #include <stdio.h>
        #include "zlib.h"

        // Demonstration of zlib utility functions

        unsigned long file_size(char *filename) {
            FILE *pFile = fopen(filename, "rb");
            fseek(pFile, 0, SEEK_END);
            unsigned long size = ftell(pFile);
            fclose(pFile);
            return size;
        }

        int decompress_one_file(char *infilename, char *outfilename) {
            gzFile infile = gzopen(infilename, "rb");
            FILE *outfile = fopen(outfilename, "wb");
            if (!infile || !outfile) return -1;
            char buffer[128];
            int num_read = 0;
            while ((num_read = gzread(infile, buffer, sizeof(buffer))) > 0) {
                fwrite(buffer, 1, num_read, outfile);
            }
            gzclose(infile);
            fclose(outfile);
        }

        int compress_one_file(char *infilename, char *outfilename) {
            FILE *infile = fopen(infilename, "rb");
            gzFile outfile = gzopen(outfilename, "wb");
            if (!infile || !outfile) return -1;
            char inbuffer[128];
            int num_read = 0;
            unsigned long total_read = 0, total_wrote = 0;
            while ((num_read = fread(inbuffer, 1, sizeof(inbuffer), infile)) > 0) {
                total_read += num_read;
                gzwrite(outfile, inbuffer, num_read);
            }
            fclose(infile);
            gzclose(outfile);
            printf("Read %ld bytes, Wrote %ld bytes, Compression factor %4.2f%%\n",
                   total_read, file_size(outfilename),
                   (1.0 - file_size(outfilename)*1.0/total_read)*100.0);
        }

        int main(int argc, char **argv) {
            compress_one_file(argv[1], argv[2]);
            decompress_one_file(argv[2], argv[3]);
        }

    Output:

        1>------ Build started: Project: zlibApp, Configuration: Debug Win32 ------
        1>  zlibApp.cpp
        1>d:\learning\cpp\cppvs2010\zlibapp\zlibapp\zlibapp.cpp(15): warning C4996: 'fopen': This function or variable may be unsafe. Consider using fopen_s instead. To disable deprecation, use _CRT_SECURE_NO_WARNINGS. See online help for details.
        1>          c:\program files\microsoft visual studio 10.0\vc\include\stdio.h(234) : see declaration of 'fopen'
        1>d:\learning\cpp\cppvs2010\zlibapp\zlibapp\zlibapp.cpp(25): warning C4996: 'fopen': This function or variable may be unsafe. Consider using fopen_s instead. To disable deprecation, use _CRT_SECURE_NO_WARNINGS. See online help for details.
        1>          c:\program files\microsoft visual studio 10.0\vc\include\stdio.h(234) : see declaration of 'fopen'
        1>d:\learning\cpp\cppvs2010\zlibapp\zlibapp\zlibapp.cpp(40): warning C4996: 'fopen': This function or variable may be unsafe. Consider using fopen_s instead. To disable deprecation, use _CRT_SECURE_NO_WARNINGS. See online help for details.
        1>          c:\program files\microsoft visual studio 10.0\vc\include\stdio.h(234) : see declaration of 'fopen'
        1>d:\learning\cpp\cppvs2010\zlibapp\zlibapp\zlibapp.cpp(36): warning C4715: 'decompress_one_file' : not all control paths return a value
        1>d:\learning\cpp\cppvs2010\zlibapp\zlibapp\zlibapp.cpp(57): warning C4715: 'compress_one_file' : not all control paths return a value
        1>zlibApp.obj : error LNK2019: unresolved external symbol _gzclose referenced in function "int __cdecl decompress_one_file(char *,char *)" (?decompress_one_file@@YAHPAD0@Z)
        1>zlibApp.obj : error LNK2019: unresolved external symbol _gzread referenced in function "int __cdecl decompress_one_file(char *,char *)" (?decompress_one_file@@YAHPAD0@Z)
        1>zlibApp.obj : error LNK2019: unresolved external symbol _gzopen referenced in function "int __cdecl decompress_one_file(char *,char *)" (?decompress_one_file@@YAHPAD0@Z)
        1>zlibApp.obj : error LNK2019: unresolved external symbol _gzwrite referenced in function "int __cdecl compress_one_file(char *,char *)" (?compress_one_file@@YAHPAD0@Z)
        1>D:\learning\cpp\cppVS2010\zlibApp\Debug\zlibApp.exe : fatal error LNK1120: 4 unresolved externals
        ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
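
    The unresolved _gzopen / _gzread / _gzwrite / _gzclose symbols mean the compiler found zlib.h, but the linker was never given zlib's library to resolve those functions against. A hedged sketch of the usual fix follows; the exact .lib name and path depend on how zlib was built or installed on your machine:

        /* Option 1: add the zlib .lib under Project Properties > Linker >
           Input > Additional Dependencies (and its folder under Linker >
           General > Additional Library Directories).
           Option 2: reference the library directly from source: */
        #pragma comment(lib, "zdll.lib")  /* import library for zlib1.dll;
                                             a static build is often named "zlib.lib" */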

    Read the article

  • IIS7 VPS Hosting Server Configuration

    - by Craig
    I have just registered a Windows 2008 VPS hosting account on which I plan to run a couple of web sites. In the past I have just used shared hosting, where everything is set up for me, so I need a few pointers. When I register a new domain it asks for a couple of name servers. How do I set this up on my server? Can I just give it the IP address of the VPS server? Do I have to register some web sites with ns1, ns2 host headers in IIS? It's all a bit confusing when you've never done it before. I have two web sites I plan on hosting. I configured both in IIS with a different IP address (the VPS plan has 2), but when trying to access the sites via IP it always displays the default web site. If I turn off the default web site it just 404s. Are there any simple tutorials for setting up a couple of sites from scratch?

    Read the article

  • Watch Flash video but block out part of it

    - by Ken
    I'm learning a foreign language. There are several online sites that have Flash video in the language I'm learning (e.g., Hulu), which is great. Unfortunately, they have English subtitles, which is (depending on who you ask) somewhere between "annoying" and "actively harmful to my learning". When I'm watching them in a window, I can just move them near the bottom of the screen, or put another window over them. That's awkward, but it kind of works. But I'd like to watch them full-screen, on my TV set. Full-screen Flash doesn't let me put any window on top of it. Is there a way to watch an online Flash video but block out part of it? I'll probably be using my Mac for this, but maybe Linux -- solutions for either one welcome.

    Read the article

  • Could a computer act (dependably) as a wireless router for 200+ clients? [closed]

    - by awkwardusername
    That is, I have a Core 2 Duo E7500 at 2.93GHz, with 2GB memory. I plan to install either Windows Server 2012 or Zeroshell 2.0RC1, and I am also planning to include two PCIe wireless card adapters. It also has one ethernet port, which I will connect to another machine acting as a database and web server. My plan is to have a corporate-level wireless intranet with 200+ clients. I cannot afford to buy routers because I want to operate at as close to zero cost as possible, utilizing my available resources. Is that plan plausible? Also, what minimum specs should my wireless card have? @SvenW: Oh, I meant corporate on the deployment level. I am still an undergraduate and this is more of an educational and experimental work than an actual project. I got Windows Server 2012 for free though, and this isn't actually for commercial use.

    Read the article

  • iPhone 4S Post Paid Rental Plans From Airtel & Aircel [India]

    - by Gopinath
    Apple iPhone 4S is available from the Airtel and Aircel cellular operators with mind-blowing price tags close to Rs. 50,000/-. If you are a fan boy and ready to buy an iPhone 4S, here are the details of the monthly tariffs offered by Airtel & Aircel. Airtel iPhone 4S Post Paid Plans: Airtel has a range of post paid plans for iPhone 4S lovers. Irrespective of the model of iPhone 4S you are planning to buy, they offer post paid plans starting from Rs. 300 per month (after a 50% discount on the original rental of Rs. 600) with 200 MB of free 3G data, up to Rs. 1000 with 3072 MB of free 3G data. The following table runs down the complete details of the various plans on offer. For pre-paid iPhone 4S tariffs please check the iPhone 4S Airtel website. Aircel iPhone 4S Post Paid Plans: Aircel has a unique plan for its iPhone 4S customers depending on the model they are willing to buy. For some reason the post paid plans are closely tied to the model of the phone, and I believe this is not the right thing for its customers. The plan for the 16 GB model costs Rs. 900 a month, while for the 32 GB model the monthly plan costs Rs. 1150. Like Airtel's, these monthly rentals are after a 50% discount.

    Read the article

  • printable PHP manual - 'all but the Function Reference section'

    - by JW01
    My Motivation: I find it easier to learn things by reading 'offline'. I'd like to lean back and read the narrative part of a paper version of the official PHP manual.

    My Scuppered Plan: My plan was to download the manual, print all but the Function Reference section, and then read it. I have downloaded the "Single HTML file" version of the manual from the php.net download page. (That version did not contain any images, so I patched in the ones from the "Many HTML files" version with no problem.) My plan was to open that "Single HTML file" in an HTML editor, delete the Function Reference section, then print it out. Unfortunately, although I have tried three different editors, I have not been able to load up that massive HTML file to edit it. It's about 40MB. I started to look into the phpdoc framework with a view to rendering my own HTML docs from the source... but that's a steep learning curve for a newbie... and is a last resort. I would use a file splitter, but they tend to split files crudely with no regard for HTML/XML/XHTML semantics.

    So the question is... Does anyone know where you can download the PHP manual in a version that is a kind of half-way house between the 'Single HTML file' and the 'Many HTML files'? Ideally with the docs split into 3 parts: File 1 – stuff before the Function Reference; File 2 – the Function Reference; File 3 – stuff after the Function Reference. Or can you suggest any editors/tools that would enable me to split up this file myself?
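
    If no pre-split download turns up, a streaming splitter avoids loading the 40MB file into an editor at all. A rough sketch in C under stated assumptions: the two marker strings are placeholders, and you would first peek at the file and substitute text that appears only where the Function Reference section begins and where the following section begins.

        #include <stdio.h>
        #include <string.h>

        /* Split the single-file PHP manual into three parts, line by line.
           The marker strings below are placeholders (assumptions), not the
           manual's real headings. */
        int main(void) {
            const char *markers[]  = { "Function Reference", "FAQ" };  /* placeholders */
            const char *outnames[] = { "part1-before.html",
                                       "part2-function-reference.html",
                                       "part3-after.html" };
            FILE *in = fopen("php_manual_en.html", "r");
            if (!in) { perror("php_manual_en.html"); return 1; }

            int part = 0;
            FILE *out = fopen(outnames[part], "w");
            if (!out) { perror(outnames[part]); return 1; }

            static char line[1 << 16];  /* generated HTML can have very long lines */
            while (fgets(line, sizeof line, in)) {
                /* When a marker is seen, start writing to the next part. */
                if (part < 2 && strstr(line, markers[part])) {
                    fclose(out);
                    out = fopen(outnames[++part], "w");
                    if (!out) { perror(outnames[part]); return 1; }
                }
                fputs(line, out);
            }
            fclose(out);
            fclose(in);
            return 0;
        }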

    Read the article

  • What needs to be done to port a Windows Flash game to Android?

    - by Russell Yan
    My team has developed a game using Flash (with Lua embedded) that targets Windows, and we want to port that game to Android (and iOS in the future). It's acceptable to rewrite the Flash part of the client, and another team in our company recommended Unity3D to replace the Flash part. Since we have no experience in Android/iOS development, we would need to learn some new tool/language anyway, so we would like that learning to still be useful after the porting and when we start a new game. Any thoughts? Edit: I think it is worth noticing the background of the game: the project was started by a tester and a developer, both without knowing much about Flex and ActionScript. They built the game while learning, so most of the code is hard to maintain. I and two other developers joined the project after a year or so, when one of them had left (and the other became our manager). The other two developers are recent graduates with little experience and little knowledge of Flex. I am good at the server part and the C# language. Given that the code is hard to maintain (and we need to change a lot of code to make the game easier to play on a mobile device), and that I am good at C# (and learning), I still tend to do the porting with Unity, which could get better performance and possibly save time.

    Read the article

  • Why are two indicator-network versions being worked on?

    - by Daniel Rodrigues
    Some months ago, on the road to Ubuntu Maverick, a new system indicator, network (with connman as a backend), started to be developed. The plan was to get it into UNE and release it with no notification area. Unfortunately it didn't make it into the final version. However, continued efforts are still being made to improve it, and I'm getting regular updates. From a blueprint from the last UDS, I read that the plan was to ship no notification area and only indicators. For that, it was decided that nm-applet (backend: NetworkManager) should be ported to the appindicator library. Today I discovered that those efforts are going on and an initial version is available for testing from Matt Trudel's PPA (Natty only). So, my question, to whoever has the necessary info, is: wouldn't it be easier to join efforts and concentrate the work on just one version (probably the NetworkManager backend, as that's the official plan), instead of splitting those efforts apart and hampering both testing and development? Both indicators are being developed by Canonical engineers, and that really doesn't make much sense. So, any Canonical engineer willing to clarify this?

    Read the article

  • Spooling in SQL execution plans

    - by Rob Farley
    Sewing has never been my thing. I barely even know the terminology, and when discussing this with American friends, I even found out that half the words that Americans use are different to the words that English and Australian people use. That said – let's talk about spools! In particular, the Spool operators that you find in some SQL execution plans. This post is for T-SQL Tuesday, hosted this month by me! I've chosen to write about spools because they seem to get a bad rap (even in my song I used the line "There's spooling from a CTE, they've got recursion needlessly"). I figured it was worth covering some of what spools are about, and hopefully explain why they are remarkably necessary, and generally very useful.

    If you have a look at the Books Online page about Plan Operators, at http://msdn.microsoft.com/en-us/library/ms191158.aspx, and do a search for the word 'spool', you'll notice it says there are 46 matches. 46! Yeah, that's what I thought too... Spooling is mentioned in several operators: Eager Spool, Lazy Spool, Index Spool (sometimes called a Nonclustered Index Spool), Row Count Spool, Spool, Table Spool, and Window Spool (oh, and Cache, which is a special kind of spool for a single row, but as it isn't used in SQL 2012, I won't describe it any further here). Spool, Table Spool, Index Spool, Window Spool and Row Count Spool are all physical operators, whereas Eager Spool and Lazy Spool are logical operators, describing the way that the other spools work. For example, you might see a Table Spool which is either Eager or Lazy. A Window Spool can actually act as both, as I'll mention in a moment.

    In sewing, cotton is put onto a spool to make it more useful. You might buy it in bulk on a cone, but if you're going to be using a sewing machine, then you quite probably want to have it on a spool or bobbin, which allows it to be used in a more effective way. This is the picture that I want you to think about in relation to your data. I'm sure you use spools every time you use your sewing machine. I know I do. I can't think of a time when I've got out my sewing machine to do some sewing and haven't used a spool. However, I often run SQL queries that don't use spools. You see, the data that is consumed by my query is typically in a useful state without a spool. It's like I can just sew with my cotton despite it not being on a spool! Many of my favourite features in T-SQL do like to use spools though. This looks like a very similar query to before, but includes an OVER clause to return a column telling me the number of rows in my data set. I'll describe what's going on in a few paragraphs' time.

    So what does a Spool operator actually do? The spool operator consumes a set of data, and stores it in a temporary structure, in the tempdb database. This structure is typically either a Table (ie, a heap), or an Index (ie, a b-tree). If no data is actually needed from it, then it could also be a Row Count spool, which only stores the number of rows that the spool operator consumes. A Window Spool is another option if the data being consumed is tightly linked to windows of data, such as when the ROWS/RANGE clause of the OVER clause is being used. You could maybe think about the type of spool being like whether the cotton is going onto a small bobbin to fit in the base of the sewing machine, or whether it's a larger spool for the top.

    A Table or Index Spool is either Eager or Lazy in nature.
    Eager and Lazy are logical operators, which talk more about the behaviour than the physical operation. If I'm sewing, I can either be all enthusiastic and get all my cotton onto the spool before I start, or I can do it as I need it. "Lazy" might not be the best word to describe a person – in the SQL world it describes the idea of either fetching all the rows to build up the whole spool when the operator is called (Eager), or populating the spool only as it's needed (Lazy). Window Spools are both physical and logical. They're eager on a per-window basis, but lazy between windows.

    And when is it needed? The way I see it, spools are needed for two reasons: 1 – when data is going to be needed AGAIN, and 2 – when data needs to be kept away from the original source.

    If you're someone who writes long stored procedures, you are probably quite aware of the second scenario. I see plenty of stored procedures being written this way – where the query writer populates a temporary table, so that they can make updates to it without risking the original table. SQL does this too. Imagine I'm updating my contact list, and some of my changes move data to later in the book. If I'm not careful, I might update the same row a second time (or even enter an infinite loop, updating it over and over). A spool can make sure that I don't, by using a copy of the data. This problem is known as the Halloween Effect (not because it's spooky, but because it was discovered in late October one year). As I'm sure you can imagine, the kind of spool you'd need to protect against the Halloween Effect would be eager, because if you're only handling one row at a time, then you're not providing the protection... An eager spool will block the flow of data, waiting until it has fetched all the data before serving it up to the operator that called it. In the query below I'm forcing the Query Optimizer to use an index which would be upset if the Name column values got changed, and we see that before any data is fetched, a spool is created to load the data into. This doesn't stop the index being maintained, but it does mean that the index is protected from the changes that are being done.

    There are plenty of times, though, when you need data repeatedly. Consider the query I put above. A simple join, but then counting the number of rows that came through. The way that this has executed (be it ideal or not) is to ask that a Table Spool be populated. That's the Table Spool operator on the top row. That spool can produce the same set of rows repeatedly. This is the behaviour that we see in the bottom half of the plan, where a join is being done between the rows that are being sourced from the spool – one side being aggregated and one not – producing the columns that we need for the query.

    Table v Index: When considering whether to use a Table Spool or an Index Spool, the question that the Query Optimizer needs to answer is whether there is sufficient benefit to storing the data in a b-tree. The idea of having data in indexes is great, but of course there is a cost to maintaining them. Here we're creating a temporary structure for data, and there is a cost associated with populating each row into its correct position according to a b-tree, as opposed to simply adding it to the end of the list of rows in a heap. Using a b-tree could even result in page splits as the b-tree is populated, so there had better be a reason to use that kind of structure.
    That all depends on how the data is going to be used in other parts of the plan. If you've ever thought that you could use a temporary index for a particular query, well this is it – and the Query Optimizer can do that if it thinks it's worthwhile. It's worth noting that just because a Spool is populated using an Index Spool, it can still be fetched using a Table Spool. Whether a Spool used as a source shows as a Table Spool or an Index Spool is more about whether a Seek predicate is used than about the underlying structure.

    Recursive CTEs: I've already shown you an example of spooling when the OVER clause is used. You might see spools being used whenever you have data that is needed multiple times, and CTEs are quite common here. With the definition of a set of data described in a CTE, if the query writer is leveraging this by referring to the CTE multiple times, and there's no simplification to be leveraged, a spool could theoretically be used to avoid reapplying the CTE's logic. Annoyingly, this doesn't happen. Consider this query, which really looks like it's using the same data twice. I'm creating a set of data (which is completely deterministic, by the way), and then joining it back to itself. There seems to be no reason why it shouldn't use a spool for the set described by the CTE, but it doesn't. On the other hand, if we don't pull as many columns back, we might see a very different plan. You see, CTEs, like all sub-queries, are simplified out to figure out the best way of executing the whole query. My example is somewhat contrived, and although there are plenty of cases when it's nice to give the Query Optimizer hints about how to execute queries, it usually doesn't do a bad job, even without spooling (and you can always use a temporary table).

    When recursion is used, though, spooling should be expected. Consider what we're asking for in a recursive CTE. We're telling the system to construct a set of data using an initial query, and then use that set as a source for another query, piping the results back into the same set and around again. It's very much a spool. The analogy of cotton is long gone here, as the idea of having a continual loop of cotton feeding onto a spool and off again doesn't quite fit, but that's what we have here. Data is being fed onto the spool, and getting pulled out a second time when the spool is used as a source. (This query is running on AdventureWorks, which has a ManagerID column in HumanResources.Employee, unlike AdventureWorks2012.) The Index Spool operator is sucking rows into it – lazily. It has to be lazy, because at the start, there's only one row to be had. However, as rows get populated onto the spool, the Table Spool operator on the right can return rows when asked, ending up with more rows (potentially) getting back onto the spool, ready for the next round. (The Assert operator is merely checking to see if we've reached the MAXRECURSION point – it vanishes if you use OPTION (MAXRECURSION 0), which you can try yourself if you like.)

    Spools are useful. Don't lose sight of that. Every time you use temporary tables or table variables in a stored procedure, you're essentially doing the same – don't get upset at the Query Optimizer for doing so, even if you think the spool looks like an expensive part of the query. I hope you're enjoying this T-SQL Tuesday. Why not head over to my post that is hosting it this month to read about some other plan operators?
    At some point I'll write a summary post – once I have, you should find a comment below pointing at it. @rob_farley
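
    For anyone wanting to reproduce the recursive-CTE spooling described above, a minimal sketch, assuming the older AdventureWorks with its ManagerID column in HumanResources.Employee (as the post notes):

        WITH Reports AS
        (
            -- Anchor: seeds the spool with the top of the hierarchy
            SELECT EmployeeID, ManagerID, 0 AS Lvl
            FROM HumanResources.Employee
            WHERE ManagerID IS NULL
            UNION ALL
            -- Recursive part: the Table Spool feeds rows back in as a source
            SELECT e.EmployeeID, e.ManagerID, r.Lvl + 1
            FROM HumanResources.Employee AS e
            JOIN Reports AS r ON e.ManagerID = r.EmployeeID
        )
        SELECT EmployeeID, Lvl
        FROM Reports
        OPTION (MAXRECURSION 0);  -- removes the Assert operator, as described above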

    Read the article

  • Closed-loop Recommendation Engines: Analyst Insight report on Oracle Real-Time Decisions (RTD)

    - by Mike.Hallett(at)Oracle-BI&EPM
    In November 2011, Helena Schwenk of MWD Advisors published her analysis of Oracle Real-Time Decisions. She summarizes as follows: "In contrast to other popular approaches to implementing predictive analytics, RTD focuses on learning from each interaction and using these insights to adjust what is presented, offered or displayed to a customer. Likewise its capabilities for optimising decisions within the context of specific business goals and a report-driven framework for assessing the performance of models and decisions make it a strong contender for organisations that want to continuously improve decision making as part of a customer experience marketing, e-commerce optimisation and operational process efficiency initiative." This is an outstanding report to share with a prospect or client, as it goes into great detail about the product and its capabilities. It also highlights the differences between Oracle's Real-Time Decisions product and other closed-loop recommendation engines. I encourage you to share this report with your clients and prospects. It can be downloaded directly from here – MWD Advisors Vendor Profile: Oracle Real-Time Decisions (expires in November 2012). Highlights: "At the core of RTD lies a learning engine that combines business rules and adaptive predictive models to deliver recommendations to operational systems while simultaneously learning from experiences." "While closed-loop recommendation engines are becoming more prevalent... there are a number of features that distinguish RTD: It makes its decisions in the context of the business objectives, such as maximising customer revenue or reducing service costs. Its support for operational integration offers organisations some flexibility in how they implement the offering."

    Read the article

  • Table Variables: an empirical approach.

    - by Phil Factor
    It isn't entirely a pleasant experience to publish an article only to have it described on Twitter as 'horrible', and to have it criticized on the MVP forum. When this happened to me in the aftermath of publishing my article on temporary tables recently, I was taken aback, because these critics were experts whose views I respect. What was my crime? It was, I think, to suggest that, despite the obvious quirks, it was best to use Table Variables as a first choice, and to use local Temporary Tables if you hit problems due to these quirks, or if you were doing complex joins using a large number of rows.

    What are these quirks? Well, table variables have advantages if they are used sensibly, but this requires some awareness by the developer of the potential hazards and how to avoid them. You can be hit by a badly-performing join involving a table variable. Table Variables are a compromise, and this compromise doesn't always work out well. Explicit indexes aren't allowed on Table Variables, so one cannot use covering indexes or non-unique indexes. The query optimizer has to make assumptions about the data rather than using column distribution statistics when a table variable is involved in a join, because there aren't any column-based distribution statistics on a table variable. It assumes a reasonably even distribution of data, and is likely to have little idea of the number of rows in the table variables that are involved in queries. However complex the heuristics used in determining the best way of executing a SQL query might be, and they most certainly are complex, the Query Optimizer is likely to fail occasionally with table variables, under certain circumstances, and produce a Query Execution Plan that is frightful. The experienced developer or DBA will be on the lookout for this sort of problem. In this blog, I'll be expanding on some of the tests I used when writing my article to illustrate the quirks, and include a subsequent example supplied by Kevin Boles.

    A simplified example. We'll start out by illustrating a simple example that shows some of these characteristics. We'll create two tables filled with random numbers and then see how many matches we get between the two tables. We'll forget indexes altogether for this example, and use heaps. We'll try the same join with two table variables, with two table variables plus OPTION (RECOMPILE) on the JOIN, and with two temporary tables. It is all a bit jerky because the timing granularity isn't actually at the millisecond level (I used DATETIME). However, you'll see that the table variable is outperforming the local temporary table up to 10,000 rows. Actually, even without the OPTION (RECOMPILE) hint, it is doing well.

    What happens when your table size increases? The table variable is, from around 30,000 rows, locked into a very bad execution plan unless you use OPTION (RECOMPILE) to provide the Query Analyser with a decent estimate of the size of the table. However, if it has OPTION (RECOMPILE), then it is smokin'. Well, up to 120,000 rows, at least. It is performing better than a temporary table, and in a good linear fashion.

    What about mixed table joins, where you are joining a temporary table to a table variable? You'd probably expect the query analyzer to throw up its hands and produce a bad execution plan, as if it were a table variable. After all, it knows nothing about the statistics in one of the tables, so how could it do any better?
    Well, it behaves as if it were doing a recompile, and an explicit recompile adds no value at all. (We just go up to 45,000 rows since we know the bigger picture now.)

    Now, if you were new to this, you might be tempted to start drawing conclusions. Beware! We're dealing with a very complex beast: the Query Optimizer. It can come up with surprises. What if we change the query very slightly to insert the results into a Table Variable? We change nothing else and just measure the execution time of the statement as before. Suddenly, the table variable isn't looking so much better, even taking into account the time involved in doing the table insert. OK, if you haven't used OPTION (RECOMPILE) then you're toast. Otherwise, there isn't much in it between the table variable and the temporary table. The table variable is faster up to 8,000 rows, and then there is not much in it up to 100,000 rows. Past the 8,000-row mark, we've lost the advantage of the table variable's speed. Any general rule you may be formulating has just gone for a walk. What we can conclude from this experiment is that if you join two table variables, and can't use constraints, you're going to need that OPTION (RECOMPILE) hint.

    Count Dracula and the Horror Join. These tables of integers provide a rather unreal example, so let's try a rather different example, and get stuck into some implicit indexing, by using constraints. What unusual words are contained in the book 'Dracula' by Bram Stoker? Here we get a table of all the common words in the English language (60,387 of them) and put them in a table. We put them in a Table Variable with the word as a primary key, a Table Variable heap, and a Table Variable with a primary key. We then take all the distinct words used in the book 'Dracula' (7,558 of them). We then create a table variable and insert into it all those uncommon words that are in 'Dracula', i.e. all the words in Dracula that aren't matched in the list of common words. To do this we use a left outer join, where the right-hand value is null.

    The results show a huge variation, between the sublime and the gorblimey. If both tables contain a primary key on the columns we join on, and both are Table Variables, it took 33 ms. If one table contains a primary key, the other is a heap, and both are Table Variables, it took 46 ms. If both Table Variables use a unique constraint, then the query takes 36 ms. If neither table contains a primary key and both are Table Variables, it took 116,383 ms. Yes, nearly two minutes!! If both tables contain a primary key, one is a Table Variable and the other is a temporary table, it took 113 ms. If one table contains a primary key, and both are temporary tables, it took 56 ms. If both tables are temporary tables and both have primary keys, it took 46 ms.

    Here we see table variables which are joined on their primary key again enjoying a slight performance advantage over temporary tables. Where both tables are table variables and both are heaps, the query suddenly takes nearly two minutes! So what if you have two heaps and you use OPTION (RECOMPILE)? If you take the rogue query and add the hint, then suddenly the query drops its time down to 76 ms. If you add unique indexes, then you've done even better, down to half that time. Here are the text execution plans. So where have we got to? Without drilling down into the minutiae of the execution plans, we can begin to create a hypothesis.
    If you are using table variables, and your tables are relatively small, they are faster than temporary tables, but as the number of rows increases you need to do one of two things: either have a primary key on the column you are joining on, or use OPTION (RECOMPILE). If you try to execute a query that is a join, and both tables are table variable heaps, you are asking for trouble – well, slow queries – unless you give the hint once the number of rows has risen past a point (30,000 in our first example, but this varies considerably according to context).

    Kevin's Skew. In describing the table size, I used the term 'relatively small'. Kevin Boles produced an interesting case where a single-row table variable produces a very poor execution plan when joined to a very, very skewed table. In the original, pasted into my article as a comment, a table consisted of 100,000 rows in which the key column held one number (1). To this were added eight rows with sequential numbers up to 9. When this was joined to a single-row Table Variable with a key of 2, it produced a bad plan. This problem is unlikely to occur in real usage, and the Query Optimiser team probably never set up a test for it. Actually, the skew can be slightly less extreme than Kevin made it. The following test showed that once the table had 54 sequential rows in it, it adopted exactly the same execution plan as for the temporary table, and then all was well. Undeniably, real data does occasionally cause problems to the performance of joins in Table Variables due to the extreme skew of the distribution. We've all experienced Perfectly Poisonous Table Variables in real live data. As in Kevin's example, indexes merely make matters worse, and the OPTION (RECOMPILE) trick does nothing to help. In this case, there is no option but to use a temporary table. However, one has to note that once the slight de-skew had taken place, the plans were identical across a huge range.

    Conclusions. Where you need to hold intermediate results as part of a process, Table Variables offer a good alternative to temporary tables when used wisely. They can perform faster than a temporary table when the number of rows is not great. For some processing with huge tables, they can perform well when only a clustered index is required, and when the nature of the processing makes an index seek very effective. Table Variables are scoped to the batch or procedure and are unlikely to hang about in TempDB when they are no longer required. They require no explicit cleanup. Where the number of rows in the table is moderate, you can even use them in joins as 'heaps', unindexed. Beware, however, since, as the number of rows increases, joins on Table Variable heaps can easily become saddled with very poor execution plans, and this must be cured either by adding constraints (UNIQUE or PRIMARY KEY) or by adding the OPTION (RECOMPILE) hint if this is impossible. Occasionally, the way that the data is distributed prevents the efficient use of Table Variables, and this will require using a temporary table instead. Table Variables require some awareness by the developer of the potential hazards and how to avoid them. If you are not prepared to do any performance monitoring of your code or fine-tuning, and just want to pummel out stuff that 'just runs' without considering namby-pamby stuff such as indexes, then stick to Temporary tables.
If you are likely to slosh about large numbers of rows in temporary tables without considering the niceties of processing just what is required and no more, then temporary tables provide a safer and less fragile means-to-an-end for you.
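
    To try the effect yourself, a small sketch of the pattern described above (the filler data is invented): joining two table-variable heaps, where OPTION (RECOMPILE) gives the optimizer real row counts on the SQL Server versions discussed here.

        DECLARE @a TABLE (id int);  -- heaps: no PRIMARY KEY, no statistics
        DECLARE @b TABLE (id int);

        -- Filler rows; swap in larger sets to watch the plan change
        INSERT @a (id) SELECT object_id FROM sys.all_objects;
        INSERT @b (id) SELECT object_id FROM sys.all_objects;

        SELECT COUNT(*) AS matches
        FROM @a AS a
        JOIN @b AS b ON b.id = a.id
        OPTION (RECOMPILE);  -- without this, the join is costed from a fixed
                             -- low-row guess instead of the actual row counts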

    Read the article

  • Newbie seeking advice on programming in general

    - by user974685
    need some of you to remember back to a time when you might have been bad at programming... I've been at my new job (as a software developer) for a couple of months now, and have passed the probation period. I have very little programming experience (C++ only) and am currently working with ASP.NET MVC and Silverlight. There's a website the company has been working on, and I am joining the effort to make it better, iron out bugs, etc. The problem is learning about a system/website which has already been made, via Visual Studio. I ALWAYS feel HUGELY overwhelmed, never knowing which part of a line I should look up, and generally having lots of trouble getting the big picture. Visual Studio itself is something I'm finding difficult to get to grips with, let alone the ASP.NET framework. I get the impression that because my coworkers have more experience than me, they are getting all the good jobs, and I am left with crap to do – stuff which is not even vaguely programming. Meaning they are learning/creating more, and I am learning/creating next to nothing. I'm getting demoralised, and am too scared to say anything. I'm not stupid; I've read and practiced plenty of the fundamental programming concepts... I'm just bloody scared of this damn framework. I look at it and just feel paralyzed. The result is that I keep asking the older veteran guy lots of questions, and he is getting irritated and would rather give me easy/mindless/non-programming jobs to avoid wasting time helping me out. Then when I don't understand something, I hesitate over whether or not I should ask him yet, trying to decide if it would be a waste of time. I'm the kind of person who picks things up slowly, but with a lot of attention to detail. The former, I think, is making me look incompetent though. Anyone get where I'm coming from? Please say something helpful... I'm scared of losing my job in a few months or something...

    Read the article

  • Free Team Foundation Service a Boon for Play Projects

    - by Ken Cox [MVP]
    My ‘jump in and sink or swim’ learning style leads me to create dozens of unbillable ‘play’ projects in Visual Studio 2012.  For example, I’m currently learning to customize the powerful auction/fixed price/classifieds software called AuctionWorx Enterprise.  I make stupid mistakes as I grasp a new API and configure projects. It’s frustrating to go down a rat hole and discover the VS IDE’s Undo doesn’t reach back to a working build from the previous weekend. Enter Visual Studio’s Free Team Foundation Service.  Put your play projects into the free source control and check in (or shelve) a snapshot before embarking on something risky. (I already use Discount ASP.NET’s Team Foundation Server hosting for client projects so there’s zero TFS/VS learning curve for me and first class GUI support.) TFS is free right now for teams of five or fewer. I’m not naive enough to expect ‘free’ to last forever. So, it’ll be interesting to see how much Microsoft intends to charge in 2013 for TFS. In the meantime, why not grab some free source code storage? BTW, it’s weird to realize that Microsoft is backing up my junky little playtime code on three physically-distinct servers every day - and taking incremental backups every hour.

    Read the article

  • game multiplayer service development

    - by nomad
    I'm currently working on a multiplayer game. I've looked at a number of multiplayer services (player.io, playphone, gamespy, and others) but nothing really hits the mark: they are missing features, lack platform support, or cost too much. What I'm looking for is a simple poor man's version of Steam or Xbox Live – not the game marketplace side of those two, but the multiplayer services: user accounts, profiles, presence info, friends, game stats, invites, on/offline messaging. Basically I'm looking for a unified multiplayer platform for all my games across devices. Since I can't find what I need, I'm planning to roll my own piece by piece. I plan to save on server resources by making most of the communication p2p. Things like game data and voice chat can be handled between peers, while the server keeps track of user presence and only sends updates when needed or requested. I know this runs the risk of cheating, but that isn't a concern right now. I plan to run this on an Amazon EC2 micro server for development, then move to a small or large instance when finished. I figure user accounts would be the simplest thing to start with. Users can create accounts online or using an in-game dialog, log in/out, and change profile info. The user can access this info online or in game. I will need user authentication and secure communication between server and client. I figure all info will be stored in a database, but I don't know how it can be stored securely and accessed from both the web server and the game services. I would appreciate any links to tutorials, info or advice anyone could provide to get me started. Any programming language is fine, but I plan to use C# on the server and C/C++ on devices. I would like to get started right away, but I'm in no hurry to get it finished just yet. If you know of a service that already fits my requirements, please let me know.

    Read the article

  • SQL SERVER – Puzzle to Win Print Book and Free 30 Days Online Training Material

    - by pinaldave
    Yesterday I asked a simple question, SQL SERVER – Puzzle to Win Print Book – Write T-SQL Self Join Without Using LEAD and LAG, keeping two simple intentions in mind: we can all learn about a new feature of SQL Server 2012, and we can learn that new feature while practicing on an earlier version of SQL Server. While I was creating the question, due to a copy-paste error the question was not created correctly. In simple words – I made a mistake. This created some confusion and I feel bad about it. Here is what we will do: please read the question again and attempt to answer the question which I have asked in the blog post. Yesterday the giveaway was my SQL Server Interview Questions and Answers book. As the question was corrected after a while, the giveaways have now got sweeter: the SQL Server Interview Questions and Answers book (2 copies), and 30 days of online training material from Pluralsight. They have excellent learning resources – I have written about my 6-hour learning experience in Learning SSAS (SQL Server Analysis Services) Online in 6 Hours. Reference: Pinal Dave (http://blog.SQLAuthority.com)
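
    For the flavour of the thing, one common shape of answer (a sketch with an invented table, not the official solution): number the rows, then self join each row to the previous one.

        ;WITH Numbered AS
        (
            SELECT ID, Price,
                   ROW_NUMBER() OVER (ORDER BY ID) AS rn
            FROM dbo.StockPrices  -- invented table
        )
        SELECT cur.ID,
               cur.Price,
               prev.Price AS PrevPrice  -- what LAG(Price) returns natively in SQL Server 2012
        FROM Numbered AS cur
        LEFT JOIN Numbered AS prev
          ON prev.rn = cur.rn - 1;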

    Read the article

< Previous Page | 76 77 78 79 80 81 82 83 84 85 86 87  | Next Page >