Search Results

Search found 271 results on 11 pages for 'tabular'.

Page 6/11 | < Previous Page | 2 3 4 5 6 7 8 9 10 11  | Next Page >

  • Converting #MDX to #DAX and PowerPivot Workshop online #ppws

    - by Marco Russo (SQLBI)
    I just published the article Converting MDX to DAX – First Steps on the renewed SQLBI web site, about converting MDX to DAX. The reason is that with BISM Tabular in Analysis Services 2012 you will be able to write queries in both DAX and MDX. If you already know MDX, you might wonder how to “translate” your MDX knowledge into DAX. I think this is another way to improve your knowledge of DAX: it is built on different concepts, and this comparison should help you grasp them. This is...(read more)
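
    For a flavour of the difference, here is a hedged sketch of the same question asked in both languages; the cube, table and column names are invented for illustration:

        -- MDX: sales amount by product category
        SELECT [Measures].[Sales Amount] ON COLUMNS,
               [Product].[Category].MEMBERS ON ROWS
        FROM [Adventure Works]

        -- DAX: a query returns a table rather than a cellset
        EVALUATE
        SUMMARIZE(
            'Product',
            'Product'[Category],
            "Sales Amount", SUM( 'Sales'[Amount] )
        )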

    Read the article

  • MySQL Multi-Aggregated Rows in Crosstab Queries

    MySQL's crosstabs contain aggregate functions on two or more fields, presented in a tabular format. In a multi-aggregate crosstab query, two different functions can be applied to the same field or the same function can be applied to multiple fields on the same (row or column) axis. Rob Gravelle shows you how to apply two different functions to the same field in order to create grouping levels in the row axis.
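
    As a hedged sketch of the general technique (the sales table and its columns are invented, not the article's own example), CASE expressions pivot rows into columns, and two different aggregate functions can then be applied to the same field:

        -- Pivot quarters into columns, then aggregate the same field twice
        SELECT region,
               SUM(CASE WHEN quarter = 'Q1' THEN amount ELSE 0 END) AS q1_total,
               SUM(CASE WHEN quarter = 'Q2' THEN amount ELSE 0 END) AS q2_total,
               SUM(amount) AS year_total,  -- first aggregate on amount
               MAX(amount) AS best_sale    -- second aggregate on the same field
        FROM sales
        GROUP BY region;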

    Read the article

  • New Whitepaper from SQLBI: Vertipaq vs ColumnStore

    - by AlbertoFerrari
    At the end of June 2012, I was in Amsterdam to present some sessions at TechEd Europe 2012 and, while preparing the material for the demos (yes, the best demos are the ones I prepare at the last minute), I decided to make a comparison between the two xVelocity implementations in SQL Server 2012: the VertiPaq engine in SSAS Tabular, and the ColumnStore index in SQL Server. After some trials, I decided that ColumnStore was a clear loser, because I was not able to see a real improvement...(read more)
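
    For context, the SQL Server half of that comparison is the new columnstore index; a minimal sketch, on an invented fact table, of how one is created in SQL Server 2012:

        -- Nonclustered columnstore index (SQL Server 2012 syntax);
        -- dbo.FactSales and its columns are invented for illustration
        CREATE NONCLUSTERED COLUMNSTORE INDEX ix_FactSales_columnstore
        ON dbo.FactSales (DateKey, ProductKey, SalesAmount);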

    Read the article

  • Report Builder 3.0: Formatting the Elements in your Report

    There is a lot that can be done to make basic tabular reports more readable, using Microsoft's free Report Builder. Rob Sheldon continues his exploration of the power of this tool by showing how to format various elements within reports.

    Read the article

  • DAX Statistical Functions

    Following on from his first four articles on using Data Analysis Expressions (DAX) with tabular databases, Robert Sheldon dives into some of the DAX statistical functions available, demonstrating which are the most useful, with examples of how they work.
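
    As a taste of what such functions look like in a query, here is a hedged sketch against an invented Sales table, using STDEV.P, one of the statistical functions shipped with SQL Server 2012:

        -- Average and population standard deviation of Amount per category
        EVALUATE
        SUMMARIZE(
            'Sales',
            'Sales'[ProductCategory],
            "Avg Amount", AVERAGE( 'Sales'[Amount] ),
            "StdDev Amount", STDEV.P( 'Sales'[Amount] )
        )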

    Read the article

  • DAX Query Basics

    In this document I will attempt to talk you through writing your first very simple DAX queries. For the purpose of this document I will query the rather familiar Adventure Works Tabular Cube.
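
    The very first DAX query usually just materialises a table. A minimal sketch, assuming the model has a 'Date' table (any table name from the Adventure Works Tabular model would do):

        -- Simplest possible DAX query: return a whole table
        EVALUATE 'Date'

        -- The same, sorted by one of its columns
        EVALUATE 'Date'
        ORDER BY 'Date'[Calendar Year]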

    Read the article

  • Adaptec 6405 RAID controller turned on red LED

    - by nn4l
    I have a server with an Adaptec 6405 RAID controller and 4 disks in a RAID 5 configuration. Staff in the data center called me because they noticed a red LED turned on in one of the drive bays. I then checked the status using 'arcconf getconfig 1' and got the status message 'Logical devices/Failed/Degraded: 2/0/1'. The status of the logical devices was listed as 'Rebuilding'. However, I did not see any suspicious status on the affected physical device: the S.M.A.R.T. setting was 'no', the S.M.A.R.T. warnings were '0', and 'arcconf getsmartstatus 1' also reported no problems with any of the disk drives.

    The 'arcconf getlogs 1 events tabular' command gives lots of output (sorry, I can't paste the log file here as I only have remote console access; I could post a screenshot though). Here are some sample entries:

        eventtype FSA_EM_EXPANDED_EVENT
        grouptype FSA_EXE_SCSI_GROUP
        subtype FSA_EXE_SCSI_SENSE_DATA
        subtypecode 12
        cdb 28 00 17 c4 74 00 00 02 00 00 00 00
        data 70 00 06 00 00 00 00 00 00 00 00 00 02 00 00 00 00 00 00 00 00 00 00 00 00 0

    The 'arcconf getlogs 1 device tabular' command reports mediumErrors 1 for two of the disks. Today I checked the status of the controller again. Everything is back to normal: the controller status is now 'Logical devices/Failed/Degraded: 2/0/0' and the logical devices are all back to 'Optimal'. I was not able to check the LED status; my guess is that the red LED is off again.

    Now I have a lot of questions:

    - What is a possible cause for the medium error, and why is it not reported by the SMART log too?
    - Should I replace the disk drives? They were purchased just a month ago.
    - The rebuilding process took one or two days; is that normal? The disks are 2 TByte each and the storage system is mostly idling.
    - The timestamps in the logs seem to show the moment of log retrieval, not the moment of the incident.

    Please advise, all help is very appreciated.

    Read the article

  • How does QuickBooks handle IIF imports?

    - by dwwilson66
    I've received a 'template' for an IIF file for QuickBooks transactions, and there are like seventy-bazillion fields in there, lots of which I never even use. It's a tab-delimited file with the following lines: field headers for transactions and the respective splits for those transactions, followed by an end-of-transaction marker.

        !TRNS    FIELD1       FIELD2       FIELD3       ...  FIELD48
        !SPL     FIELD1       FIELD2       FIELD3       ...  FIELD48
        !ENDTRNS
        TRNS     FIELD1_DATA  FIELD2_DATA  FIELD3_DATA  ...  FIELD48_DATA
        SPL      FIELD1_DATA  FIELD2_DATA  FIELD3_DATA  ...  FIELD48_DATA
        ENDTRNS
        ...

    What drives data to a particular field? Is it the field header with corresponding data, or the tabular position relative to the head of the line? E.g., let's say all I have to import is the data in FIELD1, FIELD3 and FIELD5. Would I need, by header:

        !TRNS    FIELD1  FIELD3  FIELD5
        !SPL     FIELD1  FIELD3  FIELD5
        !ENDTRNS
        TRNS     FIELD1  FIELD3  FIELD5
        SPL      FIELD1  FIELD3  FIELD5
        ENDTRNS

    or, by tabular position:

        !TRNS    FIELD1       FIELD2        FIELD3       FIELD4        FIELD5
        !SPL     FIELD1       FIELD2        FIELD3       FIELD4        FIELD5
        !ENDTRNS
        TRNS     FIELD1_DATA  FIELD2_BLANK  FIELD3_DATA  FIELD4_BLANK  FIELD5_DATA
        SPL      FIELD1_DATA  FIELD2_BLANK  FIELD3_DATA  FIELD4_BLANK  FIELD5_DATA
        ENDTRNS

    Alternately, if it were a comma-delimited input, would I need:

        DATA1,DATA3,DATA5

    or

        DATA1,,DATA3,,DATA5

    Anyone have any experience with what QuickBooks is doing?

    Read the article

  • The five steps of business intelligence adoption: where are you?

    - by Red Gate Software BI Tools Team
    When I was in Orlando and New York last month, I spoke to a lot of business intelligence users. What they told me suggested a path of BI adoption. The user’s place on the path depends on the size and sophistication of their organisation.

    Step 1: A company with a database of customer transactions will often want to examine particular data, like revenue and unit sales over the last period for each product and territory. To do this, they probably use simple SQL queries or stored procedures to produce data on demand; a sketch of such a query follows after the steps.

    Step 2: The results from step one are saved in an Excel document, so business users can analyse them with filters or pivot tables. Alternatively, SQL Server Reporting Services (SSRS) might be used to generate a report of the SQL query for display on an intranet page.

    Step 3: If these queries are run frequently, or business users want to explore data from multiple sources more freely, it may become necessary to create a new database structured for analysis rather than CRUD (create, retrieve, update, and delete). For example, data from more than one system — plus external information — may be incorporated into a data warehouse. This can become ‘one source of truth’ for the business’s operational activities. The warehouse will probably have a simple ‘star’ schema, with fact tables representing the measures to be analysed (e.g. unit sales, revenue) and dimension tables defining how this data is aggregated (e.g. by time, region or product). Reports can be generated from the warehouse with Excel, SSRS or other tools.

    Step 4: Not too long ago, Microsoft introduced an Excel plug-in, PowerPivot, which allows users to bring larger volumes of data into Excel documents and create links between multiple tables. These BISM Tabular documents can be created by the database owners or other expert Excel users and viewed by anyone with Excel PowerPivot. Sometimes, business users may use PowerPivot to create reports directly from the primary database, bypassing the need for a data warehouse. This can introduce problems when there are misunderstandings of the database structure or no single ‘source of truth’ for key data.

    Step 5: Steps three or four are often enough to satisfy business intelligence needs, especially if users are sophisticated enough to work with the warehouse in Excel or SSRS. However, sometimes the relationships between the data are too complex or the queries which aggregate across periods, regions etc. are too slow. In these cases, it can be necessary to formalise how the data is analysed and pre-build some of the aggregations. To do this, a business intelligence professional will typically use SQL Server Analysis Services (SSAS) to create a multidimensional model — or “cube” — that more simply represents key measures and aggregates them across specified dimensions.

    Step five is where our tool, SSAS Compare, becomes useful, as it helps review and deploy changes from development to production. For us at Red Gate, the primary value of SSAS Compare is to establish a dialog with BI users, so we can develop a portfolio of products that support creation and deployment across a range of report and model types. For example, PowerPivot and the new BISM Tabular model create a potential customer base for tools that extend beyond BI professionals. We’re interested in learning where people are in this story, so we’ve created a six-question survey to find out. Whether you’re at step one or step five, we’d love to know how you use BI so we can decide how to build tools that solve your problems.
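
    A step-one query of the kind described might look like this; a sketch only, with invented table and column names:

        -- Revenue and unit sales over the last month, by product and territory
        SELECT p.ProductName,
               t.TerritoryName,
               SUM(o.Quantity)               AS UnitSales,
               SUM(o.Quantity * o.UnitPrice) AS Revenue
        FROM   Orders o
        JOIN   Products p    ON p.ProductID   = o.ProductID
        JOIN   Territories t ON t.TerritoryID = o.TerritoryID
        WHERE  o.OrderDate >= DATEADD(month, -1, GETDATE())
        GROUP  BY p.ProductName, t.TerritoryName;
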
So if you have sixty seconds to spare, tell us on the survey!

    Read the article

  • iteration in latex

    - by Tim
    Hi, I would like to use some iteration control flow to simplify the following LaTeX code:

        \begin{sidewaystable}
        \caption{A glance of images}
        \centering
        \begin{tabular}{| c ||c| c| c |c| c|| c |c| c|c|c| }
        \hline
        \backslashbox{Theme}{Class} &\multicolumn{5}{|c|}{Class 0} & \multicolumn{5}{|c|}{Class 1} \\
        \hline \hline
        1 & \includegraphics[scale=2]{../../results/1/0_1.eps}
          &\includegraphics[scale=2]{../../results/1/0_2.eps}
          &\includegraphics[scale=2]{../../results/1/0_3.eps}
          &\includegraphics[scale=2]{../../results/1/0_4.eps}
          &\includegraphics[scale=2]{../../results/1/0_5.eps}
          &\includegraphics[scale=2]{../../results/1/1_1.eps}
          &\includegraphics[scale=2]{../../results/1/1_2.eps}
          &\includegraphics[scale=2]{../../results/1/1_3.eps}
          &\includegraphics[scale=2]{../../results/1/1_4.eps}
          &\includegraphics[scale=2]{../../results/1/1_5.eps} \\
        \hline \hline
        2 & \includegraphics[scale=2]{../../results/2/0_1.eps}
          &\includegraphics[scale=2]{../../results/2/0_2.eps}
          &\includegraphics[scale=2]{../../results/2/0_3.eps}
          &\includegraphics[scale=2]{../../results/2/0_4.eps}
          &\includegraphics[scale=2]{../../results/2/0_5.eps}
          &\includegraphics[scale=2]{../../results/2/1_1.eps}
          &\includegraphics[scale=2]{../../results/2/1_2.eps}
          &\includegraphics[scale=2]{../../results/2/1_3.eps}
          &\includegraphics[scale=2]{../../results/2/1_4.eps}
          &\includegraphics[scale=2]{../../results/2/1_5.eps} \\
        \hline
        ... % similarly for 3, 4, ..., 22
        \hline
        23 & \includegraphics[scale=2]{../../results/23/0_1.eps}
          &\includegraphics[scale=2]{../../results/23/0_2.eps}
          &\includegraphics[scale=2]{../../results/23/0_3.eps}
          &\includegraphics[scale=2]{../../results/23/0_4.eps}
          &\includegraphics[scale=2]{../../results/23/0_5.eps}
          &\includegraphics[scale=2]{../../results/23/1_1.eps}
          &\includegraphics[scale=2]{../../results/23/1_2.eps}
          &\includegraphics[scale=2]{../../results/23/1_3.eps}
          &\includegraphics[scale=2]{../../results/23/1_4.eps}
          &\includegraphics[scale=2]{../../results/23/1_5.eps} \\
        \hline
        \end{tabular}
        \end{sidewaystable}

    I've learned that the forloop package provides a for loop, but I am not sure how to apply it to my case. Or are there other methods, not using forloop? Thanks and regards!

    Update: I would also like to simplify another, similar case where the only difference is that the directories do not run from 1, 2, ..., 23, but come in some arbitrary order such as 3, 2, 6, 9, ..., or are even a list of strings such as dira, dirc, dird, dirb, .... How can I turn the LaTeX code into loops then? Thanks!
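
    One way to cut the repetition, sketched with hypothetical helper macros (\imgcell and \imagerow are invented names): a row macro sidesteps the known fragility of running loops inside tabular, where every cell forms its own group. It also covers the arbitrary-order case in the update, because the argument need not be a number:

        % Hypothetical helpers; a sketch, not a drop-in answer
        \newcommand{\imgcell}[2]{\includegraphics[scale=2]{../../results/#1/#2.eps}}
        \newcommand{\imagerow}[1]{%
          #1 &
          \imgcell{#1}{0_1} & \imgcell{#1}{0_2} & \imgcell{#1}{0_3} &
          \imgcell{#1}{0_4} & \imgcell{#1}{0_5} &
          \imgcell{#1}{1_1} & \imgcell{#1}{1_2} & \imgcell{#1}{1_3} &
          \imgcell{#1}{1_4} & \imgcell{#1}{1_5} \\ \hline}

        % In the table body:
        %   \imagerow{1} \imagerow{2} ... \imagerow{23}
        % or, for arbitrary directory names:
        %   \imagerow{dira} \imagerow{dirc} \imagerow{dird}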

    Read the article

  • Can not parse table information from html document.

    - by Harikrishna
    I am parsing many HTML documents using the Html Agility Pack, and I want to extract the tabular information from each document. A document may contain any number of tables, but I only want the one table whose column headers are NAME, PHONE NO and ADDRESS. That table can be anywhere in the document: there may be ten tables, one of which contains many nested tables, and the table I want may itself be one of those nested tables. So I need to find the table by its column header names and then extract the information from it.

    Currently I can find the table with the column headers NAME, PHONE NO, ADDRESS and extract the information from it. What I am doing is: first I find all tables in a document with

        foreach (var table in doc.DocumentNode.Descendants("table"))

    then for each table I get its rows:

        var rows = table.Descendants("tr");

    and then for each row I check whether it contains the header names NAME, ADDRESS, PHONE NO; if it does, I skip that row and extract all information after it:

        foreach (var row in rows.Skip(rowNo))
        {
            var data = new List<string>();
            foreach (var column in row.Descendants("td"))
            {
                data.Add(properText);
            }
        }

    With this I can extract the information from most documents. But now the problem is that sometimes I cannot parse the information. For example, in some documents there are ten tables, and one of those tables has many nested tables inside it. From these nested tables I need to find the table whose column headers are NAME, ADDRESS, PHONE NO, wherever it is in the document, so that I can parse the information from it and skip the surrounding tabular information.

    Read the article

  • What scalability problems have you solved using a NoSQL data store?

    - by knorv
    NoSQL refers to non-relational data stores that break with the history of relational databases and ACID guarantees. Popular open source NoSQL data stores include:

    - Cassandra (tabular, written in Java, used by Facebook, Twitter, Digg, Rackspace, Mahalo and Reddit)
    - CouchDB (document, written in Erlang, used by Engine Yard and BBC)
    - Dynomite (key-value, written in C++, used by Powerset)
    - HBase (key-value, written in Java, used by Bing)
    - Hypertable (tabular, written in C++, used by Baidu)
    - Kai (key-value, written in Erlang)
    - MemcacheDB (key-value, written in C, used by Reddit)
    - MongoDB (document, written in C++, used by Sourceforge, Github, Electronic Arts and NY Times)
    - Neo4j (graph, written in Java, used by Swedish Universities)
    - Project Voldemort (key-value, written in Java, used by LinkedIn)
    - Redis (key-value, written in C, used by Engine Yard, Github and Craigslist)
    - Riak (key-value, written in Erlang, used by Comcast and Mochi Media)
    - Ringo (key-value, written in Erlang, used by Nokia)
    - Scalaris (key-value, written in Erlang, used by OnScale)
    - ThruDB (document, written in C++, used by JunkDepot.com)
    - Tokyo Cabinet/Tokyo Tyrant (key-value, written in C, used by Mixi.jp (Japanese social networking site))

    I'd like to know about specific problems you - the SO reader - have solved using data stores and what NoSQL data store you used. Questions:

    - What scalability problems have you used NoSQL data stores to solve?
    - What NoSQL data store did you use?
    - What database did you use before switching to a NoSQL data store?

    I'm looking for first-hand experiences, so please do not answer unless you have that.

    Read the article

  • Latex renewcommand not working properly

    - by Nazgulled
    Why is this not working:

        \documentclass[a4paper,10pt]{article}
        \usepackage{a4wide}
        \usepackage[T1]{fontenc}
        \usepackage[portuguese]{babel}
        \usepackage[latin1]{inputenc}
        \usepackage{indentfirst}
        \usepackage{listings}
        \usepackage{fancyhdr}
        \usepackage{url}
        \usepackage[compat2,a4paper,left=25mm,right=25mm,bottom=15mm,top=20mm]{geometry}
        \usepackage{color}
        \usepackage[colorlinks]{hyperref}
        \usepackage[pdftex]{graphicx}

        \renewcommand{\headrulewidth}{0.4pt}
        \renewcommand{\footrulewidth}{0.4pt}

        \pagestyle{fancy}
        \fancyhead[L]{\small Laboratórios de Informática III}
        \fancyhead[R]{\small Projecto 1 (Linguagem \textsf{C})}

        \lstset{
            basicstyle=\ttfamily\footnotesize,
            showstringspaces=false,
            frame=single,
            tabsize=4,
            breaklines=true,
        }

        \definecolor{Section1}{rgb}{0.09,0.21,0.36}
        \definecolor{Section2}{rgb}{0.21,0.37,0.56}
        \definecolor{Section3}{rgb}{0.30,0.50,0.74}

        \hypersetup{
            bookmarks=false,
            linkcolor=red,
            urlcolor=cyan,
        }

        \renewcommand{\section}[1]{\texorpdfstring{\color{green}#1}{#1}}

        \parskip=6pt

        \begin{document}

        \begin{titlepage}
        \begin{center}
        \includegraphics[width=5cm]{./logo.jpg}\\[1cm]
        \textsc{\LARGE Universidade do Minho}\\[1cm]
        \textsc{\large Licenciatura em Engenharia Informática\\Laboratórios de Informática III}\\[1.5cm]
        \rule{\linewidth}{0.5mm}\\[0.4cm]
        \huge{\textbf{\textsc{Relatório do Projecto 1 (Linguagem C)}}}
        \rule{\linewidth}{0.5mm}
        \vfill
        \begin{tabular}{c c}
            \includegraphics[width=3.5cm]{./nuno.jpg} & \includegraphics[width=3.5cm]{./ricardo.jpg} \\
            \textsc{\large{Nuno Mendes (51161)}} & \textsc{\large{Ricardo Amaral (48404)}} \\
        \end{tabular}
        \vfill
        \large{\today}
        \end{center}
        \end{titlepage}

        \tableofcontents
        \newpage

        \section{Introdução}
        Lorem ipsum...

        \newpage
        \appendix
        \section{\color{Section1}Diagrama das Estruturas de Dados}
        \begin{center}
        \includegraphics[width=16cm]{./Diagrama.pdf}
        \end{center}

        \end{document}

    It fails with:

        ! LaTeX Error: Something's wrong--perhaps a missing \item.

        See the LaTeX manual or LaTeX Companion for explanation.
        Type H <return> for immediate help.
        ...

        l.2 ...rline {1}\color {green}Teste}{3}{section.1}

    How can I make it work properly?

    Read the article

  • Customising Flex Datagrid or alternative solutions

    - by Martin
    I'm currently building an application that presents tabular data (fetched from a web service) and have squirted it into a datagrid - it seemed the most obvious way to present it on screen. I've now come across a few limitations in the datagrid and wonder how I might move forward. As a relative newcomer to Flex development I'm a little lost. A few things I am wanting to do: The data is logically split into groups, and I would like to be able to have subheadings in the grid whenever I move to a new group. I would also like to highlight individual cells based on their content relative to other values in the row - i.e. highlight the cell with the highest value in the row. Is this possible with the standard datagrid? I'm actually using the try-before-you-buy version of Flex Builder at the moment, but I have ordered Flex Builder 3 Pro - which is on its way to me. I understand there is an 'advanced datagrid' control in this version - perhaps that will support some of what I wish to do? Alternatively - is there another way of presenting custom tabular data?

    Read the article

  • using gsub to modify output of xtable command

    - by stevejb
    Hello,

        my.mat <- cbind(1:5, rnorm(5), 6:10, rnorm(5))
        colnames(my.mat) <- c("Turn", "Draw", "Turn", "Draw")
        print(xtable(my.mat))

    yields

        \begin{table}[ht]
        \begin{center}
        \begin{tabular}{rrrrr}
        \hline
          & Turn & Draw & Turn & Draw \\
        \hline
        1 & 1.00 & -0.72 & 6.00 & 0.91 \\
        2 & 2.00 & 0.57 & 7.00 & 0.56 \\
        3 & 3.00 & 1.08 & 8.00 & 0.55 \\
        4 & 4.00 & 0.95 & 9.00 & 0.46 \\
        5 & 5.00 & 1.94 & 10.00 & 1.06 \\
        \hline
        \end{tabular}
        \end{center}
        \end{table}

    I want to filter out the \begin{table} and \end{table} lines. I can do this using gsub, but how do I get the results of print(xtable(...)) into a variable? Thanks for the help, Stack Overflow R community!
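
    One possible route, sketched under the assumption that capturing the printed output as a character vector is acceptable: capture.output() collects what print() writes, one line per element, and the wrapper lines can then be dropped. (print.xtable also has a floating argument that omits the table environment altogether.)

        library(xtable)

        my.mat <- cbind(1:5, rnorm(5), 6:10, rnorm(5))
        colnames(my.mat) <- c("Turn", "Draw", "Turn", "Draw")

        # Capture the printed LaTeX as a character vector, one line per element
        out <- capture.output(print(xtable(my.mat)))

        # Drop the \begin{table} / \end{table} wrapper lines
        out <- out[!grepl("^\\\\(begin|end)\\{table\\}", out)]
        cat(out, sep = "\n")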

    Read the article

  • Consecutive Tables in Latex

    - by Tim
    Hi, I wonder how to place several tables consecutively in LaTeX. The page with the text right before the first table has a little space left, but not enough for the first table, so the first table gets placed at the top of the next page, although I use \begin{table}[!h] for it. The second table does not fit into the space left on the page after the first table, so I think I might use longtable for it, to span the rest of that page and the top of the next page. Similarly, I use longtable for the third table. The LaTeX code is as follows:

        ... % some text

        \begin{table}[!h]
        \caption{Table 1. \label{tab:1}}
        \begin{center}
        \begin{tabular}{c c}
        ...
        \end{tabular}
        \end{center}
        \end{table}

        \begin{center}
        \begin{longtable}{ c c }
        \caption{Table 2. \label{tab:2}}\\
        ...
        \end{longtable}
        \end{center}

        \begin{center}
        \begin{longtable}{ c c }
        \caption{Table 3. \label{tab:3}}\\
        ...
        \end{longtable}
        \end{center}

        ... % some text

    In the compiled PDF file it turns out that the order of the tables is messed up. The first table is placed after the second and third ones, and the second one spans the page with the text before the tables and the next page, with the third one following it. I would like to know how I can make the three tables appear consecutively, in order, with no space left blank between them or between the text and the tables. Or, if what I hope for is not possible, what is the best strategy then? Thanks and regards!

    EDIT: Removing [!h] does not make an improvement; the first table still comes after the second and the third.

    Read the article

  • LaTex: how does the include-command work?

    - by HH
    I supposed the \include command copy-pastes code at compilation time; that must be wrong, because the code stopped working. Please see the middle part of the code. I only copy-pasted the code to the file and added the \include command.

        $ cat results/frames.tex
        10.31 & 8.50 & 7.40 \\
        10.34 & 8.53 & 7.81 \\
        8.22 & 8.62 & 7.78 \\
        10.16 & 8.53 & 7.44 \\
        10.41 & 8.38 & 7.63 \\
        10.38 & 8.57 & 8.03 \\
        10.13 & 8.66 & 7.41 \\
        8.50 & 8.60 & 7.15 \\
        10.41 & 8.63 & 7.21 \\
        8.53 & 8.53 & 7.12 \\

    LaTeX code:

        \begin{table}
        \begin{tabular}{ | l | m | r |}
        \hline
        $t$ / s & $d_{1}$ / s & $d_{2}$ / s \\
        $\Delta h = 0,01 s$ & $\Delta d = 0,01 s$ & $\Delta d = 0,01 s$ \\
        \hline
        % I JUST COPIED THE CODE from here to the file, included.
        % It stopped working, why?
        \include{results/frames.tex}
        \hline
        $\pi (\frac{d_{1}}{2} - \frac{d_{2}}{2})$ & $2 \pi R h$ & $2 \pi r h$ \\
        \hline
        \end{tabular}
        \end{table}
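
    A hedged note on the likely cause: \include is meant for whole chapters at the top level of a document; it issues \clearpage around the file, so it cannot appear inside an environment such as tabular. \input, by contrast, performs plain file inclusion and is legal there:

        % Inside the tabular body, use plain file inclusion:
        \input{results/frames.tex}   % no \clearpage, works inside tabular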

    Read the article

  • Extract structured data from many MS Word files

    - by Mark
    I have ~160 MS Word files that contain structured data. The data is formatted identically across all files and resides in a tabular format. I'd like to extract the data into a database, XML or just an aggregate table without opening each of the files independently. Is there a tool or method I can use to extract this data?
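
    One possible method, sketched here with the python-docx library under the assumption that the files are (or can be converted to) .docx: walk every table in every file and write the cells out as CSV rows.

        # A sketch: extract all table cells from .docx files into one CSV.
        # python-docx reads .docx only, so legacy .doc files would need
        # converting first.
        import csv
        import glob

        from docx import Document

        with open("extracted.csv", "w", newline="") as out:
            writer = csv.writer(out)
            for path in glob.glob("*.docx"):
                doc = Document(path)
                for table in doc.tables:
                    for row in table.rows:
                        writer.writerow([cell.text.strip() for cell in row.cells])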

    Read the article

  • Using Microsoft Office 2007 with E-Business Suite Release 12

    - by Steven Chan
    Many products in the Oracle E-Business Suite offer optional integrations with Microsoft Office and Microsoft Projects.  For example, some EBS products can export tabular reports to Microsoft Excel.  Some EBS products integrate directly with Microsoft products, and others work through the Applications Desktop Integrator (WebADI and ADI) as an intermediary.These EBS integrations have historically been documented in their respective product-specific documentation.  In other words, if an EBS product in the Oracle Financials family supported an integration with, say, Microsoft Excel, it was up to the product team to document that in the Oracle Financials documentation.Some EBS systems administrators have found the process of hunting through the various product-specific documents for Office-related information to be a bit difficult.  In response to your Service Requests and emails, we've released a new document that consolidates and summarises all patching and configuration requirements for EBS products with MS Office integration points in a single place:Using Microsoft Office 2007 with Oracle E-Business Suite 11i and R12 (Note 1072807.1)

    Read the article

  • SQL Server 2012 content on Channel 9

    - by jamiet
    A mountain of SQL Server 2012 video content featuring Greg Low, Jonathan Kehayias, Joe Sack and Roger Doherty has just been released on Channel 9. Channel 9 has great support for tags and RSS feeds, so if you want to automatically download all of that content, simply add the following RSS feed: http://channel9.msdn.com/Tags/sql+server+2012/RSS to your podcast reader of choice and have fun learning about all the new features in SQL Server 2012, such as: AlwaysOn, Power View, SSDT, SSRS Data Alerts, SSAS Tabular Modelling, DAX improvements, MDS improvements, SSIS improvements, DQS, StreamInsight improvements, Data-Tier Apps (DACs), LocalDB, FileTable, Spatial improvements, T-SQL paging, Distributed Replay, XEvents improvements, ADO.Net Code-first, T-SQL improvements, Server roles, Partitioning improvements, and ColumnStore. Whew, quite a list! @jamiet

    Read the article

  • SQL Server 2012 edition comparison details are published

    - by DavidWimbush
    Interesting stuff, particularly if you're doing BI. BISM tabular and Power View will not be in Standard Edition, only in the new - presumably more expensive - Business Intelligence Edition. That kind of makes sense as you need a fairly pricey edition of SharePoint to really get all the benefits, but it's a shame there won't be some kind of limited version in Standard Edition. And Always On will be in Standard Edition but limited to 2 nodes. I really expected Always On to be Enterprise-only so this is a great decision. It allows those of us working at a more modest scale to benefit and raises the fault tolerance of SQL Server as a product to a new level.Read all about it here: http://www.microsoft.com/sqlserver/en/us/future-editions/sql2012-editions.aspx

    Read the article

< Previous Page | 2 3 4 5 6 7 8 9 10 11  | Next Page >