Search Results

Search found 28900 results on 1156 pages for 'sql 2005'.


  • Wrong shortcut icons on the desktop and in the programs menu

    - by Zenya
    When creating a program setup, I also put some documentation and readme files in it. However, after the program is installed, I get the wrong icons on the desktop and in the Programs menu. What I expected: I thought that for registered file types (such as pdf, txt, doc, etc.) I don't need to set icons manually in the setup project. Am I wrong? As far as I know, some other installers don't have this problem. Does anybody know how to fix this in Visual Studio? My environment is: Visual Studio 2005, C#, Windows XP Pro SP3.

    Read the article

  • ASP.NET, Visual Studio and Subversion - how to integrate?

    - by Michael Stum
    I use AnkhSVN with Visual Studio 2005 and 2008. Now, one thing that bugs me is that Ankh does not really work with ASP.NET sites: I cannot add them properly to a repository, and it won't detect changes, especially because the site is on a remote server accessed through FrontPage Extensions (File > Open Site). What are the alternatives? Does a better plug-in exist? Manually downloading the files through FTP and using TortoiseSVN or svn.exe is not really the level of integration I want :) I want to stay within the Visual Studio IDE when possible. Also, I do not control the remote server, so I cannot install anything on it, which means the whole change tracking/comparison to the repository has to be done on my machine.

    Read the article

  • How to initialise an array inside a struct without doing each element separately? (C++)

    - by Janet
    My questions are in the code, but basically I want to know how (or whether) I can do the two commented-out lines. I know I can do it in a constructor, but I don't want to!

        struct foo {
            int b[4];
        } boo;

        //boo.b[] = {7, 6, 5, 4}; // <- why doesn't this work? (syntax error : ']')
        //boo.b = {7, 6, 5, 4};   // <- or else this? (syntax error : '{')
        boo.b[0] = 7;             // <- doing it this way is annoying
        boo.b[1] = 6;             //  :
        boo.b[2] = 5;             //  :
        boo.b[3] = 4;             // <- doing it this way is annoying
        boo.b[4] = 3;             // <- why does this work!

    (Using: C++, Visual Studio 2005.)

    Read the article

  • Is it possible to manipulate the format on a DataGridView that is bound to a Data Source?

    - by Jack Johnstone
    I'm using SQL Server 2005 and Visual Studio 2008, C#. In the data source (the SQL Server data table) I use the date format mm/dd/yyyy; however, in a form's overview (DataGridView), users would like to see a completely different format, with year, week number and day number of week (yyww,d). I've created an algorithm for this transformation, but can I populate the affected cells with yyww,d instead of mm/dd/yyyy? And in that case, how would I do it? I guess I need to do it after the cells are populated, but before they are shown. The generic question is: how do I manipulate the format of data-source-bound DataGridView cells?
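
    A minimal sketch of how this could be done with the grid's CellFormatting event, which fires after a bound value is fetched but before the cell is painted. ToYearWeekDay stands in for the questioner's own transformation algorithm, and the column name is an assumption:

        // Wire up once, e.g. in the form's constructor:
        // dataGridView1.CellFormatting += dataGridView1_CellFormatting;

        private void dataGridView1_CellFormatting(object sender, DataGridViewCellFormattingEventArgs e)
        {
            if (dataGridView1.Columns[e.ColumnIndex].Name == "OrderDate" && e.Value is DateTime)
            {
                e.Value = ToYearWeekDay((DateTime)e.Value); // hypothetical helper producing "yyww,d"
                e.FormattingApplied = true;                 // suppress the default mm/dd/yyyy formatting
            }
        }

    Because only the displayed value changes, the underlying bound data stays in its original format.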

    Read the article

  • Does Visual Studio 2010 on x64 crash often? Or is it just on my PC?

    - by JK
    My VS 2010 crashes dozens of times a day. Compare that to 2008 and 2005, which were rock solid. Is 2010 known to be susceptible to crashing, or could it be my environment? I'm using x64 as a dev box for the first time. The only plug-in I have so far is Ankh. It crashes when doing different things; one repro I've noticed that always happens is pressing the key sequence Alt-F-S-Up (or any cursor key), which crashes it every time.

    Read the article

  • Select a database and, in that database, select a table, using C# and web.config

    - by syedsaleemss
    I'm using a C# .NET Windows Forms application. I have many databases created using SQL Server Management Studio 2005, and each database has several tables. I have a button that, when clicked, should allow me to select one database among the several, and in that database select a single table. Then I need to display the contents of the selected table in a DataGridView. I came to know that it can be done using web.config. How can I achieve this? It goes like this: a) select a database; b) in that database, select a table; c) display the contents in a DataGridView.
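
    A hedged sketch (control and variable names are illustrative): SqlConnection.GetSchema can list the databases and tables for the pick lists, and a SqlDataAdapter can then fill the grid. The connection string would normally be read from the application's config file rather than hard-coded:

        using System.Data;
        using System.Data.SqlClient;

        // Assumption: local default instance; in practice read this from the config file.
        const string connStr = "Data Source=.;Integrated Security=True";

        void LoadTable(string database, string table)
        {
            using (var conn = new SqlConnection(connStr))
            {
                conn.Open();
                DataTable databases = conn.GetSchema("Databases"); // a) one row per database -> ComboBox
                conn.ChangeDatabase(database);
                DataTable tables = conn.GetSchema("Tables");       // b) one row per table -> ComboBox

                using (var adapter = new SqlDataAdapter("SELECT * FROM [" + table + "]", conn))
                {
                    var dt = new DataTable();
                    adapter.Fill(dt);
                    dataGridView1.DataSource = dt;                 // c) show the contents in the grid
                }
            }
        }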

    Read the article

  • Best place to store .NET assemblies

    - by stackoverflow
    To give a scenario, let us simply assume an engine that loads plug-ins and exposes the features in each plug-in. User 1 uploads a plug-in which implements Act 1; User 2 uploads a plug-in which implements Act 2. A plug-in in this case is a .NET assembly. Now, in this scenario, if we have to store all the assemblies, what would be the best place? The plug-ins would also need to be versioned so that a particular version can be executed. Further, consider that the plug-in engine is installed on multiple machines, or on the same machine as different instances (similar to SQL Server). Would a centralized database (SQL Server 2005) with a table to store all the assemblies be a good idea (centralized backup, etc.)? Assembly size would be around 500 KB to 1 MB.
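
    If the database route is taken, a hedged sketch of the load path (table and column names are made up): Assembly.Load accepts raw bytes, so each engine instance can pull a specific version straight from the central table without staging files on disk.

        using System.Data.SqlClient;
        using System.Reflection;

        static Assembly LoadPlugin(string connStr, string name, int version)
        {
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(
                "SELECT AssemblyBytes FROM Plugins WHERE Name = @n AND Version = @v", conn))
            {
                cmd.Parameters.AddWithValue("@n", name);
                cmd.Parameters.AddWithValue("@v", version);
                conn.Open();
                byte[] bytes = (byte[])cmd.ExecuteScalar(); // 500 KB - 1 MB per row is manageable
                return Assembly.Load(bytes);                // loads without locking a file on disk
            }
        }

    One caveat worth noting: assemblies loaded this way cannot be unloaded without recycling their AppDomain, which matters for a long-running engine.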

    Read the article

  • Two-line label with expression

    - by metasequoia
    I'd like to write an axis label over two lines with an expression() statement. However, plotmath and expression won't allow this (e.g. subscripts appear on the far right). I found this discussion circa 2005 of a similar issue, but the workaround it offers doesn't translate to my application in ggplot2. A recent question addressed a different permutation of multi-line expression statements, but again the workaround provided doesn't apply here. Example:

        p <- ggplot(mtcars, aes(x = wt, y = mpg)) +
          geom_point() +
          xlab(expression(paste("A long string of text goes here just for the purpose \n of illustrating my point Weight "[reported])))
        try(ggsave(plot = p, filename = <some file>, height = 4, width = 6))

    This yields an image where the subscript "reported" is kicked out to the right, when I'd like it to sit next to the previous word.

    Read the article

  • Long waiting time in linking

    - by ccanan
    Hi, here is the situation. I am using Visual Studio 2005. The solution contains lots of projects, 34 in all, and the start-up project depends on the others. In the linking part, it waits a long time before the real linking starts. I am pretty sure it's because of the many dependent projects: when I use a solution with 10 of the 34 projects (keeping the other projects as headers & libs), linking starts instantly. So, does anyone have any idea how I can reduce the waiting time? Thanks.

    Read the article

  • VSS Analyze - Access to file [filename] is denied

    - by AJ
    Our VSS database appears to be horribly out of shape. I've been trying to archive and run "analyze", and I keep getting "Access to file [filename] is denied. The file may be read-only, may be in use, or you may not have permission to write to the file. Correct this problem and run analyze again." No one is logged into SourceSafe (including myself), and I'm running the analyze utility from the VS command prompt as follows: analyze -v -f -bbackuppath databasepath. I get similar errors if I try to create project archives from the ssadmin tool. The database is on a network share, and we're running VSS 2005 v8.0.50727.42. I'd love to be able to do this, as it would be a first step in a move away from VSS. Thanks in advance. More info: every time I run analyze, the file that spawns the access-denied message changes. It's almost as if running analyze unlocks that file, so that on the next run I get through to the next one.

    Read the article

  • C# Struct No Parameterless Constructor? See what I need to accomplish

    - by Changeling
    I am using a struct to pass to an unmanaged DLL, like so:

        [StructLayout(LayoutKind.Sequential)]
        public struct valTable
        {
            public byte type;
            public byte map;
            public byte spare1;
            public byte spare2;
            public int par;
            public int min;
            public byte[] name;

            public valTable()
            {
                name = new byte[24];
            }
        }

    The code above will not compile, because VS 2005 complains that "Structs cannot contain explicit parameterless constructors". In order to pass this struct to my DLL, I have to pass an array of structs, like so:

        valTable[] val = new valTable[281];

    What I would like is that when I say new, the constructor is called and creates the array of bytes, as I am trying to demonstrate, because the DLL is looking for that byte array of size 24 in each element. How can I accomplish this?
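
    A hedged sketch of one possible approach (mine, not the questioner's): since the struct is destined for unmanaged code, the 24-byte field can be declared with a ByValArray marshaling attribute, so the interop marshaler lays it out as a fixed-size block and no parameterless constructor is needed.

        using System.Runtime.InteropServices;

        [StructLayout(LayoutKind.Sequential)]
        public struct valTable
        {
            public byte type;
            public byte map;
            public byte spare1;
            public byte spare2;
            public int par;
            public int min;

            // Marshaled as a fixed 24-byte block embedded in the struct,
            // matching what the unmanaged DLL expects.
            [MarshalAs(UnmanagedType.ByValArray, SizeConst = 24)]
            public byte[] name;
        }

    Alternatively, the elements can be initialized in a loop before the call: for (int i = 0; i < val.Length; i++) val[i].name = new byte[24];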

    Read the article

  • Can I have a type that's both covariant and contravariant, i.e. fully fungible/changeable with sub

    - by Water Cooler v2
    Just a stupid question. I could try it out in two minutes, really. It's just that I have 1 GB RAM and have already got two instances of VS 2010 open on my desktop, plus an instance of VS 2005, too. Opening another instance of VS 2010 would be overkill. Can I have a type (for now forgetting its semantics) that is covariant as well as contravariant? For example:

        public interface Foo<in out T>
        {
            void DoFooWith(T arg);
        }

    Off to Eric Lippert's blog for the meat and potatoes of variance in C# 4.0, as there's little else anywhere that covers adequate ground on the subject.
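
    For reference, a minimal sketch (mine, not from the question) of what the C# 4.0 compiler does accept: a type parameter can be marked either in or out, but not both, and the modifier restricts where the parameter may appear.

        // Contravariant: T may appear only in input positions.
        public interface IConsumer<in T>
        {
            void DoFooWith(T arg);
        }

        // Covariant: T may appear only in output positions.
        public interface IProducer<out T>
        {
            T Produce();
        }

        // public interface Foo<in out T> { ... } // does not compile: a single
        // type parameter cannot be both covariant and contravariant.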

    Read the article

  • What's the simplest way to retrieve all data from a table and save it back in .NET 3.5?

    - by zoman
    I have a number of tables containing some basic (business-related) mapping data. What's the simplest way to load the data from those tables and then save the modified values back? (All data should be replaced in the tables.) An ORM is out of the question, as I would like to avoid creating domain objects for each table. The actual editing of the data is not an issue (it is exported into Excel, where the data is edited, and then the file is uploaded with the modified data). The technology is .NET 3.5 (ASP.NET MVC) and SQL Server 2005. Thanks.
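
    A hedged sketch of one low-ceremony approach (names are illustrative): read each table into an untyped DataTable, then replace the contents by deleting the old rows and bulk-inserting the new ones inside a transaction.

        using System.Data;
        using System.Data.SqlClient;

        static DataTable LoadTable(string connStr, string tableName)
        {
            using (var conn = new SqlConnection(connStr))
            using (var adapter = new SqlDataAdapter("SELECT * FROM " + tableName, conn))
            {
                var dt = new DataTable(tableName);
                adapter.Fill(dt); // schema and data come back without any domain objects
                return dt;
            }
        }

        static void ReplaceTable(string connStr, DataTable dt)
        {
            using (var conn = new SqlConnection(connStr))
            {
                conn.Open();
                using (var tx = conn.BeginTransaction())
                {
                    using (var cmd = new SqlCommand("DELETE FROM " + dt.TableName, conn, tx))
                        cmd.ExecuteNonQuery(); // clear first: "all data should be replaced"

                    using (var bulk = new SqlBulkCopy(conn, SqlBulkCopyOptions.Default, tx))
                    {
                        bulk.DestinationTableName = dt.TableName;
                        bulk.WriteToServer(dt); // push the edited rows back in one shot
                    }
                    tx.Commit();
                }
            }
        }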

    Read the article

  • Why is this legal? C++ typedef func

    - by acidzombie24
    I did this in MSVC 2005:

        typedef void (*cleanup_t)();

        void func(cleanup_t clean)
        {
            cleanup_t();
        }

    Why does this compile and not give me a warning? OK, it gave me an unreferenced-formal-parameter warning, but originally I did this when clean was in a class, so there was no unreferenced-formal-parameter warning when this code gave me problems. What is cleanup_t(); really doing, and what is the point? Now for laughs I tried int() and that worked also.

    Read the article

  • UI to allow 700+ multiple choices

    - by Refracted Paladin
    In my desktop .NET application (written for internal use), I need to allow my users to apply diagnoses to a Member Plan. There are currently 700 in the system, and the number is growing. I need to allow users to add multiple diagnoses at once. I currently allow this through a combo check-list box. This works, but it is INSANELY unwieldy for both myself and the users. What I am looking for is how I could go about displaying these to the users. Ideally I would need to show two criteria for each item as well: Diagnosis Name and Diagnosis Code. Ideas? How would you tackle this? I am using .NET 3.5 SP1 and SQL 2005 for the backend. I don't care whether the solution is WPF or WinForms.

    Read the article

  • .NET CF 2.0: Stream implements IDisposable ... kind of?

    - by mvanbem
    I've run into something odd in a .NET CF 2.0 project for Pocket PC 2003 (Visual Studio 2005). I was dealing with a System.IO.Stream object and found that the IDE wouldn't auto-complete the Dispose() method. I typed it in manually and received:

        'System.IO.Stream.Dispose(bool)' is inaccessible due to its protection level

    The error is referring to the protected Dispose(bool) method; Dispose() is either private or not present.

    Question 1: How is this possible? Stream implements IDisposable:

        public abstract class Stream : MarshalByRefObject, IDisposable

    ...and IDisposable requires a Dispose() method:

        public interface IDisposable
        {
            void Dispose();
        }

    I know the compiler won't let me get away with that in my code.

    Question 2: Will I cause problems by working around it and disposing my streams directly?

        IDisposable idisp = someStream;
        idisp.Dispose();

    The implicit cast is accepted by the compiler.
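
    A hedged note with a sketch (mine, not from the question): one way a type can satisfy IDisposable without exposing a public Dispose() is explicit interface implementation, which would be consistent with what the questioner observes, and it is also why the IDisposable-reference workaround compiles.

        using System;

        class ExplicitExample : IDisposable
        {
            // Explicit implementation: no public Dispose() on the class itself.
            void IDisposable.Dispose()
            {
                // cleanup would go here
            }
        }

        class Demo
        {
            static void Main()
            {
                var ex = new ExplicitExample();
                // ex.Dispose();               // does not compile
                ((IDisposable)ex).Dispose();   // compiles: reached via the interface
            }
        }

    A using block works for the same reason, since it disposes through the interface.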

    Read the article

  • Programmatically Setting the Version of a Windows Service in the ProjectInstaller

    - by user302004
    I have a Windows Service created in Visual Studio 2005 in C#. I have a setup project and a ProjectInstaller class. I also have code to programmatically get the version from the AssemblyFileVersionAttribute. I need to figure out where to set the version I've obtained (and where this code should go). I tried placing it in the InitializeComponent method in ProjectInstaller.Designer.cs and appending the version to serviceInstaller1.DisplayName and serviceInstaller1.ServiceName. This didn't work, and you're not supposed to modify the contents of that method anyway. Any ideas?
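
    A hedged sketch (mine, not a confirmed fix): designer-generated code should be left alone, but the ProjectInstaller constructor runs right after InitializeComponent, so the designer-created installer can be adjusted there instead.

        using System;
        using System.ComponentModel;
        using System.Configuration.Install;
        using System.Reflection;

        [RunInstaller(true)]
        public partial class ProjectInstaller : Installer
        {
            public ProjectInstaller()
            {
                InitializeComponent();

                var attr = (AssemblyFileVersionAttribute)Attribute.GetCustomAttribute(
                    Assembly.GetExecutingAssembly(), typeof(AssemblyFileVersionAttribute));

                // Appending to DisplayName only affects what the Services MMC shows.
                // Changing ServiceName also changes the key used to start/stop the
                // service, so that is riskier and usually best left alone.
                serviceInstaller1.DisplayName += " v" + attr.Version;
            }
        }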

    Read the article

  • ASP: using existing Crystal Report with date parameter

    - by eric3141
    I have an existing Crystal Report done in Crystal Reports version 9. I need to display it via an ASP.NET website created in Visual Studio 2008. I have put the Crystal data source and viewer controls on the design page and configured them to use the report, but I cannot figure out how to pass a date to the report, which uses it as an input parameter for a SQL Server 2005 stored procedure. I have tried putting a calendar control on the design page, but I don't know how to use it to pass the date parameter. Thanks in advance for any help. Eric
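
    A hedged sketch using the ReportDocument object model (the report path and parameter name are assumptions): the Calendar control's SelectedDate can be handed to SetParameterValue before the report is bound to the viewer.

        using CrystalDecisions.CrystalReports.Engine;

        protected void Page_Load(object sender, EventArgs e)
        {
            var report = new ReportDocument();
            report.Load(Server.MapPath("~/Reports/MyReport.rpt")); // path is illustrative

            // The stored-procedure parameter as it appears in the report's
            // parameter fields; "@StartDate" is illustrative.
            report.SetParameterValue("@StartDate", Calendar1.SelectedDate);

            CrystalReportViewer1.ReportSource = report;
        }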

    Read the article

  • Java Menu Dimension

    - by ikurtz
    Greetings. I am trying to learn Java/Swing; today is my first day. I have been able to set up a menu in my test application, but the items occupy very little space, i.e. they are narrow. How do I go about extending the amount of space they use? A screenshot can be viewed at the following URL: http://www.freeimagehosting.net/image.php?087aa4c9dc.jpg I am studying Teach Yourself Java 6 in 21 Days (5th Ed.), Java Swing (2nd Edition, 2002) and Teach Yourself Programming With Java in 24 Hours (4th Edition, 2005), but none of them sheds any light on this issue. Thank you for your time.

    Read the article

  • Application_Start in Global.asax

    - by Zerotoinfinite
    Hi experts, I am developing my application in ASP.NET 3.5 and SQL Server 2005, and I want to record visitor info in my database: once a visitor enters my website, I'll insert his browser details into the database. (It's not necessary that the visitor log in to my website.) Now I am confused about where to put my code. If I put the insert function in every Page_Load, then it will execute on every page and I'll not be able to get the exact number of visitors to my website. Shall I go with Application_Start in Global.asax? Please help.
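
    One hedged note with a sketch (LogVisitor is a hypothetical helper): Application_Start fires once per application start, not once per visitor, so Session_Start in Global.asax is the more natural per-visitor hook.

        void Session_Start(object sender, EventArgs e)
        {
            // Runs once per new visitor session, regardless of which page is hit first.
            HttpRequest req = HttpContext.Current.Request;
            LogVisitor(req.Browser.Browser, req.Browser.Version, req.UserHostAddress);
        }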

    Read the article

  • Issue with InstallShield

    - by Alnahas
    I use InstallShield 2009 to deploy a VS 2005 project with a SQL Express 2005 DB. I put in my exe and DB files, and I typed "/qn SQLAUTOSTART=1 ADDLOCAL=ALL DISABLENETWORKPROTOCOLS=1" on the command line to make a silent install. My problem is that after I build this project and try using it, it only works with one click if the computer already has the requirements. If the computer needs these requirements installed, the user must click the setup icon again to finish the setup: a first click to install the requirements, and a second click to install my project. I need the progress to continue without stopping until all of the installation is done (both the needed requirements and my project).

    Read the article

  • How to sum the values of 2 totals in different tables (Reporting Services)?

    - by dewacorp.alliances
    Hi there. In the report design, I have 2 tables (Current and Proposed) with this structure:

        Current
        Parameter | Value | Rate | Total Value
        ...

        Proposed
        Parameter | Value | Rate | Total Value
        ...

    At the bottom of each table (the table footer) I have a "Total:", which is a sum of the Total field. I call these textboxes txtbxCurrent and txtbxProposed, and they are already formatted as currency. This part is running well. But now I need the total of txtbxCurrent and txtbxProposed combined. How do I do this? Can I take the value of these or not? BTW, I am using MS SQL Server 2005 (ReportViewer - client). Also, here is what my dataset looks like:

        RecID | Type     | Parameter | Value | Rate | Total
        1     | CURRENT  | 'Param1'  | 100   | 0.1  | 10
        1     | CURRENT  | 'Param2'  | 200   | 0.2  | 10
        1     | PROPOSED | 'Param1'  | 100   | 0.2  | 20
        1     | PROPOSED | 'Param2'  | 200   | 0.2  | 20

    Thanks

    Read the article

  • Problem with the row count transform

    - by abkl
    Hi, I recently deployed an SSIS package (developed on the 2005 version, on my local server) to a pre-production environment for testing. I have used the Row Count transform to get a count of good/bad records. It works fine on my local system; however, when I deploy it on the pre-prod server, the Row Count does not work (as in, it does not recognize the variables I have assigned to the relevant transform -- there is no drop-down available in the variables attribute part). I tried deleting and adding a new transform: no luck. Strangely, this also fails for the other packages deployed on the same server (I tried this out by dropping an RC transform onto an existing package -- same problem). Any suggestions? Thanks a tonne.

    Read the article

  • DMV {dm_os_ring_buffers} - Queries to help pinpoint current Issues / usual usage patterns

    - by NeilHambly
    I've been running some queries (below) to help me identify when I have had time-sensitive performance issues around memory/CPU. I didn't want to load additional overhead onto the system (unless absolutely necessary) with traces or Profiler; naturally we have various methods for this: Perfmon counters, DBCC, DMVs, etc. One quick way I like is to run a few DMV queries (normally back in seconds) to help me find those RECENT specific time periods when the system has been substantially changed in some way, using the DMV dm_os_ring_buffers.

    This one helps me identify when I'm experiencing timeout errors (1222); modify the code to look for other errors, as highlighted below:

        DECLARE @ts_now BIGINT, @dt_max BIGINT, @dt_min BIGINT

        SELECT @ts_now = cpu_ticks / CONVERT(FLOAT, cpu_ticks_in_ms)
        FROM sys.dm_os_sys_info

        SELECT @dt_max = MAX(timestamp), @dt_min = MIN(timestamp)
        FROM sys.dm_os_ring_buffers
        WHERE ring_buffer_type = N'RING_BUFFER_EXCEPTION'

        SELECT
             record_id
            ,DATEADD(ms, -1 * (@ts_now - [timestamp]), GETDATE()) AS EventTime
            ,y.Error
            ,UserDefined
            ,b.description AS NormalizedText
        FROM
            (
            SELECT
                record.value('(./Record/@id)[1]', 'int')                   AS record_id,
                record.value('(./Record/Exception/Error)[1]', 'int')       AS Error,
                record.value('(./Record/Exception/UserDefined)[1]', 'int') AS UserDefined,
                TIMESTAMP
            FROM
                (
                SELECT TIMESTAMP, CONVERT(XML, record) AS record
                FROM sys.dm_os_ring_buffers
                WHERE ring_buffer_type = N'RING_BUFFER_EXCEPTION'
                AND record LIKE '% %'
                ) AS x
            ) AS y
        INNER JOIN sys.sysmessages b ON y.Error = b.error
        WHERE b.msglangid = 1033 AND y.Error = 1222  -- change 1222 to look for other errors
        ORDER BY record_id DESC

    Sample output:

        record_id  EventTime         Error  UserDefined  NormalizedText
        15199195   18/03/2010 14:00  1222   0            Lock request time out period exceeded.
        15199194   18/03/2010 14:00  1222   0            Lock request time out period exceeded.
        15199193   18/03/2010 14:00  1222   0            Lock request time out period exceeded.
        15199192   18/03/2010 14:00  1222   0            Lock request time out period exceeded.
        15199191   18/03/2010 14:00  1222   0            Lock request time out period exceeded.

    This one helps me identify when I have unusually high processing (> 50%) or page faults:

        SELECT
            record.value('(./Record/@id)[1]', 'int') AS record_id,
            record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int')         AS SystemIdle,
            record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS SQLProcessUtilization,
            record.value('(./Record/SchedulerMonitorEvent/SystemHealth/UserModeTime)[1]', 'bigint')    AS UserModeTime,
            record.value('(./Record/SchedulerMonitorEvent/SystemHealth/KernelModeTime)[1]', 'bigint')  AS KernelModeTime,
            record.value('(./Record/SchedulerMonitorEvent/SystemHealth/PageFaults)[1]', 'bigint')      AS PageFaults,
            record.value('(./Record/SchedulerMonitorEvent/SystemHealth/WorkingSetDelta)[1]', 'bigint') AS WorkingSetDelta,
            record.value('(./Record/SchedulerMonitorEvent/SystemHealth/MemoryUtilization)[1]', 'int')  AS MemoryUtilization,
            TIMESTAMP
        FROM (
                SELECT TIMESTAMP, CONVERT(XML, record) AS record
                FROM sys.dm_os_ring_buffers
                WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
                AND record LIKE '% %'
             ) AS x

    Example, showing entries > 50% SQL CPU:

        record_id  SystemIdle  SQLProcessUtilization  UserModeTime  KernelModeTime  PageFaults  WorkingSetDelta  MemoryUtilization  TIMESTAMP
        111916     66          29                     36718750      1374843750      21333       -40960           100                7991061289
        111917     54          41                     50156250      1954062500      26914       -28672           100                7991121290
        111918     57          39                     42968750      1838437500      30096       20480            100                7991181290
        111919     41          53                     43906250      2530156250      22088       -4096            100                7991241307
        111920     48          45                     40937500      2124062500      26395       8192             100                7991301310
        111921     52          43                     35625000      2052812500      21996       155648           100                7991361311
        111922     40          55                     36875000      2637343750      33355       -262144          100                7991421311
        111923     36          58                     44843750      2786562500      47019       28672            100                7991481311
        111924     31          64                     53437500      3046562500      31027       61440            100                7991541314
        111925     36          57                     43906250      2711250000      37074       -8192            100                7991601317
        111926     52          43                     43437500      2060156250      29176       20480            100                7991661318
        111927     71          24                     33750000      1141250000      14478       16384            100                7991721320
        111928     71          23                     34531250      1116250000      12711       -20480           100                7991781320
        111929     53          36                     46562500      1714062500      26684       200704           100                7991841323

    Finally, one to provide some understanding of the level of memory state changes that are occurring:

        SELECT
            record.value('(./Record/@id)[1]', 'int')                                   AS 'record_id',
            record.value('(./Record/ResourceMonitor/Notification)[1]', 'VARCHAR(100)') AS 'Notification',
            record.value('(./Record/ResourceMonitor/Indicators)[1]', 'int')            AS 'Indicators',
            record.value('(./Record/ResourceMonitor/Effect/@state)[1]', 'VARCHAR(100)') + ' - '
              + record.value('(./Record/ResourceMonitor/Effect/@reversed)[1]', 'VARCHAR(100)') + ' - '
              + record.value('(./Record/ResourceMonitor/Effect)[1]', 'VARCHAR(100)')   AS 'APPLY-HIGHPM',
            record.value('(./Record/ResourceMonitor/Effect/@state)[2]', 'VARCHAR(100)') + ' - '
              + record.value('(./Record/ResourceMonitor/Effect/@reversed)[2]', 'VARCHAR(100)') + ' - '
              + record.value('(./Record/ResourceMonitor/Effect)[2]', 'VARCHAR(100)')   AS 'APPLY-HIGHPM',
            record.value('(./Record/ResourceMonitor/Effect/@state)[3]', 'VARCHAR(100)') + ' - '
              + record.value('(./Record/ResourceMonitor/Effect/@reversed)[3]', 'VARCHAR(100)') + ' - '
              + record.value('(./Record/ResourceMonitor/Effect)[3]', 'VARCHAR(100)')   AS 'REVERT_HIGHPM',
            record.value('(./Record/MemoryNode/ReservedMemory)[1]', 'int')             AS 'ReservedMemory',
            record.value('(./Record/MemoryNode/CommittedMemory)[1]', 'int')            AS 'CommittedMemory',
            record.value('(./Record/MemoryNode/SharedMemory)[1]', 'int')               AS 'SharedMemory',
            record.value('(./Record/MemoryNode/AWEMemory)[1]', 'int')                  AS 'AWEMemory',
            record.value('(./Record/MemoryNode/SinglePagesMemory)[1]', 'int')          AS 'SinglePagesMemory',
            record.value('(./Record/MemoryNode/CachedMemory)[1]', 'int')               AS 'CachedMemory',
            record.value('(./Record/MemoryRecord/MemoryUtilization)[1]', 'int')        AS 'MemoryUtilization',
            record.value('(./Record/MemoryRecord/TotalPhysicalMemory)[1]', 'int')      AS 'TotalPhysicalMemory',
            record.value('(./Record/MemoryRecord/AvailablePhysicalMemory)[1]', 'int')  AS 'AvailablePhysicalMemory',
            record.value('(./Record/MemoryRecord/TotalPageFile)[1]', 'int')            AS 'TotalPageFile',
            record.value('(./Record/MemoryRecord/AvailablePageFile)[1]', 'int')        AS 'AvailablePageFile',
            record.value('(./Record/MemoryRecord/TotalVirtualAddressSpace)[1]', 'bigint')     AS 'TotalVirtualAddressSpace',
            record.value('(./Record/MemoryRecord/AvailableVirtualAddressSpace)[1]', 'bigint') AS 'AvailableVirtualAddressSpace',
            record.value('(./Record/MemoryRecord/AvailableExtendedVirtualAddressSpace)[1]', 'bigint') AS 'AvailableExtendedVirtualAddressSpace',
            TIMESTAMP
        FROM (
                SELECT TIMESTAMP, CONVERT(XML, record) AS record
                FROM sys.dm_os_ring_buffers
                WHERE ring_buffer_type = N'RING_BUFFER_RESOURCE_MONITOR'
                AND record LIKE '% %'
             ) AS x

    Read the article

  • Laissez les bons temps rouler! (Microsoft BI Conference 2010)

    - by smisner
    "Laissez les bons temps rouler" is a Cajun phrase that I heard frequently when I lived in New Orleans in the mid-1990s. It means "Let the good times roll!" and encapsulates a feeling of happy expectation. As I met with many of my peers and new acquaintances at the Microsoft BI Conference last week, this phrase kept running through my mind as people spoke about their plans in their respective businesses, the benefits and opportunities that the recent releases in the BI stack are providing, and their expectations about the future of the BI stack. Notwithstanding some jabs here and there to point out the platform is neither perfect now nor will be anytime soon (along with admissions that the competitors are also not perfect), and notwithstanding several missteps by the event organizers (which I don't care to enumerate), the overarching mood at the conference was positive. It was a refreshing change from the doom and gloom hovering over several conferences that I attended in 2009. Although many people expect economic hardships to continue over the coming year or so, everyone I know in the BI field is busier than ever and expects to stay busy for quite a while. Self-Service BI Self-service was definitely a theme of the BI conference. In the keynote, Ted Kummert opened with a look back to a fairy tale vision of self-service BI that he told in 2008. At that time, the fairy tale future was a time when "every end user was able to use BI technologies within their job in order to move forward more effectively" and transitioned to the present time in which SQL Server 2008 R2, Office 2010, and SharePoint 2010 are available to deliver managed self-service BI. This set of technologies is presumably poised to address the needs of the 80% of users that Kummert said do not use BI today. He proceeded to outline a series of activities that users ought to be able to do themselves--from simple changes to a report like formatting or an addtional data visualization to integration of an additional data source. The keynote then continued with a series of demonstrations of both current and future technology in support of self-service BI. Some highlights that interested me: PowerPivot, of course, is the flagship product for self-service BI in the Microsoft BI stack. In the TechEd keynote, which was open to the BI conference attendees, Amir Netz (twitter) impressed the audience by demonstrating interactivity with a workbook containing 100 million rows. He upped the ante at the BI keynote with his demonstration of a future-state PowerPivot workbook containing over 2 billion records. It's important to note that this volume of data is being processed by a server engine, and not in the PowerPivot client engine. (Yes, I think it's impressive, but none of my clients are typically wrangling with 2 billion records at a time. Maybe they're thinking too small. This ability to work quickly with large data sets has greater implications for BI solutions than for self-service BI, in my opinion.) Amir also demonstrated KPIs for the future PowerPivot, which appeared to be easier to implement than in any other Microsoft product that supports KPIs, apart from simple KPIs in SharePoint. (My initial reaction is that we have one more place to build KPIs. Great. It's confusing enough. I haven't seen how well those KPIs integrate with other BI tools, which will be important for adoption.) One more PowerPivot feature that Amir showed was a graphical display of the lineage for calculations. 
(This is hugely practical, especially if you build up calculations incrementally. You can more easily follow the logic from calculation to calculation. Furthermore, if you need to make a change to one calculation, you can assess the impact on other calculations.) Another product demonstration will be available within the next 30 days--Pivot for Reporting Services. If you haven't seen this technology yet, check it out at www.getpivot.com. (It definitely has a wow factor, but I'm skeptical about its practicality. However, I'm looking forward to trying it out with data that I understand.) Michael Tejedor (twitter) demonstrated a feature that I think is really interesting and not emphasized nearly enough--overshadowed by PowerPivot, no doubt. That feature is the Microsoft Business Intelligence Indexing Connector, which enables search of the content of Excel workbooks and Reporting Services reports. (This capability existed in MOSS 2007, but was more cumbersome to implement. The search results in SharePoint 2010 are not only cooler, but more useful by describing whether the content is found in a table or a chart, for example.) This may yet be the dawning of the age of self-service BI - a phrase I've heard repeated from time to time over the last decade - but I think BI professionals are likely to stay busy for a long while, and need not start looking for a new line of work. Kummert repeatedly referenced strategic BI solutions in contrast to self-service BI to emphasize that self-service BI is not a replacement for the services that BI professionals provide. After all, self-service BI does not appear magically on user desktops (or whatever device they want to use). A supporting infrastructure is necessary, and grows in complexity in proportion to the need to simplify BI for users. It's one thing to hear the party line touted by Microsoft employees at the BI keynote, but it's another to hear from the people who are responsible for implementing and supporting it within an organization. Rob Collie (blog | twitter), Kasper de Jonge (blog | twitter), Vidas Matelis (site | twitter), and I were invited to join Andrew Brust (blog | twitter) as he led a Birds of a Feather session at TechEd entitled "PowerPivot: Is It the BI Deal-Changer for Developers and IT Pros?" I would single out the prevailing concern in this session as the issue of control. On one side of this issue were those who were concerned that they would lose control once PowerPivot is implemented. On the other side were those who believed that data should be freely accessible to users in PowerPivot, and even acknowledgment that users would get the data they want even if it meant they would have to manually enter into a workbook to have it ready for analysis. For another viewpoint on how PowerPivot played out at the conference, see Rob Collie's observations. Collaborative BI I have been intrigued by the notion of collaborative BI for a very long time. Before I discovered BI, I was a Lotus Notes developer and later a manager of developers, working in a software company that enabled collaboration in the legal industry. Not only did I help create collaborative systems for our clients, I created a complete project management from the ground up to collaboratively manage our custom development work. In that case, collaboration involved my team, my client contacts, and me. I was also able to produce my own BI from that system as well, but didn't know that's what I was doing at the time. 
Only in recent years has SharePoint begun to catch up with the capabilities that I had with Lotus Notes more than a decade ago. Eventually, I had the opportunity at that job to formally investigate BI as another product offering for our software, and the rest - as they say - is history. I built my first data warehouse with Scott Cameron (who has also ventured into the authoring world by writing Analysis Services 2008 Step by Step and was at the BI Conference last week where I got to reminisce with him for a bit) and that began a career that I never imagined at the time. Fast forward to 2010, and I'm still lauding the virtues of collaborative BI, if only the tools will catch up to my vision! Thus, I was anxious to see what Donald Farmer (blog | twitter) and Rita Sallam of Gartner had to say on the subject in their session "Collaborative Decision Making." As I suspected, the tools aren't quite there yet, but the vendors are moving in the right direction. One thing I liked about this session was a non-Microsoft perspective of the state of the industry with regard to collaborative BI. In addition, this session included a better demonstration of SharePoint collaborative BI capabilities than appeared in the BI keynote. Check out the video in the link to the session to see the demonstration. One of the use cases that was demonstrated was linking from information to a person, because, as Donald put it, "People don't trust data, they trust people." The Microsoft BI Stack in General A question I hear all the time from students when I'm teaching is how to know what tools to use when there is overlap between products in the BI stack. I've never taken the time to codify my thoughts on the subject, but saw that my friend Dan Bulos provided good insight on this topic from a variety of perspectives in his session, "So Many BI Tools, So Little Time." I thought one of his best points was that ideally you should be able to design in your tool of choice, and then deploy to your tool of choice. Unfortunately, the ideal is yet to become real across the platform. The closest we come is with the RDL in Reporting Services which can be produced from two different tools (Report Builder or Business Intelligence Development Studio's Report Designer), manually, or by a third-party or custom application. I have touted the idea for years (and publicly said so about 5 years ago) that eventually more products would be RDL producers or consumers, but we aren't there yet. Maybe in another 5 years. Another interesting session that covered the BI stack against a backdrop of competitive products was delivered by Andrew Brust. Andrew did a marvelous job of consolidating a lot of information in a way that clearly communicated how various vendors' offerings compared to the Microsoft BI stack. He also made a particularly compelling argument about how the existence of an ecosystem around the Microsoft BI stack provided innovation and opportunities lacking for other vendors. Check out his presentation, "How Does the Microsoft BI Stack...Stack Up?" Expo Hall I had planned to spend more time in the Expo Hall to see who was doing new things with the BI stack, but didn't manage to get very far. Each time I set out on an exploratory mission, I got caught up in some fascinating conversations with one or more of my peers. I find interacting with people that I meet at conferences just as important as attending sessions to learn something new. There were a couple of items that really caught me eye, however, that I'll share here. 
Pragmatic Works. Whether you develop SSIS packages, build SSAS cubes, or author SSRS reports (or all of the above), you really must take a look at BI Documenter. Brian Knight (twitter) walked me through the key features, and I must say I was impressed. Once you've seen what this product can do, you won't want to document your BI projects any other way. You can download a free single-user database edition, or choose from more feature-rich standard or professional editions. Microsoft Press ebooks. I also stopped by the O'Reilly Media booth to meet some folks that one of my acquisitions editors at Microsoft Press recommended. In case you haven't heard, Microsoft Press has partnered with O'Reilly Media for distribution and publishing. Apart from my interest in learning more about O'Reilly Media as an author, an advertisement in their booth caught me eye which I think is a really great move. When you buy Microsoft Press ebooks through the O'Reilly web site, you can receive it in any (or all) of the following formats where possible: PDF, epub, .mobi for Kindle and .apk for Android. You also have lifetime DRM-free access to the ebooks. As someone who is an avid collector of books, I fnd myself running out of room for storage. In addition, I travel a lot, and it's hard to lug my reference library with me. Today's e-reader options make the move to digital books a more viable way to grow my library. Having a variety of formats means I am not limited to a single device, and lifetime access means I don't have to worry about keeping track of where I've stored my files. Because the e-books are DRM-free, I can copy and paste when I'm compiling notes, and I can print pages when necessary. That's a winning combination in my mind! Overall, I was pleased with the BI conference. There were many more sessions that I couldn't attend, either because the room was full when I got there or there were multiple sessions running concurrently that I wanted to see. Fortunately, many of the sessions are accessible for viewing online at http://www.msteched.com/2010/NorthAmerica along with the TechEd sessions. You can spot the BI sessions by the yellow skyline on the title slide of the presentation as shown below. Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article
