Search Results

Search found 2562 results on 103 pages for 'dave webb'.

Page 90 of 103

  • SQL SERVER – Fix Visual Studio Error : Connections to SQL Server files (.mdf) require SQL Server Express 2005 to function properly. Please verify the installation of the component or download from the URL

    - by pinaldave
    In one of my virtual environments, while I was trying to add a SQL Server database (.mdf) file to an ASP.NET project, I encountered the following error: Connections to SQL Server files (.mdf) require SQL Server Express 2005 to function properly. Please verify the installation of the component or download from the URL. I have been using SQL Server 2012 for a long time, so this error was a bit interesting to me; there should not be any need for a SQL Server 2005 installation. I quickly figured out that I could remove the error as follows: open Microsoft Visual Studio, select Tools >> Options >> Database Tools >> Data Connections, enter the name of an installed instance in the “SQL Server Instance Name” field, and click OK. If you do not know the instance name, you can find it with either 1) the command-line sqlcmd utility or 2) SQL Server Management Studio. Is there any other way to resolve this error? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: sqlcmd, Visual Studio
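
    For example (a minimal sketch of option 1; the instance name .\SQLEXPRESS below is only a common default and will vary per machine), sqlcmd can list the visible instances and confirm a name:

        REM List SQL Server instances visible from this machine
        sqlcmd -L

        REM Connect to a likely instance with Windows authentication and confirm its name
        sqlcmd -S .\SQLEXPRESS -E -Q "SELECT @@SERVERNAME"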

    Read the article

  • Mouse doesn't work & internet connection not made in Ubuntu 12.04 LTS

    - by David Skare
    Yesterday, Nov 15, 2012, I booted into my Ubuntu 12.04 LTS system. It has resided on a Crucial 128 GB SSD with about 90% free space since early summer. I also have Windows 7 loaded on another Crucial 256 GB SSD. Ubuntu has set up a dual boot system for me even though each OS has its own SSD. I have been using this setup without problems since summer. Yesterday, when the boot process finished, my Microsoft Comfort Mouse 3000 did not work and there was a message that Ubuntu was not connected to the internet. So without the mouse I was forced to turn the machine off manually. About 4 days ago Ubuntu worked fine, and booting into Win 7 also works fine. I have a backup machine with the same style of mouse on it, so I swapped that mouse onto this system. Same results. But both mice work when booting into Win 7. Today I removed both SSDs and installed my Ubuntu 12.04 HD, which has not been used since I moved Ubuntu from it to the SSD. Same results. Between the last time I used Ubuntu 12.04 on the SSD and when I tried to use it again, I made no changes to my machine, either hardware or software. My machine's specs are: AMD FX-6100, MSI 990FXA-GD65 AM3+ format with latest BIOS (Ver 19.9), Corsair Vengeance 1866 MHz memory - 16 GB (4 GB x 4 sticks), MSI N580GTX video card (nVidia 306.97 drivers), Sony Bravia 32" HD TV as a monitor, Pioneer BluRay DVD-RW, DSL connection to the internet through a router (10 Mbps), Crucial 128 GB SSD (90% free space), Microsoft Comfort Mouse 3000. I try to maintain current BIOS and drivers for all devices. I mostly use my Ubuntu system for programming in GCC and OpenCOBOL, surfing the internet and e-mailing. No games are installed. I'm stumped! If anyone has experienced this same problem I'd appreciate knowing how you solved it. TIA, Dave

    Read the article

  • Java Management Extensions with Oracle WebLogic Server 12c – Webcast November 13th, 2012

    - by JuergenKress
    Date: Tuesday, November 13, 2012 Time: 10:00 AM PST You’re responsible for evaluating technologies to monitor and configure Oracle WebLogic Server. This Webcast will help you get a complete picture of what Oracle WebLogic Server 12c with Java Management Extensions (JMX) can do for you. Dr. Frank Munz will explain the development of JMX with Spring and compare it to Java EE. A new feature of Oracle WebLogic Server 12c, the RESTful Management API, will also be examined. Learn how JMX in Oracle WebLogic Server 12c is: Highly efficient. It uses the WebLogic Scripting Tool (WLST) instead of a client JMX program written in Java, resulting in little overhead. Effective. It bundles optimized tools such as WLST and the WebLogic Diagnostic Framework to eliminate the requirement for Java programming on the client side. Compliant. It is fully standard-compliant and also works with open source clients and frameworks. Register for the Webcast today. Speakers: Dr. Frank Munz, Oracle Technologist of the Year; Dave Cabelus, Senior Principal Product Manager, Oracle WebLogic. Partner Community: for regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Technorati Tags: Java, Frank Munz, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • ArchBeat Link-o-Rama for December 4, 2012

    - by Bob Rhubart
    Exalogic 2.0.1 Tea Break Snippets - Creating and using Distribution Groups | The Old Toxophilist "Although in many cases we, as Cloud Users, may not be too worried how the Virtualisation Algorithm decides where to place our vServers," says The Old Toxophilist, "there are cases where it is extremely important that vServers run on distinct physical compute nodes." There's plenty more on the subject in his blog post. Oracle Endeca (2.3) Record Level Security | Adam Seed Adam Seed's blog post covers "the basics of security within Endeca Information Discovery, as these basic security objects are required in order to explain the implementation of record level security." ODI Handling DQ | Gurcan Orhan Oracle ACE Director Gurcan Orhan suggests you have fun with these scripts for Oracle Data Integrator. Parleys Testimonial at GlassFish Community Event - JavaOne 2012 Video of Parleys webmaster Stephan Janssen's presentation at the GlassFish Community Event at JavaOne 2012, in which he explains why Parleys moved from Tomcat to GlassFish. Java Spotlight Episode 109: Pete Muir on CDI 1.1 This edition of Roger Brinkley's Java Spotlight Podcast features an interview with CDI 1.1 spec lead Pete Muir of JBoss/Red Hat. Muir talks about the features in CDI 1.1 and what to expect in the future. Webcast: Java Management Extensions with Oracle WebLogic Server 12c Dr. Frank Munz and Dave Cabelus do the talking in this on-demand webcast focused on Oracle WebLogic Server 12c with Java Management Extensions (JMX). Using the Coherence API to get Portable Object Format bytes | Bruno Borges Bruno Borges shares a code snippet that illustrates how easy it is to use the Coherence API. Thought for the Day "Experience is something you don't get until just after you need it." — Anonymous Source: SoftwareQuotes.com

    Read the article

  • Pidgin XML

    - by David Totzke
    I'm looking at an xml document that gets passed to a COM object (yes, I said the "C" word) to save a new record. You can tell by the "new|" at the top of the file, before the xml declaration. If we were updating an existing record, there would be "edit|" at the top. Couldn't you just have a root element with something like: <myRootElement mode="new"> Ah, here's why that won't work... There's no single root element, but that's ok because next we find that this document is actually several documents. <?xml version="1.0"?> appears several times. The final document opens with <myElementStart> and closes with <myElementEnd>, so it's not even well-formed. This isn't a style thing. This is broken. I mean, basic well-formed XML only has two rules (three if you count the xml declaration, but a document works for DTO purposes without it): one root element, and close all elements with a matching tag. As a result, both ends of this conversation need to speak the same dialect of broken XML in order to communicate. To join the conversation, you must also learn pidgin XML. How can you start out so right - XML being the obvious choice in this instance - and then go so horribly wrong? Dave Just because I can…
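
    For contrast, the well-formed shape being asked for could be as simple as the following sketch (element and attribute names are illustrative, matching the <myRootElement mode="new"> idea above):

        <?xml version="1.0"?>
        <myRootElement mode="new">
          <!-- one document, one root element, every element closed with a matching tag -->
          <record>...</record>
        </myRootElement>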

    Read the article

  • SQL SERVER – FIX ERROR – Cannot connect to . Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (Microsoft SQL Server, Error: 18452)

    - by pinaldave
    Just a day ago, I was attempting to connect to my local SQL Server using the IP 127.0.0.1. The IP is my local machine's, and SQL Server is installed on the local box as well. However, whenever I tried to connect to the server, it gave me the following strange error: Cannot connect to 127.0.0.1. Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (Microsoft SQL Server, Error: 18452) The reason was indeed strange, as I was trying to connect from the local box to the local box, and it said my login was from an untrusted domain. As my system is not part of any domain, this was really confusing to me. Another thing was that I had always been able to connect to SQL Server using 127.0.0.1, so this was a bit strange to me. I started to think about what I had changed since the last time I connected to SQL Server. Suddenly I remembered that I had modified my computer’s hosts file for some other purpose. Solution: I opened my hosts file and immediately added the entry 127.0.0.1 localhost. Once I added it, I was able to reconnect to SQL Server as usual. The hosts file is located in C:\Windows\System32\drivers\etc; you will find a file named hosts there, and make sure to open it with Notepad. If you are part of a domain and your organization is using Active Directory, make sure that your account is added properly to Active Directory and has the proper security permissions to execute the task. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
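
    For reference, the loopback entry that needs to be present looks like this (the surrounding comments in a default hosts file may differ):

        # C:\Windows\System32\drivers\etc\hosts
        127.0.0.1       localhost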

    Read the article

  • Installing Ubuntu on Asus G75VW (UEFI)

    - by user101653
    You all are my last hope... help! I bought an Asus G75VW from Best Buy. It has the new UEFI BIOS instead of the old-style BIOS (1980s) and has Windows 8 preinstalled. I cannot get the G75VW to install Ubuntu 12.10 in EFI mode. I did get Ubuntu to load if I changed the BIOS to CSM, and the computer then sees and installs Ubuntu in "legacy mode". I attempted boot repair, and Ubuntu will load after 1 minute, but as legacy BIOS only. If I change the BIOS to UEFI, "Binary is whitelisted" is displayed and I get a purple screen. My goal... keep my preinstalled Windows 8 on internal drive bay 1, install Ubuntu 12.10 on internal drive bay 2, and somehow make a choice of which to boot. I am at a loss. I am a software programmer, but I am very bad at understanding BIOS and partitioning. Any ideas? Has anyone done what I want to do? This is a full second day on my "issue"! If I cannot get Ubuntu installed, I'm returning the laptop and will "wait" until these UEFI/EFI obstacles are properly handled so that people can load EFI-based Ubuntu without a hitch. Thanks, Dave

    Read the article

  • PASS Summit 2010 Recap

    - by AjarnMark
    Last week I attended my eighth PASS Summit in nine years, and every year it is a fantastic event!  I was fortunate my first year to have a contact (Bill Graziano (blog | Twitter) from SQLTeam) that I was expecting to meet, and who got me started on a good track of making new contacts.  Each year I have made a few more, and renewed friendships from years past.  Many of the attendees agree that the pure networking opportunities are one of the best benefits of attending the Summit.  And there’s a lot of great technical stuff, too, some of the things that stick out for me this year include… Pre-Con Monday: PowerShell with Allen White (blog | Twitter).  This was the first time that I attended a pre-con.  For those not familiar with the concept, the regular sessions for the conference are 75-90 minutes long.  For an extra fee, you can attend a full-day session on a single topic during a pre- or post-conference training day.  I had been meaning for several months to dive in and learn PowerShell, but just never seemed to find (or make) the time for it, so when I saw this was one of the all-day sessions, and I was planning to be there on Monday anyway, I decided to go for it.  And it was well worth it!  I definitely came out of there with a good foundation to build my own PowerShell scripts, plus several sample scripts that he showed which already cover the first four or five things I was planning to do with PowerShell anyway.  This looks like the right tool for me to build an automated version of our software deployment process, which right now contains many repeated steps.  Thanks Allen! Service Broker with Denny Cherry (blog | Twitter).  I remembered reading Denny’s blog post on Using Service Broker instead of Replication, and ever since then I have been thinking about using this to populate a new reporting-focused Data Repository that we will be building in the near future.  When I saw he was doing this session, I thought it would be great to get more information and be able to ask the author questions.  When I brought this idea back to my boss, he really liked it, as we had previously been discussing doing nightly data loads, with an option to manually trigger a mid-day load if up-to-the-minute data was needed for something.  If we go the Service Broker route, we can keep the Repository current in near real-time.  Hooray! DBA Mythbusters with Paul Randal (blog | Twitter).  Even though I read every one of the posts in Paul’s blog series of the same name, I had to go see the legend in person.  It was great, and I still learned something new! How to Conduct Effective Meetings with Joe Webb (blog | Twitter).  I always like to sit in on a session that Joe does.  I met Joe several years ago when both he and Bill Graziano were on the PASS Board of Directors together, and we have kept in touch.  Joe is very well-spoken and has great experience with both SQL Server and business.  And we could certainly use some pointers at my work (probably yours, too) on making our meetings more effective and to run on-time.  Of course, now that I’m the Chapter Leader for the Professional Development virtual chapter, I also had to sit in on this ProfDev session and recruit Joe to do a presentation or two for the chapter next year. Query Optimization with David DeWitt.  Anyone who has seen Dr. 
David DeWitt present the 3rd keynote at a PASS Summit over the last three years knows what a great time it is to sit and listen to him make some really complicated and advanced topic easy to understand (although it still makes your head hurt).  It still amazes me that the simple two-table join query from pubs that he used in his example can possibly have 22 million possible physical query plans.  Ouch! Exhibit Hall:  This year I spent more serious time in the exhibit hall than any year past.  I have talked my boss into making a significant (for us) investment in monitoring tools next year, and this was a great opportunity to talk with all the big-hitters.  Readers of mine may recall that I fell in love with the SQL Sentry Power Suite several months ago and wrote a blog entry about it just from the trial version.  Well as things turned out, short-term budget priorities shifted, and we weren’t able to make the purchase then.  I have it in the budget for next year, but since I was going to the Summit, my boss wanted me to look at the other options to see if this was really the one that we wanted.  I spent a couple of hours talking with representatives from Red-Gate, Idera, Confio, and Quest about their offerings, and giving them each the same 3 scenarios that I wanted to be able to accomplish based on the questions and issues that arise in our company.  It was interesting to discover the different approaches or “world view” that each vendor takes to the subject of performance monitoring and troubleshooting.  I may write a separate article that goes into this in more depth, but the product that best aligned with our point of view, and met the current needs we have is still the SQL Sentry Power Suite.  I’m not saying that the others are bad or wrong or anything like that, just that the way they tackled the issue did not align as well with our particular needs as does SQL Sentry’s product.  And that was something I learned too, when you go shopping for these products, you really need to know what you want to get from them.  It’s best if you have a few example scenarios from work that you can use to test out how well each tool fits your particular needs. Overall, another GREAT event.  I can’t wait to get the DVDs so I can sit in on a bunch of other sessions that I couldn’t get to because I was in one of the ones above.  And I can hardly wait until next year!

    Read the article

  • MDX using EXISTING, AGGREGATE, CROSSJOIN and WHERE

    - by James Rogers
    It is a well-published approach to using the EXISTING function to decode AGGREGATE members and nested sub-query filters.  Mosha wrote a good blog on it here and a more recent one here.  The use of EXISTING in these scenarios is very useful and sometimes the only option when dealing with multi-select filters.  However, there are some limitations I have run across when using the EXISTING function against an AGGREGATE member:   The AGGREGATE member must be assigned to the Dimension.Hierarchy being detected by the EXISTING function in the calculated measure. The AGGREGATE member cannot contain a crossjoin from any other dimension or hierarchy or EXISTING will not be able to detect the members in the AGGREGATE member.   Take the following query (from Adventure Works DW 2008):   With   member [Week Count] as 'count(existing([Date].[Fiscal Weeks].[Fiscal Week].members))'    member [Date].[Fiscal Weeks].[CM] as 'AGGREGATE({[Date].[Fiscal Weeks].[Fiscal Week].&[47]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[48]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[49]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[50]&[2004]})'   select   {[Week Count]} on columns from   [Adventure Works]     where   [Date].[Fiscal Weeks].[CM]   Here we are attempting to count the existing fiscal weeks in slicer.  This is useful to get a per-week average for another member. Many applications generate queries in this manner (such as Oracle OBIEE).  This query returns the correct result of (4) weeks. Now let's put a twist in it.  What if the querying application submits the query in the following manner:   With   member [Week Count] as 'count(existing([Date].[Fiscal Weeks].[Fiscal Week].members))'    member [Customer].[Customer Geography].[CM] as 'AGGREGATE({[Date].[Fiscal Weeks].[Fiscal Week].&[47]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[48]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[49]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[50]&[2004]})'   select   {[Week Count]} on columns from   [Adventure Works]     where   [Customer].[Customer Geography].[CM]   Here we are attempting to count the existing fiscal weeks in slicer.  However, the AGGREGATE member is built on a different dimension (in name) than the one EXISTING is trying to detect.  In this case the query returns (174) which is the total number of [Date].[Fiscal Weeks].[Fiscal Week].members defined in the dimension.   Now another twist, the AGGREGATE member will be named appropriately and contain the hierarchy we are trying to detect with EXISTING but it will be cross-joined with another hierarchy:   With   member [Week Count] as 'count(existing([Date].[Fiscal Weeks].[Fiscal Week].members))'    member [Date].[Fiscal Weeks].[CM] as 'AGGREGATE({[Date].[Fiscal Weeks].[Fiscal Week].&[47]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[48]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[49]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[50]&[2004]}*    {[Customer].[Customer Geography].[Country].&[Australia],[Customer].[Customer Geography].[Country].&[United States]})'  select   {[Week Count]} on columns from   [Adventure Works]    where   [Date].[Fiscal Weeks].[CM]   Once again, we are attempting to count the existing fiscal weeks in slicer.  Again, in this case the query returns (174) which is the total number of [Date].[Fiscal Weeks].[Fiscal Week].members defined in the dimension. 
However, in 2008 R2 this query returns the correct result of 4 and additionally , the following will return the count of existing countries as well (2):   With   member [Week Count] as 'count(existing([Date].[Fiscal Weeks].[Fiscal Week].members))'   member [Country Count] as 'count(existing([Customer].[Customer Geography].[Country].members))'  member [Date].[Fiscal Weeks].[CM] as 'AGGREGATE({[Date].[Fiscal Weeks].[Fiscal Week].&[47]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[48]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[49]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[50]&[2004]}*    {[Customer].[Customer Geography].[Country].&[Australia],[Customer].[Customer Geography].[Country].&[United States]})'  select   {[Week Count]} on columns from   [Adventure Works]    where   [Date].[Fiscal Weeks].[CM]   2008 R2 seems to work as long as the AGGREGATE member is on at least one of the hierarchies attempting to be detected (i.e. [Date].[Fiscal Weeks] or [Customer].[Customer Geography]). If not, it seems that the engine cannot find a "point of entry" into the aggregate member and ignores it for calculated members.   One way around this would be to put the sets from the AGGREGATE member explicitly in the WHERE clause (slicer).  I realize this is only supported in SSAS 2005 and 2008.  However, after talking with Chris Webb (his blog is here and I highly recommend following his efforts and musings) it is a far more efficient way to filter/slice a query:   With   member [Week Count] as 'count(existing([Date].[Fiscal Weeks].[Fiscal Week].members))'    select   {[Week Count]} on columns from   [Adventure Works]    where   ({[Date].[Fiscal Weeks].[Fiscal Week].&[47]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[48]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[49]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[50]&[2004]}   ,{[Customer].[Customer Geography].[Country].&[Australia],[Customer].[Customer Geography].[Country].&[United States]})   This query returns the correct result of (4) weeks.  Additionally, we can count the cross-join members of the two hierarchies in the slicer:   With   member [Week Count] as 'count(existing([Date].[Fiscal Weeks].[Fiscal Week].members)*existing([Customer].[Customer Geography].[Country].members))'    select   {[Week Count]} on columns from   [Adventure Works]    where   ({[Date].[Fiscal Weeks].[Fiscal Week].&[47]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[48]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[49]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[50]&[2004]}   ,{[Customer].[Customer Geography].[Country].&[Australia],[Customer].[Customer Geography].[Country].&[United States]})   We get the correct number of (8) here.

    Read the article

  • A new SQL, a new Analysis Services, a new Workshop! #ssas #sql2012

    - by Marco Russo (SQLBI)
    One week ago Microsoft SQL Server 2012 finally debuted with a virtual launch event and you can find many intro sessions there (20 minutes each). There is a lot of new content available if you want to learn more about SQL 2012, and in this blog post I’d like to provide a few links to sessions, documents, bits and courses that are available now or very soon. First of all, the release of Analysis Services 2012 has finally released PowerPivot 2012 (many of us called it PowerPivot v2 before this official name) and also the new Data Mining Add-in for Microsoft Office 2010, now available also for Excel 64-bit! And, of course, don’t miss the Microsoft SQL Server 2012 Feature Pack; there are a lot of upgrades for both DBAs and developers. I just discovered there is a new LocalDB version of SQL Express that can run in user mode without any setup. Is this the end of SQL CE? But now, back to Analysis Services: if you want some tutorials on Tabular, the Microsoft Virtual Academy has a whole track dedicated to Analysis Services 2012, but you will probably be interested also in the one about Reporting Services 2012. If you think that virtual is good but it’s not enough, there are plenty of conferences in the coming months – these are just those where Alberto and I will deliver some SSAS Tabular presentations: SQLBits X, London, March 29-31, 2012: if you are in London or want a good reason to go, this is the most important SQL Server event in Europe this year, no doubt about it. And not only because of the high number of attendees, but also because there is an impressive number of speakers (excluding me, of course) coming from all over the world. This is an event second only to PASS Summit in Seattle, so there are no good reasons not to attend it. Microsoft SQL Server & Business Intelligence Conference 2012, Milan, March 28-29, 2012: this is an Italian conference so the language might be a barrier, but many of us also speak English and the food is good! Just a few seats still available. TechEd North America, Orlando, June 11-14, 2012: you know, this is a big event and it contains everything – if you want to spend a whole day learning the SSAS Tabular model with me and Alberto, don’t miss our pre-conference day “Using BISM Tabular in Microsoft SQL Server Analysis Services 2012” (be careful, it is on June 10, a nice study-Sunday!). TechEd Europe, Amsterdam, June 26-29, 2012: the European version of TechEd provides almost the same content and you don’t have to go overseas. We also run the same pre-conference day “Using BISM Tabular in Microsoft SQL Server Analysis Services 2012” (in this case, it is on June 25, a regular Monday). Alberto and I will also speak at some user group meetings around Europe during… well, we’re going to travel a lot in the next months. In fact, if you want to get complete training on SSAS Tabular, you should spend two days with us in one of our SSAS Tabular Workshops! We prepared a 2-day seminar, a very intense one, that starts from simple tabular modeling and covers architecture, DAX, query, advanced modeling, security, deployment, optimization, monitoring, relationships with PowerPivot and Multidimensional… Really, there is a lot of stuff here! 
    We announced the first dates in Europe and also an online edition optimized for America’s time zone: Apr 16-17, 2012 – Amsterdam, Netherlands; Apr 26-27, 2012 – Copenhagen, Denmark; May 7-8, 2012 – Online for America’s time zone; May 14-15, 2012 – Brussels, Belgium; May 21-22, 2012 – Oslo, Norway; May 24-25, 2012 – Stockholm, Sweden; May 28-29, 2012 – London, United Kingdom; May 31-Jun 1, 2012 – Milan, Italy (Italian language). Chris Webb will also join us in this workshop, and for every date you can find who the speaker is on the web site. The course is based on our upcoming book, almost 600 pages (!) about SSAS Tabular, an incredible effort that will be available very soon in a preview (rough cuts from O’Reilly) and will be on the shelf in May. I will provide a link to order it as soon as we have one! And if you think that this is not enough… you’re right! Do you know the only thing you can do to optimize your Tabular model? Optimize your DAX code. Learning DAX is easy, mastering DAX requires some knowledge… and our DAX Advanced Workshop will provide exactly the required content. Public classes will be available later this year; for now we just deliver it on demand.

    Read the article

  • PASS Summit 2011 – Part III

    - by Tara Kizer
    Well we’re about a month past PASS Summit 2011, and yet I haven’t finished blogging my notes! Between work and home life, I haven’t been able to come up for air in a bit. Now on to my notes… On Thursday of the PASS Summit 2011, I attended Klaus Aschenbrenner’s (blog|twitter) “Advanced SQL Server 2008 Troubleshooting”, Joe Webb’s (blog|twitter) “SQL Server Locking & Blocking Made Simple”, Kalen Delaney’s (blog|twitter) “What Happened? Exploring the Plan Cache”, and Paul Randal’s (blog|twitter) “More DBA Mythbusters”. I think my head grew two times in size from the Thursday sessions. Just WOW! I took a ton of notes in Klaus' session. He took a deep dive into how to troubleshoot performance problems. Here is how he goes about solving a performance problem: start by checking the wait stats DMV, then system health, memory issues, and I/O issues. I normally start with blocking and then hit the wait stats. Here’s the wait stat query (Paul Randal’s) that I use when working on a performance problem. He highlighted a few waits to be aware of, such as WRITELOG (indicates an IO subsystem problem), SOS_SCHEDULER_YIELD (indicates a CPU problem), and PAGEIOLATCH_XX (indicates an IO subsystem problem or a buffer pool problem). Regarding memory issues, Klaus recommended that as a bare minimum, one should set the “max server memory (MB)” in sp_configure to leave 2 GB or 10% reserved for the OS (whichever comes first). This is just a starting point though! Regarding I/O issues, Klaus talked about disk partition alignment, which can improve SQL I/O performance by up to 100%. You should use a 64 KB NTFS cluster size, and alignment is automatic in Windows 2008 R2. Joe’s locking and blocking presentation was a good session to really clear up the fog in my mind about locking. One takeaway that I had no idea could be done was that you can set a timeout in T-SQL code via SET LOCK_TIMEOUT. If you do this via the application, you should trap error 1222. Kalen’s session went into execution plans. The minimum size of a plan is 24k. This adds up fast, especially if you have a lot of plans that don’t get reused much. You can use sys.dm_exec_cached_plans to check how often a plan is being reused by checking the usecounts column. She said that we can use DBCC FLUSHPROCINDB to clear out the stored procedure cache for a specific database. I didn’t know we had this available, so this was great to hear. This will be less intrusive when an emergency comes up where I’ve needed to run DBCC FREEPROCCACHE. Kalen said one should enable “optimize for ad hoc workloads” if you have an ad hoc workload. This stores only a 300-byte stub of the first plan, and if it gets run again, it’ll store the whole thing. This helps with plan cache bloat. I have a lot of systems that use prepared statements, and Kalen says we can simulate those calls by using sp_executesql. Cool! Paul did a series of posts last year to debunk various myths and misconceptions around SQL Server. He continues to debunk things via “DBA Mythbusters”. You can get a PDF of a bunch of these here. One of the myths he went over is the number of tempdb data files that you should have. Back in 2000, the recommendation was to have as many tempdb data files as there are CPU cores on your server. This no longer holds true due to the numerous cores we have on our servers. Paul says you should start out with 1/4 to 1/2 the number of cores and work your way up from there. BUT!  
Paul likes what Bob Ward (twitter) says on this topic: 8 or less cores –> set number of files equal to the number of cores Greater than 8 cores –> start with 8 files and increase in blocks of 4 One common myth out there is to set your MAXDOP to 1 for an OLTP workload with high CXPACKET waits.  Instead of that, dig deeper first.  Look for missing indexes, out-of-date statistics, increase the “cost threshold for parallelism” setting, and perhaps set MAXDOP at the query level.  Paul stressed that you should not plan a backup strategy but instead plan a restore strategy.  What are your recoverability requirements?  Once you know that, now plan out your backups. As Paul always does, he talked about DBCC CHECKDB.  He said how fabulous it is.  I didn’t want to interrupt the presentation, so after his session had ended, I asked Paul about the need to run DBCC CHECKDB on your mirror systems.  You could have data corruption occur at the mirror and not at the principal server.  If you aren’t checking for data corruption on your mirror systems, you could be failing over to a corrupt database in the case of a disaster or even a planned failover.  You can’t run DBCC CHECKDB against the mirrored database, but you can run it against a snapshot off the mirrored database.
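
    A few of the settings mentioned in these session notes can be applied with a couple of lines of T-SQL (a sketch of the commands only; the memory value is an arbitrary example, so pick numbers appropriate to your server):

        -- Kalen's tip: keep single-use ad hoc plans from bloating the plan cache
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'optimize for ad hoc workloads', 1;
        RECONFIGURE;

        -- Klaus's tip: cap max server memory (value in MB, shown here purely as an example)
        EXEC sp_configure 'max server memory (MB)', 6144;
        RECONFIGURE;

        -- Joe's locking tip: give the current session a lock wait timeout (milliseconds)
        -- and trap error 1222 in the application if it fires
        SET LOCK_TIMEOUT 5000;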

    Read the article

  • Deep in the Heart of Texas

    - by Applications User Experience
    Author: Erika Webb, Manager, Fusion Applications UX User Assistance When I was first working in the usability field, the only way I could consider conducting a usability study was to bring a potential user to a lab environment where I could show them whatever I was interested in learning more about and ask them questions. While I hate to reveal just how long I have been working in this field, let's just say that pads of paper and a stopwatch were key tools for any test I conducted. Over the years, I have worked in simple labs with basic video taping equipment and not much else, and I have worked in corporate environments with sophisticated usability labs and state-of-the-art equipment. Years ago, we conducted all usability studies at the location of the user. If we wanted to see if there were any differences between users in New York, Chicago, and Los Angeles, we went to those places to run the test. A lab environment is very useful for many test situations. However, there has always been a debate in the usability field about whether bringing someone into a lab environment, however friendly we make it, somehow intrinsically changes the behavior of the user as compared to having them work in their own environment, at their own desk, and on their own computer. We developed systems to create a portable usability lab, so that we could go to the users that we needed to test. Do lab environments change user behavior patterns? Then 9/11 hit. You may not remember, but no planes flew for weeks afterwards. Companies all over the world couldn't fly in employees for meetings. Suddenly, traveling to the location of the users had an additional difficulty. The company I was working for at the time had usability specialists stuck in New York for days before they could finally rent a car and drive home to Colorado. This changed the world pretty suddenly, and technology jumped on the change. Companies offering Internet meeting tools were struggling until no one could travel. The Internet boomed with collaboration tools that enabled people to work together wherever they happened to be. This change in technology has made a huge difference in my world. We use collaborative tools to bring our product concepts and ideas to the user across the Internet. As a global company, we benefit from having users from all over the world inform our designs. We now run usability studies with users all over the world in a single day, a feat we couldn't have accomplished 10 years ago by plane! Other technology companies have started to do more of this type of usability testing, since the tools have improved so dramatically. Plus, in our busy world, it's not always easy to find users who can take the time away from their jobs to come to our labs. Reaching users where it is convenient for them greatly improves the odds that people do participate. I manage a team of usability specialists who live in India and California, while I live in Colorado. We have wonderful labs that we bring users into to show them our products. But very often, we run our studies remotely. We used to take the lab to the users; now we use the labs, but we let the users stay where they are. We gain users who might not have been able to leave work to come to our labs, and they get to use the system they are familiar with. And we gain users nearly anywhere that we can set up an Internet connection, as long as the users have a phone, a broadband connection, and a compatible Web browser (with no pop-up blockers). 
    After we recruit participants in a traditional manner, we send them an invitation to participate through the use of a telephone conference call and Web conferencing tool. At Oracle, we use Oracle Web Conference, part of Oracle Collaboration Suite, which enables us to give the user control of the mouse, while we present a prototype or wireframe pictures. We can record the sessions over the Web and phone conference. We send the users instructions, plus tips to ensure that we won't have problems sharing screens. In some cases, when time is tight, we even run a five-minute "test session" with users a day in advance to be sure that we can connect. Prior to the test, we send users a participant script that contains information about the study, including any questionnaires. This is exactly the same script we give to participants who come to the labs. We ask users to print this before the beginning of the session. We generally run these studies by having a usability engineer in our usability labs, so that we can record the session as though the user were in the lab with us. Roughly 80% of our application software usability testing at Oracle is performed using remote methods. The probability of getting a remote test participant decreases the higher up the person is in the target organization. We have a methodology checklist available to help our usability engineers work through the remote processes.

    Read the article

  • WPF Binding to DataRow Columns

    - by Trindaz
    Hi, I've taken some sample code from http://sweux.com/blogs/smoura/index.php/wpf/2009/06/15/wpf-toolkit-datagrid-part-iv-templatecolumns-and-row-grouping/ that provides grouping of data in a WPF DataGrid. I'm modifying the example to use a DataTable instead of a Collection of entities. My problem is in translating a binding declaration {Binding Parent.IsExpanded}, which works fine where Parent is a reference to an entity that has the IsExpanded attribute, to something that will work for my weakly typed DataTable, where Parent is the name of a column and references another DataRow in the same DataTable. I've tried declarations like {Binding Parent.Items[IsExpanded]} and {Binding Parent("IsExpanded")} but none of these seem to work. How can I create a binding to the IsExpanded column of the DataRow Parent in my DataTable? Thanks in advance, Dave

    Read the article

  • Getting real-time market/stock quotes in C#/Java

    - by David Menard
    Hey, I would like to make a program that acts like a big filter for stocks. To do so, I need to have real-time (or delayed) quotes from the market. I started getting stock quotes by requesting pages from Yahoo according to the ticker and parsing the HTML. I was wondering how best to do this requesting and parsing of HTML. Is there some way I can request only the stock quotes and their info? I know some applications do this, and I am very curious how they do it, because requesting web pages and parsing them is very time-consuming. Thanks, Dave
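
    At the time of this question, one common alternative to scraping full HTML pages was Yahoo's CSV quote endpoint, which returns only the fields you ask for. A C# sketch (the URL and format flags are recalled from that era and the service may have changed or been retired since):

        using System;
        using System.Net;

        class QuoteFetcher
        {
            static void Main()
            {
                using (var client = new WebClient())
                {
                    // s = symbol, l1 = last trade price, d1 = last trade date, t1 = last trade time
                    string csv = client.DownloadString(
                        "http://download.finance.yahoo.com/d/quotes.csv?s=MSFT+GOOG&f=sl1d1t1");
                    Console.WriteLine(csv);   // one comma-separated line per ticker
                }
            }
        }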

    Read the article

  • Differences & Similarities Between Programming Paradigms

    - by DaveDev
    Hi guys, I've been working as a developer for the past 4 years, with the 4 years previous to that spent studying software development in college. In my 4 years in the industry I've done some work in VB6 (which was a joke), but most of it has been in C#/ASP.NET. During this time, I've moved from an "object-aware" procedural paradigm to an object-oriented paradigm. Lately I've been curious about other programming paradigms out there, so I thought I'd ask other developers their opinions on the similarities and differences between these paradigms, specifically compared to OOP. In OOP, I find that there's a strong focus on the relationships and logical interactions between concepts. What are the mind frames you have to be in for the other paradigms? Thanks, Dave

    Read the article

  • How Do You Insert Large Blobs Into Oracle 10G Using System.Data.OracleClient?

    - by discwiz
    I'm trying to insert 315K GIF files into an Oracle 10g database. Every time, I get the error "ora-01460: unimplemented or unreasonable conversion requested" when I run the stored procedure. It appears that there is a 32K limit if I use a stored procedure. I read online that this does not apply if you are doing a direct insert, but I do not know how to create the insert string for a byte array. This is a thick client running on the server, so I'm not worried about SQL injection attacks. Any help would be greatly appreciated. FYI, the code is in VB.NET. Thanks, Dave
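
    One widely used way around the 32K parameter limit in System.Data.OracleClient is to insert an EMPTY_BLOB() and stream the bytes into the returned OracleLob inside a transaction. A rough C# sketch of that pattern (the table and column names are made up; the poster's code is VB.NET, but the provider calls are the same):

        using System;
        using System.Data;
        using System.Data.OracleClient;   // the provider in question (since deprecated by Microsoft)

        class GifLoader
        {
            public static void InsertGif(string connStr, int id, byte[] gifBytes)
            {
                using (var conn = new OracleConnection(connStr))
                {
                    conn.Open();
                    // LOB writes must happen inside an explicit transaction
                    OracleTransaction tx = conn.BeginTransaction();

                    OracleCommand cmd = conn.CreateCommand();
                    cmd.Transaction = tx;
                    cmd.CommandText =
                        "INSERT INTO gif_store (id, img) VALUES (:id, EMPTY_BLOB()) RETURNING img INTO :img";
                    cmd.Parameters.Add("id", OracleType.Number).Value = id;
                    OracleParameter lobParam = cmd.Parameters.Add("img", OracleType.Blob);
                    lobParam.Direction = ParameterDirection.Output;
                    cmd.ExecuteNonQuery();

                    // Write the whole byte array into the locator returned by the RETURNING clause
                    var lob = (OracleLob)lobParam.Value;
                    lob.Write(gifBytes, 0, gifBytes.Length);

                    tx.Commit();
                }
            }
        }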

    Read the article

  • SQL Server: SELECT rows with MAX(Column A), MAX(Column B), DISTINCT by related columns

    - by z531
    Scenario: Table A MasterID, Added Date, Added By, Updated Date, Updated By, 1, 1/1/2010, 'Fred', null, null 2, 1/2/2010, 'Barney', 'Mr. Slate', 1/7/2010 3, 1/3/2010, 'Noname', null, null Table B MasterID, Added Date, Added By, Updated Date, Updated By, 1, 1/3/2010, 'Wilma', 'The Great Kazoo', 1/5/2010 2, 1/4/2010, 'Betty', 'Dino', 1/4/2010 Table C MasterID, Added Date, Added By, Updated Date, Updated By, 1, 1/5/2010, 'Pebbles', null, null 2, 1/6/2010, 'BamBam', null, null Table D MasterID, Added Date, Added By, Updated Date, Updated By, 1, 1/2/2010, 'Noname', null, null 3, 1/4/2010, 'Wilma', null, null I need to return the max added date and corresponding user, and max updated date and corresponding user for each distinct record when tables A,B,C&D are UNION'ed, i.e.: 1, 1/5/2010, 'Pebbles', 'The Great Kazoo', 1/5/2010 2, 1/6/2010, 'BamBam', 'Mr. Slate', 1/7/2010 3, 1/4/2010, 'Wilma', null, null I know how to do this with one date/user per row, but with two is beyond me. DBMS is SQL Server 2005. T-SQL solution preferred. Thanks in advance, Dave
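
    One possible shape of the query in SQL Server 2005 is to UNION ALL the four tables once and use ROW_NUMBER to pick the latest "added" and latest "updated" row per MasterID. A sketch, assuming all four tables share the column layout above (column names are compressed to remove spaces):

        ;WITH AllRows AS (
            SELECT MasterID, AddedDate, AddedBy, UpdatedDate, UpdatedBy FROM TableA
            UNION ALL SELECT MasterID, AddedDate, AddedBy, UpdatedDate, UpdatedBy FROM TableB
            UNION ALL SELECT MasterID, AddedDate, AddedBy, UpdatedDate, UpdatedBy FROM TableC
            UNION ALL SELECT MasterID, AddedDate, AddedBy, UpdatedDate, UpdatedBy FROM TableD
        ),
        LastAdded AS (
            SELECT MasterID, AddedDate, AddedBy,
                   ROW_NUMBER() OVER (PARTITION BY MasterID ORDER BY AddedDate DESC) AS rn
            FROM AllRows
        ),
        LastUpdated AS (
            SELECT MasterID, UpdatedDate, UpdatedBy,
                   ROW_NUMBER() OVER (PARTITION BY MasterID ORDER BY UpdatedDate DESC) AS rn
            FROM AllRows
            WHERE UpdatedDate IS NOT NULL
        )
        SELECT a.MasterID, a.AddedDate, a.AddedBy, u.UpdatedBy, u.UpdatedDate
        FROM LastAdded a
        LEFT JOIN LastUpdated u
               ON u.MasterID = a.MasterID AND u.rn = 1
        WHERE a.rn = 1
        ORDER BY a.MasterID;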

    Read the article

  • Polynomial division overloading operator (solved)

    - by Vlad
    Ok. here's the operations i successfully code so far thank's to your help: Adittion: polinom operator+(const polinom& P) const { polinom Result; constIter i = poly.begin(), j = P.poly.begin(); while (i != poly.end() && j != P.poly.end()) { //logic while both iterators are valid if (i->pow > j->pow) { //if the current term's degree of the first polynomial is bigger Result.insert(i->coef, i->pow); i++; } else if (j->pow > i->pow) { // if the other polynomial's term degree is bigger Result.insert(j->coef, j->pow); j++; } else { // if both are equal Result.insert(i->coef + j->coef, i->pow); i++; j++; } } //handle the remaining items in each list //note: at least one will be equal to end(), but that loop will simply be skipped while (i != poly.end()) { Result.insert(i->coef, i->pow); ++i; } while (j != P.poly.end()) { Result.insert(j->coef, j->pow); ++j; } return Result; } Subtraction: polinom operator-(const polinom& P) const //fixed prototype re. const-correctness { polinom Result; constIter i = poly.begin(), j = P.poly.begin(); while (i != poly.end() && j != P.poly.end()) { //logic while both iterators are valid if (i->pow > j->pow) { //if the current term's degree of the first polynomial is bigger Result.insert(-(i->coef), i->pow); i++; } else if (j->pow > i->pow) { // if the other polynomial's term degree is bigger Result.insert(-(j->coef), j->pow); j++; } else { // if both are equal Result.insert(i->coef - j->coef, i->pow); i++; j++; } } //handle the remaining items in each list //note: at least one will be equal to end(), but that loop will simply be skipped while (i != poly.end()) { Result.insert(i->coef, i->pow); ++i; } while (j != P.poly.end()) { Result.insert(j->coef, j->pow); ++j; } return Result; } Multiplication: polinom operator*(const polinom& P) const { polinom Result; constIter i, j, lastItem = Result.poly.end(); Iter it1, it2, first, last; int nr_matches; for (i = poly.begin() ; i != poly.end(); i++) { for (j = P.poly.begin(); j != P.poly.end(); j++) Result.insert(i->coef * j->coef, i->pow + j->pow); } Result.poly.sort(SortDescending()); lastItem--; while (true) { nr_matches = 0; for (it1 = Result.poly.begin(); it1 != lastItem; it1++) { first = it1; last = it1; first++; for (it2 = first; it2 != Result.poly.end(); it2++) { if (it2->pow == it1->pow) { it1->coef += it2->coef; nr_matches++; } } nr_matches++; do { last++; nr_matches--; } while (nr_matches != 0); Result.poly.erase(first, last); } if (nr_matches == 0) break; } return Result; } Division(Edited): polinom operator/(const polinom& P) const { polinom Result, temp2; polinom temp = *this; Iter i = temp.poly.begin(); constIter j = P.poly.begin(); int resultSize = 0; if (temp.poly.size() < 2) { if (i->pow >= j->pow) { Result.insert(i->coef / j->coef, i->pow - j->pow); temp = temp - Result * P; } else { Result.insert(0, 0); } } else { while (true) { if (i->pow >= j->pow) { Result.insert(i->coef / j->coef, i->pow - j->pow); if (Result.poly.size() < 2) temp2 = Result; else { temp2 = Result; resultSize = Result.poly.size(); for (int k = 1 ; k != resultSize; k++) temp2.poly.pop_front(); } temp = temp - temp2 * P; } else break; } } return Result; } }; The first three are working correctly but division doesn't as it seems the program is in a infinite loop. Final Update After listening to Dave, I finally made it by overloading both / and & to return the quotient and the remainder so thanks a lot everyone for your help and especially you Dave for your great idea! P.S. 
    If anyone wants me to post these 2 overloaded operators, please ask by commenting on my post (and maybe give a vote up to everyone involved).

    Read the article

  • WPF DataGrid default column types

    - by Trindaz
    Hi, I'm using a DataGrid to display 2 possible types of DataRow in a DataTable. One type has the column Parent = NULL and the other has Parent set to another DataRow in the same DataTable. The list of column in the DataTable is always different, so explicitly describing each column is not possible. I want to display a UserControl in every cell of the Parent = DataRow rows, and default Text / Check boxes for the Parent = NULL rows. My first strategy is to try and set the default Column type for all automatically generated columns to be a DataGridTemplateColumn, regardless of datatype, so that I can use styles to then use either my UserControl or CheckBox or TextBox where required. How can I do this? More importantly, though, is there a better strategy than this? Cheers, Dave

    Read the article

  • How to transpose rows and columns in Access 2003?

    - by Lisa Schwaiger
    How do I transpose rows and columns in Access 2003? I have a multiple tables that I need to do this on. (I've reworded my question because feedback tells me it was confusing how I originally stated it.) Each table has 30 fields and 20 records. Lets say my fields are Name, Weight, Zip Code, Quality4, Quality5, Quality6 through Quality30 which is favorite movie. Let's say the records each describe a person. The people are Alice, Betty, Chuck, Dave, Edward etc through Tommy.. I can easily make a report like this: >>Alice...120....35055---etc, etc, etc...Jaws Betty....125....35212...etc, etc, etc...StarWars etc etc etc Tommy...200...35213...etc, etc, etc...Adaptation But what I would like to do is transpose those rows and columns so my report displays like this >>Alice........Betty......etc,etc,etc...Tommy 120.........125........etc, etc, etc...200 35055.....35212....etc, etc, etc...35213 etc etc etc Jaws...StarWars..etc,etc,etc...Adaptation Thanks for any help.

    Read the article

  • connect to web API from .NET

    - by Saif Khan
    How can I access and consume a web API from .NET? The API is not a .NET API. Here is sample code I have in Ruby require 'uri' require 'net/http' url = URI.parse("http://account.codebasehq.com/widgets/tickets") req = Net::HTTP::Post.new(url.path) req.basic_auth('dave', '6b2579a03c2e8825a5fd0a9b4390d15571f3674d') req.add_field('Content-type', 'application/xml') req.add_field('Accept', 'application/xml') xml = "<ticket><summary>My Example Ticket</summary><status-id>1234</status-id><priority-id>1234</priority-id><ticket-type>bug</ticket-type></ticket>" res = Net::HTTP.new(url.host, url.port).start {|http| http.request(req, xml)} case res when Net::HTTPCreated puts "Record was created successfully." else puts "An error occurred while adding this record" end Where can I find information on consuming API like this from .NET? I am aware how to use .NET webservices.
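
    A rough .NET equivalent of the Ruby snippet above, using System.Net.HttpWebRequest (a sketch only; the URL, credentials and XML body are the same placeholders used in the Ruby code, and note that .NET raises a WebException for non-2xx responses rather than returning them):

        using System;
        using System.Net;
        using System.Text;

        class CodebaseClient
        {
            static void Main()
            {
                string xml = "<ticket><summary>My Example Ticket</summary>" +
                             "<status-id>1234</status-id><priority-id>1234</priority-id>" +
                             "<ticket-type>bug</ticket-type></ticket>";

                var req = (HttpWebRequest)WebRequest.Create("http://account.codebasehq.com/widgets/tickets");
                req.Method = "POST";
                req.ContentType = "application/xml";
                req.Accept = "application/xml";

                // Basic auth header, equivalent to req.basic_auth in the Ruby example
                string token = Convert.ToBase64String(
                    Encoding.ASCII.GetBytes("dave:6b2579a03c2e8825a5fd0a9b4390d15571f3674d"));
                req.Headers["Authorization"] = "Basic " + token;

                byte[] body = Encoding.UTF8.GetBytes(xml);
                using (var stream = req.GetRequestStream())
                {
                    stream.Write(body, 0, body.Length);
                }

                using (var resp = (HttpWebResponse)req.GetResponse())
                {
                    Console.WriteLine(resp.StatusCode == HttpStatusCode.Created
                        ? "Record was created successfully."
                        : "An error occurred while adding this record");
                }
            }
        }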

    Read the article

  • Display action-specific authorisation message for [Authorize] attribute

    - by FreshCode
    Is there a way to display an action-specific authorisation message for when an [Authorize] or [Authorize(Roles="Administrator")] attribute redirects the user to the sign-in page? Ideally, [Authorize(Roles="Administrator", Message="I'm sorry Dave. I'm afraid I can't let you do that.")] public ActionResult SomeAdminFunction() { // do admin stuff return View(); } As I understand it, attributes are not meant to add functionality, but this seems purely informational. One could do this inside the action, but it seems inelegant compared to the use of an attribute. Alternatively, if (!Request.IsAuthenticated) { if (!User.IsInRole("Administrator")) SetMessage("You need to be an administrator to destroy worlds."); // write message to session stack return RedirectToAction("SignIn", "Account"); } Is there an existing way to do this or do I need to override the [Authorize] attribute?
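
    One way to get exactly that attribute syntax is to subclass AuthorizeAttribute and stash the message before the normal redirect happens. A sketch, assuming ASP.NET MVC 2 or later (where HandleUnauthorizedRequest is overridable); the attribute name and TempData key are arbitrary:

        using System.Web.Mvc;

        public class AuthorizeWithMessageAttribute : AuthorizeAttribute
        {
            // e.g. "I'm sorry Dave. I'm afraid I can't let you do that."
            public string Message { get; set; }

            protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext)
            {
                if (!string.IsNullOrEmpty(Message))
                    filterContext.Controller.TempData["AuthorizeMessage"] = Message;

                // Fall through to the usual redirect to the sign-in page
                base.HandleUnauthorizedRequest(filterContext);
            }
        }

    The action can then be decorated with [AuthorizeWithMessage(Roles = "Administrator", Message = "...")] and the sign-in view can read TempData["AuthorizeMessage"].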

    Read the article

  • Safari wrapping too early

    - by the Hampster
    I've created a class web page with a page for midterm review. It uses jsMath to turn TeX into nice math. (MathML looks awful.) Anyway, I would occasionally like to have several problems per line. Each problem is in its own <span>, so if it needs to wrap, it won't split the problem. It all seems to work, except that Safari for the Mac seems overly anxious to wrap, sometimes wrapping at 30% of the paragraph width. Even under inspection, it reports a width of 663px, but wrapping occurs at around 150px. There is no padding. Firefox renders just fine. A comparison is here: http://davehampson.net/Images/Safaribug.png Sometimes Safari works just fine. The original web page is here: http://math.davehampson.net/index3.php (study guide 2) I don't know if this is a bug in Safari, or if there is some odd/subtle CSS point I am missing. Any help would be appreciated. --Dave

    Read the article

  • Union of complex polygons

    - by grenade
    Given two polygons: POLYGON((1 0, 1 8, 6 4, 1 0)) POLYGON((4 1, 3 5, 4 9, 9 5, 4 1),(4 5, 5 7, 6 7, 4 4, 4 5)) How can I calculate the union (combined polygon)? Dave's example uses SQL Server to produce the union, but I need to accomplish the same in code. I'm looking for a mathematical formula or code example in any language that exposes the actual math. I am attempting to produce maps that combine countries dynamically into regions. I asked a related question here: http://stackoverflow.com/questions/2653812/grouping-geographical-shapes

    Read the article
