Search Results

Search found 14179 results on 568 pages for 'reportingservices 2008'.


  • The Case for Complimentary Software Copies

    - by GGBlogger
    As the Geriatric Geek you can understand that I’ve been writing and studying for over 60 years. That means that I’ve seen insane changes in the computer software industry. I’ve made the joke that I get a new college education every 6 months or so. Of course that’s an exaggeration, but it doesn’t make the feeling go away. I have a long-standing and strong relationship with Microsoft, so I’m armed with virtually every tool they make. It also means that I have access to tons of training material. But here’s the rub…

    Last year I started a definitive read of Professional Visual Basic 2008. The purpose was to fill in holes in my understanding of various things. I’m currently on page 1119 of some 1400 pages. During this sojourn I’ve decided that the future is web related, which is to say that “thick client” applications running as Windows applications are likely to slowly disappear. To that end I’ve taken a side trip or two into the world of ASP (including XML), Silverlight and cloud development. After carefully avoiding (that’s tongue in cheek) XML for years, I finally had to bite the bullet, so to speak, and start learning XML in earnest. The most recent result of that was trial downloads of Altova’s MissionKit 2010 for Software Architects and Liquid Technologies’ Liquid XML Studio Developer Edition. These are both beautiful products and I want to learn them and write about them.

    Now comes the rub… While 30-day evaluations are generous in allowing casual users to assess these technologies for purchase, they are NOT long enough to allow an author to evaluate, learn and ultimately write about them. Even if I devoted the full 30 days to learning, using and writing about, say, Altova’s suite, I wouldn’t have enough time. Liquid XML may be a little easier to learn (one product as opposed to eight). Add to that the fact that I frequently get sidetracked adding to my kit, and the schedule really blows out. It can be extremely frustrating when I’ve devoted hours to a project and suddenly discover that to complete it I will either need to purchase a license or abandon the project. Since my livelihood does not depend on the product, I end up abandoning the project and moving on.

    So, to the folks from whom I request complimentary copies: I guarantee that if I convert your product to doing paid development work I will purchase a license. But as long as I am using your product to study for the purpose of writing samples, teaching its use or otherwise promoting your product to other paying customers, I ask that you give me a license so that I can do that without facing the dread expiration of a 30-day trial.

    Read the article

  • Entity framework support for table valued functions and thus full text

    - by simonsabin
    One of my most popular posts, with over 10,000 hits, is on how to enable full-text searching when using LINQ to SQL: http://sqlblogcasts.com/blogs/simons/archive/2008/12/18/LINQ-to-SQL---Enabling-Fulltext-searching.aspx. Core to this is the use of a table valued function. I’m therefore interested to see that Entity Framework will support table valued functions in the next release. For more details, have a read of the efdesign blog: http://blogs.msdn.com/b/efdesign/archive/2011/01/21/table-valued-function-support...(read more)
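
    The idea from the linked post carries over directly: full-text predicates such as CONTAINSTABLE cannot be expressed in LINQ, but they can be wrapped in an inline table valued function that the ORM can map and compose over. A minimal sketch of such a function, assuming an illustrative dbo.Products table with a full-text index on its Name column (the table and column names are mine, not from the post):

        -- Wrap CONTAINSTABLE in an inline TVF so an ORM can map it
        CREATE FUNCTION dbo.udf_SearchProducts (@searchText NVARCHAR(4000))
        RETURNS TABLE
        AS
        RETURN
        (
            SELECT p.ProductID,
                   p.Name,
                   ft.[RANK] AS Relevance   -- full-text rank, useful for ordering results
            FROM CONTAINSTABLE(dbo.Products, Name, @searchText) AS ft
            INNER JOIN dbo.Products AS p ON p.ProductID = ft.[KEY]
        );

    Because the function is an inline TVF, a LINQ query over it can still add filters, ordering and paging that SQL Server folds into a single plan.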

    Read the article

  • Endeca Information Discovery 3-Day Hands-on Training Workshop

    - by Mike.Hallett(at)Oracle-BI&EPM
    For Oracle Partners, on October 3-5, 2012 in Milan, Italy: Register here. Endeca Information Discovery plays a key role in your big data analysis and complements Oracle Business Intelligence solutions such as OBIEE. This FREE hands-on workshop for Oracle Partners highlights technical know-how of the product and helps you understand its value proposition. We will walk you through four key components of the product:

    - Oracle Endeca Server: a highly scalable, search-analytical database that derives the data model from the data presented to it, thereby reducing data modeling requirements.
    - Studio: a highly interactive, component-based user interface for configuring advanced, yet intuitive, analytical applications.
    - Integration Suite: provides rapid unification and enrichment of diverse sources of information into a single integrated view.
    - Extensible Value-Added Modules: add-on modules that provide value quickly through configuration instead of custom coding.

    Topics covered will include Data Exploration with Endeca Information Discovery, Data Ingest, Project Lifecycle, Building an Endeca Server data model and advanced modeling techniques, and Working with Studio.

    Lab Outline: The labs showcase Oracle Endeca Information Discovery components and functionality by providing expertise on features and know-how of building such applications. The hands-on activities are based on a Quick Start application provided during the class.

    Audience: Oracle Partners, Big Data Analytics Developers and Architects, BI and EPM Application Developers and Implementers, Data Warehouse Developers.

    Equipment Requirements: Attendees must provide their own laptops for this class, meeting the following minimum hardware/software requirements.

    Hardware:
    - 8GB RAM is highly recommended (a 64-bit Windows machine is required)
    - 40 GB free space (includes staging)
    - USB 2.0 port (at least one available)

    Software:
    - One of the following operating systems: a 64-bit Windows host/laptop OS (Windows 7 or Windows Server 2008), or a 64-bit host/laptop OS with a Windows VM (Server, Win 7, BIC2g, etc.)
    - Internet Explorer 8.x, Firefox 3.6 or Firefox 6.0
    - WinRAR or 7-Zip to unzip workshop files, downloadable from http://www.win-rar.com/download.html or http://www.7zip.com/

    Register here: October 3-5, 2012, Cinisello Balsamo, Milan. We will confirm your place within 2 weeks. Questions? Send email to [email protected] - Oracle Platform Technologies Enablement Services.

    Read the article

  • Will HTML5 make Silverlight redundant?

    - by Laila
    One of the great features of Adobe AIR v2, which launched this month, is its support for some of the 2008 draft of HTML5. The HTML5 specification was started in 2004, but the full spec will probably not be approved by W3C until around 2022. One might have thought it would take years yet to reach the point where any browsers were remotely HTML5-compliant, but enough of HTML5 is published and agreed to make a lot of it possible, and Safari and Adobe have got there thanks to Apple's open-source WebKit.

    The race for HTML5 has been fuelled by the demand from Apple and Google for advanced graphics, typography, animations and transitions without having to rely on third-party browser plug-ins such as Adobe Flash or Silverlight. There is good reason for this haste: Flash doesn't support touch devices and has been slow in supporting hardware video decoders such as H.264. There is a strong requirement to do all that Flash can do in an open-standards way.

    Those with proprietary solutions remain sniffy. In AIR 2, Adobe pointedly disables the HTML5 <video> and <audio> tags that allow basic playing of media content, saying that the specification is not final and there is still no standard for the supported formats, and adding that Safari implements a 'disjoint set' of codecs. Microsoft also has little interest in HTML5, as it has so much invested in Silverlight. Google stands to gain from Adobe AIR for Android, as it will allow a lot of applications to be migrated easily to the platform, so it sees Apple's war on Flash as a way of gaining market share.

    Why do we care? Because HTML5/CSS3 provides facilities far beyond HTML4, bringing the reality of browser-based applications a lot closer. Probably most generally useful is the advanced typography: Safari and AIR both already support a way of reflowing text in a container across an arbitrary number of columns, and page-specific fonts can also be specified. Then there is 2D drawing, video, transitions, local storage, AJAX navigation and mutable DOM prototypes.

    HTML5 is likely to provide the base functionality that is required, but it is too early to be certain that it will render Flash, Silverlight or JavaFX obsolete. In the meantime, Adobe AIR provides the best vehicle for developing HTML5/CSS3 applications without a twinge of worry about browser incompatibilities. Cheers, Laila

    Read the article

  • Pro SharePoint 2010 Business Intelligence Solutions

    - by Sahil Malik
    Oh yeah baby, it’s out finally! This book is what I have wanted to write for so long now, but never really got a chance to. For SharePoint 2007, I authored the SharePoint section of “Smart BI Solutions with SQL Server 2008” for MS Press, but never really got the time to author the full book that this topic deserved. With SharePoint 2010, we finally have a full book on this topic.

    So first things first: I didn’t actually write it. My role was limited to the overall concept, the outline, the layout, the completion of it, code samples, identifying what we needed in here, vouching for technical accuracy, identifying authors, etc. The real work was done by Srini (5 chapters) and Steve (1 chapter). So, credit given where it is due.

    With that said, this is a pretty good book. It has always been a challenge to find the superman who knows both data warehousing concepts and SharePoint concepts. The data warehousing concepts include the basics you need to know to work in the BI area, such as cubes, MDX queries, etc. Chapter 1 covers that - if you’re a hardcore DBA, feel free to skip it. Beyond that, we take every single SharePoint 2010 BI topic and slice and dice it in detail. The topics we deal with are:

    - Visio Services
    - Reporting Services
    - Business Connectivity Services
    - Excel Services
    - PerformancePoint Services

    In covering each of these topics, we followed a general layout for each chapter to ensure completeness of content. We make sure we cover setup-related issues and advice, point-and-click usage, code usage (i.e. extensibility using Visual Studio), and a walkthrough of the administration side of things, including PowerShell. (Yes, I insisted on that being in every chapter.)

    Writing a book is always a lot of work, so we hope you find it useful. And it should go very well with the other book I just reviewed, which is Microsoft ADO.NET 4, Step by Step. Comment on the article ....

    Read the article

  • How Big Data and Social Won the Election

    - by Mike Stiles
    The story of big data’s influence on the outcome of the US presidential election is worth a good look, because a) it’s a harbinger of things to come, and b) it’s an example of similar successes available to any enterprise seriously resourcing integrated big data, modeling, and data-driven execution on all assets, including social.

    Obama campaign manager Jim Messina fielded a data and analytics brain trust 5 times larger than in 2008. Back then, there were numerous databases from various sources, few of them talking to each other. This time, the mission was to be metrics-centered and measure everything measurable, in context with all the other data. Big data showed them exactly what they needed to know and told them what to do about it. It showed them women 40-49 on the west coast would donate big money if they got to eat with George Clooney. Women on the east coast would pony up to hang out with Sarah Jessica Parker. Extensive daily modeling showed them what kinds of email appeals, from whom, and to whom, would prove most successful in raising cash, recruiting volunteers, and getting out the vote. Swing state voters were profiled and approached with more customized targeting than at any time in history. Ads were purchased on specific shows watched by the targets, increasing efficiency 14% over traditional media buys. For all the criticism of the candidate’s focus on appearing on comedy and entertainment shows, and local radio morning shows, that’s where the data sent them to reach the voters most likely to turn out for them.

    And then there was social. Again, more than in any other election, Facebook was used for virtual, highly efficient door-to-door canvassing. Facebook fans got pictures of friends in swing states and were asked to encourage them to act. Using that approach, 1 in 5 peer-to-peer appeals led to the desired action. Assumptions, gut, intuition, and campaign experience all took a backseat to strategy shifts solidly backed up by data. Zeroing in on demographics likely to back the President and tracking their mood daily changed the voter landscape; the Romney team watched Obama voters appear seemingly out of thin air. One Obama campaign aide said, “We ran the election 66,000 times every night.”

    Which brings us to your organization. If you’re starting to feel like the battle cry of “but this is the way we’ve always done it” is putting you in an extremely vulnerable position, you’re right. Social has become a key communication tool of the 21st century. Failing to use it, or failing to invest in a deep understanding of who your customers and prospects are so the content you post there will achieve desired actions and results, will leave you waking up one morning wondering, “What happened?”

    @mikestiles (Photo: stock.xchng)

    Read the article

  • It's Here! Visual Studio 2010 and ASP.NET 4.0 Ship

    Today Microsoft released Visual Studio 2010 and ASP.NET 4.0. I've been using the RC version of Visual Studio 2010 quite a bit for the past couple of months and have really grown to like it. It has a host of features and enhancements that improve developer productivity, from improved IntelliSense to better multiple monitor support. Plus there's something about the user experience that, to me, makes it feel better than Visual Studio 2008. I don't know if it's the new blue color motif or what, but the IDE seems more modern looking and more responsive to my mouse movements and other input.

    Anyway, if you've not yet downloaded Visual Studio 2010 and ASP.NET 4.0, why not? As with previous versions of Visual Studio, there's a free Express Edition, and VS2010 and ASP.NET 4.0 run side-by-side with earlier versions of Visual Studio and ASP.NET. And with Visual Studio 2010's multi-targeting you can even use VS2010 as your development editor for ASP.NET 2.0 and ASP.NET 3.5 web applications. (Although be forewarned, if you have multiple developers working on the application, that the project files in VS2010 and earlier versions of Visual Studio differ.)

    This week's article on 4Guys explores my favorite new features of Visual Studio 2010. Here's an excerpt:

        The Visual Studio 2010 user experience is noticeably different than with previous versions. Some of the changes are cosmetic - gone is the decades-old red and orange color scheme, having been replaced with blues and purples - while others are more substantial. For instance, the Visual Studio 2010 shell was rewritten from the ground up to use Microsoft's Windows Presentation Foundation (WPF). In addition to an updated user experience, Visual Studio introduces an array of new features designed to improve developer productivity. There are new tools for searching for files, types, and class members; it's now easier than ever to use IntelliSense; the Toolbox can be searched using the keyboard; and you can use a single editor - Visual Studio 2010 - to work on… This article explores some of the new features in Visual Studio 2010. It is not meant to be an exhaustive list, but rather highlights those features that I, as an ASP.NET developer, find most useful in my line of work.

    Read on to learn more! And, in closing, here are some helpful VS2010 and ASP.NET 4.0 links:

    - One click installation for ASP.NET 4.0, Visual Web Developer 2010, .NET Framework 4.0, and ASP.NET MVC 2
    - Eight Quick Hit videos showing some of the cool new VS2010 features
    - VS2010 and ASP.NET 4.0 Release Announcement with some great info/links from none other than Scott Guthrie

    Happy Programming!

    Read the article

  • Restoring MSDB

    - by David-Betteridge
    We recently performed a disaster recovery exercise which included the restoration of the MSDB database onto our DR server. I did a quick Google to see if there were any special considerations and found the following MS article: Considerations for Restoring the model and msdb Databases (http://msdn.microsoft.com/en-us/library/ms190749(v=sql.105).aspx). It said both the original and replacement servers must be on the same version. I double-checked, and in my case they are both SQL Server 2008 R2 SP1 (10.50.2500). So I went ahead and stopped SQL Server Agent, restored the database and restarted the agent. I checked the jobs and they were all there; everything looked great, and it was, until the server was rebooted a few days later.

    Then the syspolicy_purge_history job started failing on the 3rd step with the error message "Unable to start execution of step 3 (reason: The PowerShell subsystem failed to load [see the SQLAGENT.OUT file for details]; The job has been suspended). The step failed."

    A bit more googling pointed me to the msdb.dbo.syssubsystems table:

        SELECT * FROM msdb.dbo.syssubsystems WHERE start_entry_point = 'PowerShellStart';

    And in particular to the value of subsystem_dll. It still had the path to SQLPOWERSHELLSS.DLL, but on the old server. The DR instance has a different name to the live instance, and so the paths are different. This was quickly fixed with the following SQL:

        USE msdb;
        GO
        sp_configure 'allow updates', 1;
        RECONFIGURE WITH OVERRIDE;
        GO
        UPDATE msdb.dbo.syssubsystems
        SET subsystem_dll = 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.DR\MSSQL\binn\SQLPOWERSHELLSS.DLL'
        WHERE start_entry_point = 'PowerShellStart';
        GO
        sp_configure 'allow updates', 0;
        RECONFIGURE WITH OVERRIDE;
        GO

    I stopped and started SQL Server Agent, and now the job completes. I then wondered if anything else might be broken:

        SELECT subsystem_dll FROM msdb.dbo.syssubsystems;

    This shows a further 10 wrong paths - fortunately for parts of SQL Server (replication, SSIS etc.) we aren't using!

    Lessons learnt:
    1. DR exercises are a good thing!
    2. Keep the Live and DR environments as similar as possible.
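
    Since all of the stale entries differ only in the instance folder name, they could in principle be corrected in a single pass. A hedged sketch, assuming the live instance folder was MSSQL10_50.LIVE and the DR folder is MSSQL10_50.DR (substitute your actual instance folder names), run with 'allow updates' toggled exactly as above:

        -- Rewrite every subsystem path from the old instance folder to the new one
        UPDATE msdb.dbo.syssubsystems
        SET subsystem_dll = REPLACE(subsystem_dll, 'MSSQL10_50.LIVE', 'MSSQL10_50.DR');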

    Read the article

  • Exchange 2010, Exchange 2003 Mail Flow issue

    - by Ryan Roussel
    While performing the initial Exchange 2010 deployment for a customer migrating from Exchange 2003, I ran into an issue with mail flow between the two environments. The Exchange 2003 mailboxes could send to Exchange 2010, as well as to and from the internet. Exchange 2010 mailboxes could send and receive to the internet; however, they could not send to Exchange 2003 mailboxes.

    After scouring the internet for a solution, it seemed quite a few people were experiencing this issue with no resolution to be found, or at least not easily. After many attempts at manually deleting and recreating the routing group connectors, I finally lucked onto the answer in an obscure comment left to another blogger: if inheritable permissions are not allowed on the Exchange 2003 object in Active Directory, Exchange Server authentication cannot be achieved between the servers.

    It seems when BlackBerry Enterprise Server gets added to 2003 environments, a lot of admins get tricky and add the BES Admin user explicitly to the server object to allow inheritance down from there to all mailboxes. The problem is they also coincidentally turn off inheritance to the server object itself from its parent containers. You can re-establish inheritance without overwriting the existing ACL, however, so that the BES Admin can remain in the server object ACL. By re-establishing inheritance to the 2003 server object, mail flow was instantly restored between the servers.

    To re-establish inheritance:
    1. Open ADSIEdit by adding the snap-in to an MMC (it should be included on your 2008 server where Exchange 2010 is installed).
    2. Navigate to Configuration > Services > Microsoft Exchange > Exchange Organization > Administrative Groups > First Administrative Group > Servers.
    3. In the right pane, right-click on the CN=Server Name of your Exchange 2003 server and select Properties.
    4. Navigate to the Security tab and hit Advanced toward the bottom.
    5. Check the checkbox that reads "include inheritable permissions" toward the bottom of the dialogue box.

    Read the article

  • Certification Notes: 70-583 Designing and Developing Windows Azure Applications

    - by BuckWoody
    Last Updated: 02/01/2011

    It’s time for another certification, and we’ve just released the 70-583 exam on Windows Azure. I’ve blogged my “study plans” here before for other certifications, so I thought I would do the same for this one. I’ll also need to take exams 70-513 and 70-516, but I’ll post my notes on those separately. None of these are “brain dumps” or any questions from the actual tests - just the books, links and notes I have from my studies. I’ll update these references as I’m studying, so bookmark this site and watch my Twitter and Facebook posts for when I’ll update them, or just subscribe to the RSS feed. A “Green” color on a check-block ([ ]) means I’ve done that part so far, red means I haven’t.

    [ ] First, I need to refresh my memory on some basic coding, so along with the Azure-specific information I’m reading the following general programming books:
        - Introducing Microsoft .NET (Pro-Developer): link
        - Head First C#, 2E: A Learner's Guide to Real-World Programming with Visual C# and .NET: link
        - Microsoft Visual C# 2008 Step by Step: link

    [ ] The first place to start is at the official site for the certification: link

    [ ] On that page you’ll find several resources, and the first you should follow is “Save to my learning” so you have a place to track everything. Then click the “Related Learning Plans” link and follow the videos and read the documentation in each of those bullets. There are six areas on the learning plan that you should focus on - make sure you open the learning plan to drill into the specifics.

    [ ] Designing Data Storage Architecture (18%)
        Books I’m Reading: / Links: / My Notes:

    [ ] Optimizing Data Access and Messaging (17%)
        Books I’m Reading: / Links: / My Notes:

    [ ] Designing the Application Architecture (19%)
        Books I’m Reading: Applied Architecture Patterns on the Microsoft Platform: link / Links: / My Notes:

    [ ] Preparing for Application and Service Deployment (15%)
        Books I’m Reading: / Links: / My Notes:

    [ ] Investigating and Analyzing Applications (16%)
        Books I’m Reading: / Links: / My Notes:

    [ ] Designing Integrated Solutions (15%)
        Books I’m Reading: Applied Architecture Patterns on the Microsoft Platform (2nd mention) / Links: / My Notes:

    Read the article

  • What's Bringing SharePoint 2007 Server to a Halt?

    - by juanlarios
    I've been having issues with my test environment and I'm hoping someone has run into this problem and can point me in the right direction. I noticed:

    - SharePoint server memory is through the roof at times, and so is the CPU usage. Most of the CPU usage is a SQL process.
    - We are running out of disk space all the time. I looked in the logs located in the 12 hive, and sure enough I have 1GB log files that are hard to open because of their size.

    The following error messages are flooding my SharePoint logs:

    04/05/2010 16:02:36.99 OWSTIMER.EXE (0x0B94) 0x0BA4 Windows SharePoint Services Timer 5uuf Monitorable The previous instance of the timer job 'Variations Propagate Page Job Definition', id '{F9A73EB4-90FE-4574-AD99-B4034056F915}' for service '{F89169F9-707B-4588-9ED0-E6D399FE5E3D}' is still running, so the current instance will be skipped. Consider increasing the interval between jobs.

    04/05/2010 15:59:51.51 OWSTIMER.EXE (0x0B94) 0x0BA4 Windows SharePoint Services Timer 5uuf Monitorable The previous instance of the timer job 'Profile Synchronization', id '{A05E3439-8DCD-449A-9D9E-46D601CACAA2}' for service '{F89169F9-707B-4588-9ED0-E6D399FE5E3D}' is still running, so the current instance will be skipped. Consider increasing the interval between jobs.

    04/05/2010 15:56:25.53 OWSTIMER.EXE (0x0B94) 0x0BA4 Windows SharePoint Services Timer 5uuf Monitorable The previous instance of the timer job 'Scheduled Unpublish', id '{6298F93F-388D-46B9-809E-CEDBB8659661}' for service '{F89169F9-707B-4588-9ED0-E6D399FE5E3D}' is still running, so the current instance will be skipped. Consider increasing the interval between jobs.

    04/05/2010 15:54:14.73 OWSTIMER.EXE (0x0B94) 0x0BA4 Windows SharePoint Services Timer 5uuf Monitorable The previous instance of the timer job 'Config Refresh', id '{C42DA970-3DA3-4AA2-94E5-8499C5B80A3E}' for service '{7F6D2CBE-8071-4A30-B313-7C9989FC2D87}' is still running, so the current instance will be skipped. Consider increasing the interval between jobs.

    I'm googling around but haven't found much. I know one other person posted something about this back in 2008, but no answers were reached. I have already checked the databases to see if any of them have gone offline for whatever reason, but from SQL everything is fine. I recently created a new SSP and deleted an old one, so I thought maybe that was causing it - and who knows, maybe that causes some of the problems or maybe all. I'm running the configuration wizard to see if anything changes. Please, if someone has had similar issues, let me know.
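
    Since most of the CPU is reportedly a SQL Server process, one quick check is to ask SQL Server what it is actually running. A minimal sketch using the standard dynamic management views (SQL Server 2005 and later), run against the instance hosting the SharePoint databases:

        -- Top current requests by CPU, with the statement text
        SELECT TOP (5)
            r.session_id,
            r.cpu_time,
            r.total_elapsed_time,
            DB_NAME(r.database_id) AS database_name,
            t.text AS statement_text
        FROM sys.dm_exec_requests AS r
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
        ORDER BY r.cpu_time DESC;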

    Read the article

  • SQL SERVER – ERROR: FIX using Compatibility Level – Database diagram support objects cannot be installed because this database does not have a valid owner – Part 2

    - by pinaldave
    Earlier I wrote a blog post about how to resolve the error with the database diagram. Today I faced the same error when I was dealing with a database which had been upgraded from SQL Server 2005 to SQL Server 2008 R2. When I was searching for the solution online I ended up on my own earlier solution, SQL SERVER – ERROR: FIX – Database diagram support objects cannot be installed because this database does not have a valid owner. I really found it interesting that I ended up on my own solution. However, the solution to the problem this time was a bit different. Let us see how we can resolve the same.

    Error: Database diagram support objects cannot be installed because this database does not have a valid owner. To continue, first use the Files page of the Database Properties dialog box or the ALTER AUTHORIZATION statement to set the database owner to a valid login, then add the database diagram support objects.

    Workaround / Fix / Solution: Follow the steps listed below and it should for sure solve your problem. (NOTE: Please try this for databases upgraded from a previous version. Everybody else should just follow the steps mentioned here.)

    1. Select your database >> Right Click >> Select Properties
    2. Go to the Options page
    3. In the dropdown at right labeled “Compatibility Level”, choose “SQL Server 2005 (90)”
    4. Select Files on the left side of the page
    5. In the Owner box, select the button which has three dots (…) in it
    6. Now select user ‘sa’ or NT AUTHORITY\SYSTEM and click OK.

    This will solve your problem. However, there is one very important note you must consider. When you change any database owner, there are always security-related implications. I suggest you check your security policies before changing authorization. I did this to quickly solve my problem on my development server. If you are on a production server, you may open yourself to a potential security compromise.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
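
    For those who prefer to script the same fix, here is a minimal T-SQL sketch, assuming a database named YourDb (substitute your own database name). As the error message itself suggests, ALTER AUTHORIZATION sets the owner; the compatibility level change matches step 3 above:

        -- Set a valid owner, as the error message recommends
        ALTER AUTHORIZATION ON DATABASE::YourDb TO sa;
        GO
        -- Equivalent of choosing "SQL Server 2005 (90)" on the Options page
        ALTER DATABASE YourDb SET COMPATIBILITY_LEVEL = 90;
        GO

    The same security caveat applies when scripting it: changing the owner has security implications, so check your policies first.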

    Read the article

  • Emaroo 1.4.0 Released

    - by WeigeltRo
    Emaroo is a free utility for browsing the most recently used (MRU) lists of various applications. Quickly open files, jump to their folder in Windows Explorer, copy their path - all with just a few keystrokes or mouse clicks. tl;dr: Emaroo 1.4.0 is out, go download it on www.roland-weigelt.de/emaroo

    Why Emaroo? Let me give you a few examples. Let’s assume you have pinned Emaroo to the first spot on the task bar so you can start it by hitting Win+1. To start one of the most recently used Visual Studio solutions you type Win+1, [maybe arrow key down a few times], Enter. This means that you can start the most recent solution simply by Win+1, Enter.

    What else?

    - If you want to open an Explorer window at the file location of the solution, you type Ctrl+E instead of Enter.
    - If you know that the solution contains “foo” in its name, you can type “foo” to filter the list. Because this is not a general-purpose search like e.g. the Search charm, but instead operates only on the MRU list of a single application, you usually have to type only a few characters until you can press Enter or Ctrl+E.
    - Ctrl+C copies the file path of the selected MRU item; Ctrl+Shift+C copies the directory.
    - If you have several versions of Visual Studio installed, the context menu lets you open a solution in a higher version.
    - Using the context menu, you can open a Visual Studio solution in Blend.

    So far I have only mentioned Visual Studio, but Emaroo knows about other applications, too. It remembers the last application you used, and you can change between applications with the left/right arrow or accelerator keys. Press F1 or click the Emaroo icon (the tab to the right) for a quick reference.

    Which applications does Emaroo know about? Emaroo knows the MRU lists of:

    - Visual Studio 2008/2010/2012/2013
    - Expression Blend 4, Blend for Visual Studio 2012, Blend for Visual Studio 2013
    - Microsoft Word 2007/2010/2013
    - Microsoft Excel 2007/2010/2013
    - Microsoft PowerPoint 2007/2010/2013
    - Photoshop CS6
    - IrfanView (most recently used directories)
    - Windows Explorer (directories most recently typed into the address bar)

    Applications that aren’t installed aren’t shown, of course.

    Where can I download it? On the Emaroo website: www.roland-weigelt.de/emaroo

    Have fun!

    Read the article

  • Has test driven development (TDD) actually benefited a real world project?

    - by James
    I am not new to coding. I have been coding (seriously) for over 15 years now, and I have always had some testing for my code. However, over the last few months I have been learning test-driven design/development (TDD) using Ruby on Rails. So far, I'm not seeing the benefit. I see some benefit to writing tests for some things, but very few. And while I like the idea of writing the test first, I find I spend substantially more time trying to debug my tests to get them to say what I really mean than I do debugging actual code. This is probably because the test code is often substantially more complicated than the code it tests. I hope this is just inexperience with the available tools (RSpec in this case). I must say though, at this point, the level of frustration mixed with the disappointing lack of payoff is beyond unacceptable. So far, the only value I'm seeing from TDD is a growing library of RSpec files that serve as templates for other projects/files. Which is not much more useful, maybe less useful, than the actual project code files.

    In reading the available literature, I notice that TDD seems to be a massive time sink up front, but pays off in the end. I'm just wondering: are there any real-world examples? Does this massive frustration ever pay off in the real world? I really hope I did not miss this question somewhere else on here. I searched, but all the questions/answers are several years old at this point. It was a rare occasion when I found a developer who would say anything bad about TDD, which is why I have spent as much time on this as I have. However, I noticed that nobody seems to point to specific real-world examples.

    I did read one answer that said the guy debugging the code in 2011 would thank you for having a complete unit testing suite (I think that comment was made in 2008). So, I'm just wondering: after all these years, do we finally have any examples showing the payoff is real? Has anybody actually inherited or gone back to code that was designed/developed with TDD and has a complete set of unit tests and actually felt a payoff? Or did you find that you were spending so much time trying to figure out what the test was testing (and why it was important) that you just tossed out the whole mess and dug into the code?

    Read the article

  • Hadoop, NOSQL, and the Relational Model

    - by Phil Factor
    (Guest Editorial for the IT Pro/SysAdmin Newsletter)

    Whereas relational databases fit the world of commerce like a glove, it is useless to pretend that they are a perfect fit for all human endeavours. Although, with SQL Server, we’ve made great strides in indexing text, in processing spatial data and in processing markup, there is still a problem in dealing efficiently with large volumes of ephemeral semi-structured data. Key-value stores such as Cassandra, Project Voldemort, and Riak are of great value for ephemeral data, and seem of equal value as a data feed that provides aggregations to an RDBMS. However, document databases such as MongoDB and CouchDB are ideal for semi-structured data for which no fixed schema exists; analytics and logging are obvious examples.

    NoSQL products, such as MongoDB, tackle the semi-structured data problem with panache. MongoDB is designed with a simple document-oriented data model that scales horizontally across multiple servers. It doesn’t impose a schema, and relies on the application to enforce the data structure. This is another take on the old ‘EAV’ problem (where you don’t know in advance all the attributes of a particular entity). It uses a clever replica set design that allows automatic failover, and uses journaling for data durability. It allows indexing and ad-hoc querying.

    However, for SQL Server users, the obvious choice for handling semi-structured data is Apache Hadoop. There will soon be an ODBC driver for Apache Hive and an add-in for Excel. Additionally, there are now two Hadoop-based connectors for SQL Server: the Apache Hadoop connector for SQL Server 2008 R2, and the SQL Server Parallel Data Warehouse (PDW) connector. We can connect to Hadoop, process the semi-structured data and then store it in SQL Server.

    For one steeped in the culture of relational SQL databases, I might be expected to throw up my hands in the air in a gesture of contempt for a technology that was, judging by the overblown journalism on the subject, about to make my own profession as archaic as the saggar maker's bottom knocker (a potter’s assistant who helped the saggar maker to make the bottom of the saggar by placing clay in a metal hoop and bashing it). However, on the contrary, I find that I'm delighted with the advances made by the NoSQL databases in the past few years. Having the flow of ideas from the NoSQL providers will knock any trace of complacency out of the providers of relational databases and inspire them into back-fitting features such as horizontal scaling, with sharding and automatic failover, into SQL-based RDBMSs. It will do the breed a power of good to benefit from all this lateral thinking.

    Read the article

  • The Birth of SSAS Compare

    - by Red Gate Software BI Tools Team
    Noemi Moreno, Red Gate Business Intelligence Specialist

    Software vendors - even Microsoft - tend to forget about the needs of business intelligence developers. We are a rare and rather invisible species. For example, BIDS remained in VS 2008 until SQL Server 2012, and it took until this release before we got something as simple as an “undo” function.

    Before I joined Red Gate as a BI specialist, I worked on SQL development. I’ll never forget the time I discovered Red Gate’s SQL Compare tool and how it reduced the task of preparing a database release from a couple of days to ten minutes. When I moved to SSAS, MDX and cubes, I became frustrated with the deployment process because I couldn’t find a tool that made cube releases as easy as they are with SQL Compare. This became my quest.

    I pitched the idea to a few people in Red Gate’s regular Down Tools Week, when everyone puts down their day-to-day tasks and works on their own projects. My task was to persuade a roomful of cynical developers, hardened to the blandishments of project managers, to help develop a tool that would compare two different SSAS databases and create the script to process only the objects that needed processing, thereby reducing release time to only a few minutes.

    I walked to the podium and gave them the full story of the distressed BI specialists, doomed to spend tedious hours preparing deployment scripts. A few developers recovered from their torpor to cast a languid eye at my presentation. It wasn’t enough. In a sudden impulse, I blurted out a promise to perform a flamenco dance for the team if the tool was able to successfully compare two SSAS databases and generate a script by the end of the week. I was lucky enough that some of them believed me and jumped in: David Pond (Dev), Matt Burton (Dev), Tilman Bregler (Dev), Shobana Sekar (Test), Ruchija Raj (Test), Nick Sutherland (Product Manager) and Irma Tanovic (BI). They didn’t know that Irma and I would be away at a conference in Amsterdam and would leave them without our support. But to my surprise, they had a working tool by the time we came back - basic, and with a few bugs, but a working tool nonetheless!

    Seeing it compare a very basic SSAS database, detect the changes and generate the scripts was amazing! Something that normally takes half a day was done in under a minute. Since then, a few months have passed and a BI Tools team has been created at Red Gate to work full time on BI tools for BI developers, starting with SSAS Compare. How cool is that?

    So download the free beta and give us your feedback. And the flamenco? I still need to deliver that. Tilman reminds me every day! I need to get the full flamenco costume.

    Read the article

  • Exporting from the GAC

    - by TATWORTH
    Recently I had need to export from the GAC - here are some useful resources:
    - http://gacassemblyexporter.codeplex.com/SourceControl/list/changesets
    - http://blogs.msdn.com/b/johnwpowell/archive/2009/01/14/how-to-copy-an-assembly-from-the-gac.aspx

    There is an alternative method at http://aspdotnetcodebook.blogspot.co.uk/2008/09/get-copy-of-dll-in-gac-or-add-reference.html that involves de-installing what is part of the operating system - I would recommend this as a method of last resort.

    Read the article

  • WHERE x = @x OR @x IS NULL

    - by steveh99999
    Every SQL DBA and developer should read the blog of MVP Erland Sommarskog - and particularly his article on dynamic search conditions in T-SQL. I’ve linked above to his SQL 2005 article, but his 2008 version is also a must-read. I seem to regularly come across uses of the SQL in the title above… Erland’s article explains in detail why this is inefficient, but I came across a nice example recently.

    A stored procedure contained the following code:

        WHERE @Name IS NULL OR [Name] LIKE @Name

    As a nonclustered index exists on the Name column, you might assume this would be handled efficiently by SQL Server. However, I got the following output from SET STATISTICS IO:

        Table 'xxxxx'. Scan count 15, logical reads 47760, physical reads 9, read-ahead reads 13872, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Note the high number of logical reads… After a bit of investigation, we found that @Name could never actually be set to NULL in this particular example, i.e. the @x IS NULL was spurious. So, we changed the call to:

        WHERE [Name] LIKE @Name

    Now, how much more efficient is this code?

        Table 'xxxxx'. Scan count 3, logical reads 24, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    A nice easy win in this case… a full index scan has been replaced by a significantly more efficient index seek. I managed to recreate the same behaviour on AdventureWorks - here’s a quick query to demonstrate:

        USE AdventureWorks;
        SET STATISTICS IO ON;

        DECLARE @id INT = 51721;

        SELECT * FROM Sales.SalesOrderDetail
        WHERE @id IS NULL OR salesorderid = @id;

        SELECT * FROM Sales.SalesOrderDetail
        WHERE salesorderid = @id;

    Take a look at the STATISTICS IO output and compare the actual query plans used to prove the impact of WHERE @id IS NULL. And just to follow some of Erland’s advice, here’s how you could get similar performance if it was possible that @id could actually sometimes contain NULL:

        DECLARE @sql NVARCHAR(4000), @parameterlist NVARCHAR(4000);
        DECLARE @id INT = 51721; -- or change to NULL to prove the query is functionally correct

        SET @sql = 'SELECT * FROM Sales.SalesOrderDetail WHERE 1 = 1';
        IF @id IS NOT NULL
            SET @sql = @sql + ' AND salesorderid = @id';
        IF @id IS NULL
            SET @sql = @sql + ' AND salesorderid IS NULL';
        SET @parameterlist = '@id INT';
        EXEC sp_executesql @sql, @parameterlist, @id;

    Sometimes I think we focus too much on hardware and SQL Server configuration, when really the answer is to focus on writing efficient SQL.
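
    For completeness, Erland's 2008 article also covers the case where the parameter genuinely can be NULL: on builds of SQL Server 2008 and later that support it, adding OPTION (RECOMPILE) lets the optimizer resolve the runtime value of @id when building the plan. A hedged sketch against the same AdventureWorks query (behaviour varies by version and build, so always check the plan you actually get):

        DECLARE @id INT = 51721; -- or NULL

        SELECT * FROM Sales.SalesOrderDetail
        WHERE (@id IS NULL OR salesorderid = @id)
        OPTION (RECOMPILE); -- plan is compiled for the actual value of @id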

    Read the article

  • Powerful Lessons in Data from the Presidential Election

    - by Christina McKeon
    Now that we’ve had a few days to recover from the U.S. presidential election, it’s a good time to take a step back from politics and look for the customer experience lessons that we can take away. The most powerful lesson is that when you know more about your base, you will have an advantage over your competition. That advantage will translate into you winning and your competition losing.

    Michael Scherer of TIME was given access to Obama’s data analysts two days before the election. His account is documented in Inside the Secret World of the Data Crunchers Who Helped Obama Win. What we learned from Scherer’s inside view is how well Obama’s team did in getting the right data, analyzing it, and acting on it. This data team recognized how critical it was to break down data silos within the campaign. As Scherer noted, they created “a single system that merged information from pollsters, fundraisers, field workers, consumer databases, and social-media and mobile contacts with the main Democratic voter files in the swing states.” The Obama analysis was so meticulous that they knew which celebrity, and which type of celebrity event, would help them maximize campaign contributions. With a single system, their data models became more precise. They determined which messages were more successful with specific demographic groups, and that who made the calls mattered. Data analysis also led to many other changes in Obama’s campaign, including a new ad-buying strategy, using social media and applications to tap into supporters’ friends, and using new social news sites.

    While we did not have that same inside view into Romney’s campaign, much of the post-mortem coverage indicates that Romney’s team did not have the right analysis. As Peter Hamby of CNN wrote in Analysis: Why Romney Lost, “Romney officials had modeled an electorate that looked something like a mix of 2004 and 2008….” That historical data did not account for the changing demographics in the U.S.

    Does your organization approach data like the Obama or the Romney team? Do you really know your base? How well can you predict what is going to happen in your business? If you haven’t already put together a strategy and plan to know more, this week’s civics lesson is a powerful reason to do it sooner rather than later. Your competitors are probably thinking the same thing that you are!

    Read the article

  • TechEd 2010 Day Three: The Database Designer (Isn't)

    - by BuckWoody
    Yesterday at TechEd 2010 here in New Orleans I worked the front booth, answering general SQL Server questions for the masses. I was actually a little surprised to find that most of the questions I got were from folks who wanted to know more about StreamInsight and Master Data Services. In past conferences I've been asked a lot of "free consulting" questions about problems folks have had with older products. I don't mind that a bit - in fact, I'm always happy to help in any way I can. But this time people are really interested in the new features in the product, and I like that they are thinking ahead, not just having to solve problems in production.

    My presentation was on "Database Design in an Hour". We had the usual fun, and Sideshow Bob made an appearance - I kid you not. The guy in the back of the room looked just like Sideshow Bob, so I quickly held a "best hair" contest, and he won.

    During the presentation, I explained the tools you can use to design databases. I also explained that the "Database Designer" tool in SQL Server Management Studio (SSMS) isn't truly a designer - it uses non-standard notation, doesn't have a metadata dictionary, and worst of all, it works at the physical level. In other words, whatever you do in SSMS will automatically change the field/table/relationship structures in the database. We fixed this in SSMS 2008 and higher by adding an option to block that, but the tool is not a good design tool nonetheless. To be fair, no one I know of at Microsoft recommends that it is - but I was shocked to hear so many developers in the room defending it as a good tool.

    I think the main issue for someone who doesn't have to work with relational systems a great deal is that it can be difficult to figure out foreign keys. The syntax makes them look "backwards", so it's just easier to grab a field and place it on the table you want to point to.

    There are options. You can download a couple of free tools (CA has a community edition of ERwin, Quest has one, and Embarcadero also has one), and if you design more than one or two databases a year, it may be worth buying a true design tool. For years I used Visio, but it was changed so that it doesn't forward-engineer (create the DDL) any more, so it isn't a true design tool either. So investigate those free and not-so-free tools. You'll find they help you in your job - but stay away from the Database Designer in SSMS. Or I'll send Sideshow Bob over there to straighten you out.

    Read the article

  • ClearTrace Performance on 170GB of Trace Files

    - by Bill Graziano
    I’ve always worked to make ClearTrace perform well. That’s probably because I spend so much time watching it work. I’m often going through two or three gigabytes of trace files, but I rarely get the chance to run it on a really large set of files.

    One of my clients wanted to run a full trace for a week and then analyze the results. At the end of that week we had 847 200MB trace files, for a total of nearly 170GB. I regularly use 200MB trace files when I monitor production systems. I usually get around 300,000 statements in a file that size if it’s mostly stored procedures. So those 847 trace files contained roughly 250 million statements. (That’s 730 bytes per statement if you’re keeping track. Newer trace files have some compression in them, but I’m not exactly sure what they’re doing.) On a system running 1,000 statements per second I get a new file every five minutes or so.

    It took 27 hours to process these files on an older development box. That works out to 1.77MB/second, which means ClearTrace processed about 2,654 statements per second. You can query the data while you’re loading it, but I’ve found it works better to use a second instance of ClearTrace to do this. I’m not sure why yet, but I think there’s still some dependency between the two processes.

    ClearTrace is almost always CPU bound. It’s really just a huge, ugly collection of regular expressions. It only writes a summary to its database at the end of each trace file, so that usually isn’t a bottleneck. At the end of this process, the executable was using roughly 435MB of RAM. Certainly more than when it started, but I think that’s acceptable.

    The database where all this is stored started out at 100MB. After processing 170GB of trace files the database had grown to 203MB. The space savings are due to the “datawarehouse-ish” design and only storing a summary of each trace file.

    You can download ClearTrace for SQL Server 2008 or test out the beta version for SQL Server 2012. Happy Tuning!

    Read the article

  • C# OpenGL problem with animation

    - by user3696303
    I have a program that simulates a small satellite, and it requires a rotation animation of the satellite along the three axes. But when I try to run the animation, the program simply closes (the shutdown occurs on the swap-buffers, main-loop and post-redisplay calls); even the simplest programs I write have the same problem. I tried to catch an exception with try-catch, but no exception is thrown. How can I solve this? I have been struggling with it for a few days. I am working in C# with Visual Studio 2008 (.NET Framework).

        namespace WindowsFormsApplication6
        {
            public partial class Form1 : Form
            {
                public Form1()
                {
                    try
                    {
                        InitializeComponent();
                        AnT1.InitializeContexts();
                    }
                    catch (Exception)
                    {
                        Glut.glutDisplayFunc(Draw);
                        Glut.glutTimerFunc(50, Timer, 0);
                        Glut.glutMainLoop();
                    }
                }

                void Timer(int Unused)
                {
                    Glut.glutPostRedisplay();
                    Glut.glutTimerFunc(50, Timer, 0);
                }

                private void AnT1_Load(object sender, EventArgs e)
                {
                    Glut.glutInit();
                    Glut.glutInitDisplayMode(Glut.GLUT_RGB | Glut.GLUT_DOUBLE | Glut.GLUT_DEPTH);
                    Gl.glClearColor(255, 255, 255, 1);
                    Gl.glViewport(0, 0, AnT1.Width, AnT1.Height);
                    Gl.glMatrixMode(Gl.GL_PROJECTION);
                    Gl.glLoadIdentity();
                    Glu.gluPerspective(45, (float)AnT1.Width / (float)AnT1.Height, 0.1, 200);
                    Gl.glMatrixMode(Gl.GL_MODELVIEW);
                    Gl.glLoadIdentity();
                    Gl.glEnable(Gl.GL_DEPTH_TEST);
                    Gl.glClear(Gl.GL_COLOR_BUFFER_BIT | Gl.GL_DEPTH_BUFFER_BIT);
                    Gl.glPushMatrix();
                    double xy = 0.2;
                    Gl.glTranslated(xy, 0, 0);
                    xy += 0.2;
                    Draw();
                    Glut.glutSwapBuffers();
                    Glut.glutPostRedisplay();
                    Gl.glPushMatrix();
                    Draw();
                    Gl.glPopMatrix();
                }

                void Draw()
                {
                    Gl.glLoadIdentity();
                    Gl.glColor3f(0.502f, 0.502f, 0.502f);
                    Gl.glTranslated(-1, 0, -6);
                    Gl.glRotated(95, 1, 0, 0);
                    Glut.glutSolidCylinder(0.7, 2, 60, 60);
                    Gl.glLoadIdentity();
                    Gl.glColor3f(0, 0, 0);
                    Gl.glTranslated(-1, 0, -6);
                    Gl.glRotated(95, 1, 0, 0);
                    Glut.glutWireCylinder(0.7, 2, 20, 20);
                }
            }
        }

    Read the article

  • Accessing controls of an .aspx file in .aspx.cs without any declaration?

    I am able to access the controls of the ".aspx" file in ".aspx.cs" directly, without any declaration in ".aspx.cs" or in the designer.cs. How is this possible? This happens only if I open the website using the File System option.

    Create a new ASP.NET web site with Visual Studio 2008, so the following three files are created automatically: "Default.aspx", "Default.aspx.cs" and "Default.designer.cs". Now delete "Default.designer.cs" permanently, and just create a button in the Default.aspx file:

        <asp:Button runat="server" Text="Save Plan" ID="btnSave" />

    Close the solution and open the web site from the file system: File -> Open Web Site -> File System -> select the web site folder and open the project. Now btnSave is automatically recognized in Default.aspx.cs without any declaration, as if the following had been declared:

        System.Web.UI.WebControls.Button btnSave;

    How is btnSave being recognized by the .cs file without defining it anywhere as an object of System.Web.UI.WebControls.Button?

    Note: this happens only if you open the web site from the File System, and there is no declaration at all for btnSave. Please refer to the article on this.

    Read the article

  • Event-Driven Debugging

    - by Brian Donahue
    Most application troubleshooting involves getting an error, analyzing the error message, and at worst attaching a debugger to work out the real cause. What is not really covered is how to troubleshoot an application that is not errant but is having a performance issue - more than likely in the middle of the night, when you are snug in your bed, sawing logs. What you need is an ever-vigilant cyborg who never sleeps to sit in front of your server all night, but as SkyNet is not live yet, you can settle for the next best thing.

    Windows provides performance counters and alerts that can tell you when an application reaches an unacceptable threshold of naughty behavior, but although it can tattle on your brainchild, it won't be the child psychiatrist that you need to tell you why he's pulling your server's pigtails and pulling faces at the teacher. What you need is to plug a debugger into Performance Monitor and have it tell you what's going on with your application at the time. For this purpose, I used Microsoft's MDbgEngine as the basis for an application that will dump a program's stacks; I call it Application Slicer Dicer Wonder Dumper Super Cyborg, or StackOMatic for short. StackOMatic can look at a program's behavior and tell you if the stacks are not moving, but it can also work on the command line to dump all managed methods on the stack at will.

    Now that there is a command you can use to dump the stacks, all you need to do is politely tell Windows to run it when you're displeased with your creation as it's trashing the CPU of your server at 3 AM. The first step is to create a scheduled task to tell StackOMatic to dump your application. Start Task Scheduler, right-click Task Scheduler Library and then choose Create Task. For this exercise I'm creating a task that will dump the Red Gate SQL Monitor Base Monitor Service. In the Actions tab, I enter the path to StackOMatic and use the arguments to log the stack dump to a file:

        /PN:RedGate.Response.Engine.Alerting.Base.Service /OUT:c:\users\administrator\MonitorLog.txt

    Next, I go into Windows Server 2008's Reliability and Performance Monitor and add a new Data Collector Set. This set will produce an alert on the %Processor Time for the service. When the processor time breaches 50%, it will run the StackDumpBaseService task I created. Whenever the service misbehaves, it will append to the log file. Now when I go to work in the morning, I can see what the service was doing when it overloaded the processor and take action.

    Read the article
