Search Results

Search found 3888 results on 156 pages for 'star schema'.


  • Carousel not working in IE7/8

    - by user515990
    I am working with jquery.carouFredSel-4.0.3-packed.js for the carousel. It works fine in IE9 and Mozilla, but in IE7/8 it reports "LOG: carouFredSel: Not enough items: not scrolling", even though, as far as I can see, that is not the case. The markup I am using is:

        <div class="carousel-wrapper">
          <div class="mask">
            <a class="arrow left off"><-</a>
            <a class="arrow left on" href="javascript:void(0);"><-</a>
            <ul>
              <dsp:droplet name="ForEach">
                <dsp:param name="array" value="${listRecommended}"/>
                <dsp:oparam name="empty">no recommended apps</dsp:oparam>
                <dsp:oparam name="output">
                  <li>
                    <a href="javascript:void(0);"><img src="${resourcePath}/images/apps/carousel-image1.jpg" alt="bakery story"/></a>
                    <a href="javascript:void(0);"><dsp:valueof param="element.displayName"/></a><br/>
                    <dsp:getvalueof var="averageRating" param="element.averageRating"/>
                    <dsp:getvalueof var="rating" param="count"/>
                    <div class="rating">
                      <div class="medium">
                        <dsp:droplet name="For">
                          <dsp:param name="howMany" value="${averageRating}"/>
                          <dsp:oparam name="output">
                            <input checked="checked" class="star {split:1}" disabled="disabled" name="product-similar-'${rating}'" type="radio">
                          </dsp:oparam>
                        </dsp:droplet>
                      </div>
                    </div>
                  </li>
                </dsp:oparam>
              </dsp:droplet>
            </ul>
          </div>
          <a class="arrow right off">-></a>
          <a class="arrow right on" href="javascript:void(0);">-></a>
        </div>

    The JavaScript library is jquery.carouFredSel-4.0.3-packed.js. Please let me know if someone has faced a similar problem. Thanks in advance, Hemish
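    One thing that sometimes helps with this error in older IE versions is giving carouFredSel explicit item dimensions instead of letting it measure the list items, since IE7/8 can report a zero width for the <li> elements at initialization time. This is only a sketch, not a confirmed fix for the post above; the selector, sizes, and visible-item count are illustrative:

        $(function () {
            // Initialize after the DOM is ready; explicit sizes avoid IE7/8 mis-measuring the items
            $('.carousel-wrapper .mask ul').carouFredSel({
                circular: true,
                infinite: false,
                auto: false,
                items: {
                    visible: 3,   // how many <li> elements to show at once
                    width: 150,   // fixed item width in pixels (adjust to the real design)
                    height: 200
                },
                scroll: {
                    items: 1
                },
                prev: '.carousel-wrapper .arrow.left.on',
                next: '.carousel-wrapper .arrow.right.on'
            });
        });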

    Read the article

  • Rails scalar query

    - by Craig
    I need to display a UI element (e.g. a star or checkmark) for employees that are 'favorites' of the current user (another employee). The Employee model has the following relationship defined to support this: has_and_belongs_to_many :favorites, :class_name => "Employee", :join_table => "favorites", :association_foreign_key => "favorite_id", :foreign_key => "employee_id" The favorites has two fields: employee_id, favorite_id. If I were to write SQL, the following query would give me the results that I want: SELECT id, account, IF( ( SELECT favorite_id FROM favorites WHERE favorite_id=p.id AND employee_id = ? ) IS NULL, FALSE, TRUE) isFavorite FROM employees Where the '?' would be replaced by the session[:user_id]. How do I represent the isFavorite scalar query in Rails? Another approach would use a query like this: SELECT id, account, IF(favorite_id IS NULL, FALSE, TRUE) isFavorite FROM employees e LEFT OUTER JOIN favorites f ON e.id=f.favorite_id AND employee_id = ? Again, the '?' is replaced by the session[:user_id] value. I've had some success writing this in Rails: ee=Employee.find(:all, :joins=>"LEFT OUTER JOIN favorites ON employees.id=favorites.favorite_id AND favorites.employee_id=1", :select=>"employees.*,favorites.favorite_id") Unfortunately, when I try to make this query 'dynamic' by replacing the '1' with a '?', I get errors. ee=Employee.find(:all, :joins=>["LEFT OUTER JOIN favorites ON employees.id=favorites.favorite_id AND favorites.employee_id=?",1], :select=>"employees.*,favorites.favorite_id") Obviously, I have the syntax wrong, but can :joins expressions be 'dynamic'? Is this a case for a Lambda expression? I do hope to add other filters to this query and use it with will_paginate and acts_as_taggable_on, if that makes a difference. edit errors from trying to make :joins dynamic: ActiveRecord::ConfigurationError: Association named 'LEFT OUTER JOIN favorites ON employees.id=favorites.favorite_id AND favorites.employee_id=?' was not found; perhaps you misspelled it? from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/associations.rb:1906:in `build' from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/associations.rb:1911:in `build' from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/associations.rb:1910:in `each' from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/associations.rb:1910:in `build' from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/associations.rb:1830:in `initialize' from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:1789:in `new' from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:1789:in `add_joins!' from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:1686:in `construct_finder_sql' from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:1548:in `find_every' from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:615:in `find'
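    For what it's worth, in Rails 2.3 the array form of :joins is interpreted as a list of association names rather than as a SQL fragment with bind parameters, which is exactly what the "Association named '...' was not found" error is complaining about. One way around that, sketched here under the assumption that the current user's id is available as session[:user_id] (as in the post), is to sanitize the fragment yourself and hand :joins a plain string:

        # sanitize_sql_array is protected on ActiveRecord::Base, hence the send
        join_sql = Employee.send(:sanitize_sql_array,
          ["LEFT OUTER JOIN favorites ON employees.id = favorites.favorite_id AND favorites.employee_id = ?",
           session[:user_id]])

        ee = Employee.find(:all,
          :joins  => join_sql,
          :select => "employees.*, favorites.favorite_id")

    Each returned employee then carries a favorite_id attribute that is NULL unless the row joined to one of the current user's favorites, which is enough to drive the star/checkmark in the UI.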

    Read the article

  • PHP 5.2 Function needed for GENERIC sorting of a recordset array

    - by donbriggs
    Somebody must have come up with a solution for this by now. We are using PHP 5.2. (Don't ask me why.) I wrote a PHP class to display a recordset as an HTML table/datagrid, and I wish to expand it so that we can sort the datagrid by whichever column the user selects. In the example data below, we may need to sort the recordset array by the Name, Shirt, Assign, or Age fields. I will take care of the display part; I just need help with sorting the data array. As usual, I query a database to get a result, iterate through the result, and put the records into an associative array, so we end up with an array of arrays (see below). I need to be able to sort by any column in the dataset. However, I will not know the column names at design time, nor will I know whether the columns will hold string or numeric values. I have seen a ton of solutions to this, but I have not seen a GOOD and GENERIC solution. Can somebody please suggest a way to sort the recordset array that is GENERIC and will work on any recordset? Again, I will not know the field names or datatypes at design time. The array presented below is ONLY an example. UPDATE: Yes, I would love to have the database do the sorting, but that is just not going to happen. The queries we are running are very complex. (I am not really querying a table of Star Trek characters.) They include joins, limits, and complex WHERE clauses. Writing a function to pick apart the SQL statement and add an ORDER BY is really not an option. Besides, sometimes we already have the array that resulted from the query rather than the ability to run a new query.

        Array
        (
            [0] => Array ( [name] => Kirk   [shrit] => Gold [assign] => Bridge )
            [1] => Array ( [name] => Spock  [shrit] => Blue [assign] => Bridge )
            [2] => Array ( [name] => Uhura  [shrit] => Red  [assign] => Bridge )
            [3] => Array ( [name] => Scotty [shrit] => Red  [assign] => Engineering )
            [4] => Array ( [name] => McCoy  [shrit] => Blue [assign] => Sick Bay )
        )
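    A rough sketch of one generic approach that works on PHP 5.2 (no closures available): wrap the comparison in a small class so the column name can be injected at runtime, and let usort() do the work. The class and variable names here are illustrative, not from the original post:

        <?php
        // Generic comparator for an array of associative arrays (a "recordset"),
        // usable with usort(); PHP 5.2 compatible because it avoids anonymous functions.
        class RecordsetSorter
        {
            private $column;
            private $direction;

            public function __construct($column, $direction = 'asc')
            {
                $this->column    = $column;
                $this->direction = (strtolower($direction) === 'desc') ? -1 : 1;
            }

            public function compare($a, $b)
            {
                $x = isset($a[$this->column]) ? $a[$this->column] : null;
                $y = isset($b[$this->column]) ? $b[$this->column] : null;

                // Compare numerically when both values look numeric, otherwise as case-insensitive strings
                if (is_numeric($x) && is_numeric($y)) {
                    $cmp = ($x == $y) ? 0 : (($x < $y) ? -1 : 1);
                } else {
                    $cmp = strcasecmp((string) $x, (string) $y);
                }
                return $cmp * $this->direction;
            }
        }

        // Usage: $records is the array of arrays shown above; $sortColumn comes from the UI
        $sorter = new RecordsetSorter($sortColumn);
        usort($records, array($sorter, 'compare'));
        ?>

    Because the column name and direction are plain constructor arguments, the same class works for any recordset regardless of field names or datatypes.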

    Read the article

  • Subband decomposition using Daubechies filter

    - by misha
    I have the following two 8-tap filters: h0 ['-0.010597', '0.032883', '0.030841', '-0.187035', '-0.027984', '0.630881', '0.714847', '0.230378'] h1 ['-0.230378', '0.714847', '-0.630881', '-0.027984', '0.187035', '0.030841', '-0.032883', '-0.010597'] Here they are on a graph: I'm using it to obtain the approximation (lower subband of an image). This is a(m,n) in the following diagram: I got the coefficients and diagram from the book Digital Image Processing, 3rd Edition, so I trust that they are correct. The star symbol denotes one dimensional convolution (either over rows or over columns). The down arrow denotes downsampling in one dimension (either over rows, or columns). My problem is that the filter coefficients for h0 and h1 sum to greater than 1 (approximately 1.4 or sqrt(2) to be exact). Naturally, if I convolve any image with the filter, the image will get brighter. Indeed, here's what I get (expected result on right): Can somebody suggest what the problem is here? Why should it work if the convolution filter coefficients sum to greater than 1? I have the source code, but it's quite long so I'm hoping to avoid posting it here. If it's absolutely necessary, I'll put it up later. EDIT What I'm doing is: Decompose into subbands Filter one of the subbands Recompose subbands into original image Note that the point isn't just to have a displayable subband-decomposed image -- I have to be able to perfectly reconstruct the original image from the subbands as well. So if I scale the filtered image in order to compensate for my decomposition filter making the image brighter, this is what I will have to do: Decompose into subbands Apply intensity scaling Filter one of the subbands Apply inverse intensity scaling Recompose subbands into original image Step 2 performs the scaling. This is what @Benjamin is suggesting. The problem is that then step 4 becomes necessary, or the original image will not be properly reconstructed. This longer method will work. However, the textbook explicitly says that no scaling is performed on the approximation subband. Of course, it's possible that the textbook is wrong. However, what's more possible is I'm misunderstanding something about the way this all works -- this is why I'm asking this question.
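    As a sanity check on the reasoning above (my own working, not from the book): the quoted h0 coefficients really do sum to sqrt(2), which is exactly what an orthonormal Daubechies analysis lowpass filter is designed to do, so the brightening is the expected DC gain rather than a sign of wrong coefficients. Assuming the standard two-channel perfect-reconstruction filter bank the textbook describes, that gain is undone by the matching synthesis filters at recomposition, which would explain why no explicit scaling of the approximation subband is prescribed:

        \sum_n h_0[n] = -0.010597 + 0.032883 + 0.030841 - 0.187035 - 0.027984 + 0.630881 + 0.714847 + 0.230378 \approx 1.414214 = \sqrt{2},
        \qquad \sum_n h_0[n]^2 = 1

    A lowpass pass over both rows and columns therefore multiplies the DC level by \sqrt{2}\cdot\sqrt{2} = 2, so a(m,n) only looks "too bright" if it is displayed directly without normalization; perfect reconstruction of the original image is unaffected.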

    Read the article

  • Should I create a unique clustered index, or non-unique clustered index on this SQL 2005 table?

    - by Bremer
    I have a table storing millions of rows. It looks something like this:

        Table_Docs
            ID, bigint (identity column)
            OutputFileID, int
            Sequence, int
            ... (many other fields)

    We find ourselves in a situation where the developer who designed it made OutputFileID the clustered index. It is not unique - there can be thousands of records with this ID - and it has no benefit to any process using this table, so we plan to remove it. The question is what to change it to. I have two candidates. The ID identity column is a natural choice. However, we have a process which does a lot of UPDATE commands on this table, and it uses the Sequence to do so. The Sequence is non-unique: most values appear on only one record, but about 20% can have two or more records with the same Sequence. The INSERT app is a VB6 piece of crud throwing thousands of insert commands at the table. The inserted values are never in any particular order, so the Sequence of one insert may be 12345 and the next could be 12245. I know that this could cause SQL to move a lot of data to keep the clustered index in order. However, the Sequence values of the inserts are generally close to being in order, and all inserts would take place at the end of the clustered table. E.g.: I have 5 million records with Sequence spanning 1 to 5 million, and the INSERT app will be inserting Sequences at the end of that range at any given time. Reordering of the data should be minimal (tens of thousands of records at most). Now, the UPDATE app is our .NET star. It does all UPDATEs on the Sequence column - "UPDATE Table_Docs SET Field1=This, Field2=That… WHERE Sequence = 12345" - hundreds of thousands of these a day. The UPDATEs are completely and totally random, touching all points of the table. All other processes are simply doing SELECTs on this (web pages); regular indexes cover those. So my question is, what's better: a unique clustered index on the ID column, benefiting the INSERT app, or a non-unique clustered index on the Sequence, benefiting the UPDATE app?
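    For concreteness, the two candidates look something like this (a sketch only; it assumes the existing clustered index on OutputFileID has already been dropped, and the index names are made up):

        -- Option 1: unique clustered index on the identity column (cheap, ever-increasing inserts)
        CREATE UNIQUE CLUSTERED INDEX IX_Table_Docs_ID
            ON dbo.Table_Docs (ID);

        -- Option 2: non-unique clustered index on Sequence (helps the UPDATE ... WHERE Sequence = @n workload;
        -- SQL Server silently adds a 4-byte "uniquifier" to rows that share a Sequence value)
        CREATE CLUSTERED INDEX IX_Table_Docs_Sequence
            ON dbo.Table_Docs (Sequence);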

    Read the article

  • JQM dialog is opening in new page instead of dialog

    - by K D
    Thank you for taking the time to read my question. I'm trying to get a dialog box to open using jQuery Mobile. I followed the documentation and used the data-rel="dialog" notation along with data-transition="pop". Instead of a dialog appearing on the same page, I get a brand new page containing the dialog. Can someone kindly help me fix this? Here is my code for the initial main page:

        <article>
          <ul data-role="listview" data-split-icon="star" data-split-theme="d" data-inset="true">
            <li>
              <a href="#black_seed_desc" data-rel="dialog"><img src="black_seed.jpg"/>
                <h3>Black Seed Oil</h3>
              </a>
              <a href="#black_seed_purchase" data-rel="dialog" data-transition="pop">Purchase Black Seed Oil</a>
            </li>
          </ul>
        </article>

    Here is my code for the dialog page:

        <div data-role="dialog" id="black_seed_purchase" data-theme="c">
          <section data-role="content">
            <h1>Purchase Black Seed Oil?</h1>
            <p>By purchasing Black Seed Oil you will receive an email receipt copy sent to you for your reference.</p>
            <a href="#purchase_blackseed" data-inline="true" data-corners="true" data-rel="back" data-role="button" data-shadow="true" data-iconshadow="true" data-wrapperrels="span">
              <span>
                <span>Buy: $49.99</span>
                <span>&nbsp;</span>
              </span>
            </a>
            <a href="#" data-role="button" data-rel="back" data-inline="true" data-corners="true" data-wrapperrels="span" data-shadow="true" data-iconshawdow="true">
              <span>
                Cancel
              </span>
            </a>
          </section>
        </div>

    Here is a working example: http://jsfiddle.net/Gajotres/w3ptm/?
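    One thing worth checking (a guess at the cause, not something stated in the post): jQuery Mobile only treats a hash link as an internal dialog when the target container is a top-level sibling of the current data-role="page" div inside the same HTML document; if it is nested inside another page container, or loaded from a separate file, the framework falls back to rendering it as a normal page. A minimal sketch of the expected structure, with illustrative ids:

        <!-- First page -->
        <div data-role="page" id="main">
          <div data-role="content">
            <a href="#black_seed_purchase" data-rel="dialog" data-transition="pop">Purchase Black Seed Oil</a>
          </div>
        </div>

        <!-- Dialog: a sibling of the page above, in the same document -->
        <div data-role="dialog" id="black_seed_purchase" data-theme="c">
          <div data-role="content">
            <h1>Purchase Black Seed Oil?</h1>
            <a href="#" data-role="button" data-rel="back">Cancel</a>
          </div>
        </div>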

    Read the article

  • April 14th Links: ASP.NET, ASP.NET MVC, ASP.NET Web API and Visual Studio

    - by ScottGu
    Here is the latest in my link-listing blog series: ASP.NET Easily overlooked features in VS 11 Express for Web: Good post by Scott Hanselman that highlights a bunch of easily overlooked improvements that are coming to VS 11 (and specifically the free express editions) for web development: unit testing, browser chooser/launcher, IIS Express, CSS Color Picker, Image Preview in Solution Explorer and more. Get Started with ASP.NET 4.5 Web Forms: Good 5-part tutorial that walks-through building an application using ASP.NET Web Forms and highlights some of the nice improvements coming with ASP.NET 4.5. What is New in Razor V2 and What Else is New in Razor V2: Great posts by Andrew Nurse, a dev on the ASP.NET team, about some of the new improvements coming with ASP.NET Razor v2. ASP.NET MVC 4 AllowAnonymous Attribute: Nice post from David Hayden that talks about the new [AllowAnonymous] filter introduced with ASP.NET MVC 4. Introduction to the ASP.NET Web API: Great tutorial by Stephen Walher that covers how to use the new ASP.NET Web API support built-into ASP.NET 4.5 and ASP.NET MVC 4. Comprehensive List of ASP.NET Web API Tutorials and Articles: Tugberk Ugurlu links to a huge collection of articles, tutorials, and samples about the new ASP.NET Web API capability. Async Mashups using ASP.NET Web API: Nice post by Henrik on how you can use the new async language support coming with .NET 4.5 to easily and efficiently make asynchronous network requests that do not block threads within ASP.NET. ASP.NET and Front-End Web Development Visual Studio 11 and Front End Web Development - JavaScript/HTML5/CSS3: Nice post by Scott Hanselman that highlights some of the great improvements coming with VS 11 (including the free express edition) for front-end web development. HTML5 Drag/Drop and Async Multi-file Upload with ASP.NET Web API: Great post by Filip W. that demonstrates how to implement an async file drag/drop uploader using HTML5 and ASP.NET Web API. Device Emulator Guide for Mobile Development with ASP.NET: Good post from Rachel Appel that covers how to use various device emulators with ASP.NET and VS to develop cross platform mobile sites. Fixing these jQuery: A Guide to Debugging: Great presentation by Adam Sontag on debugging with JavaScript and jQuery.  Some really good tips, tricks and gotchas that can save a lot of time. ASP.NET and Open Source Getting Started with ASP.NET Web Stack Source on CodePlex: Fantastic post by Henrik (an architect on the ASP.NET team) that provides step by step instructions on how to work with the ASP.NET source code we recently open sourced. Contributing to ASP.NET Web Stack Source on CodePlex: Follow-on to the post above (also by Henrik) that walks-through how you can submit a code contribution to the ASP.NET MVC, Web API and Razor projects. Overview of the WebApiContrib project: Nice post by Pedro Reys on the new open source WebApiContrib project that has been started to deliver cool extensions and libraries for use with ASP.NET Web API. Entity Framework Entity Framework 5 Performance Improvements and Performance Considerations for EF5:  Good articles that describes some of the big performance wins coming with EF5 (which will ship with both .NET 4.5 and ASP.NET MVC 4). Automatic compilation of LINQ queries will yield some significant performance wins (up to 600% faster). 
ASP.NET MVC 4 and EF Database Migrations: Good post by David Hayden that covers the new database migrations support within EF 4.3 which allows you to easily update your database schema during development - without losing any of the data within it. Visual Studio What's New in Visual Studio 11 Unit Testing: Nice post by Peter Provost (from the VS team) that talks about some of the great improvements coming to VS11 for unit testing - including built-in VS tooling support for a broad set of unit test frameworks (including NUnit, XUnit, Jasmine, QUnit and more) Hope this helps, Scott

    Read the article

  • Links to my “Best of 2010” Posts

    - by ScottGu
    I hope everyone is having a Happy New Years! 2010 has been a busy blogging year for me (this is the 100th blog post I’ve done in 2010).  Several people this week suggested I put together a summary post listing/organizing my favorite posts from the year.  Below is a quick listing of some of my favorite posts organized by topic area: VS 2010 and .NET 4 Below is a series of posts I wrote (some in late 2009) about the VS 2010 and .NET 4 (including ASP.NET 4 and WPF 4) release we shipped in April: Visual Studio 2010 and .NET 4 Released Clean Web.Config Files Starter Project Templates Multi-targeting Multiple Monitor Support New Code Focused Web Profile Option HTML / ASP.NET / JavaScript Code Snippets Auto-Start ASP.NET Applications URL Routing with ASP.NET 4 Web Forms Searching and Navigating Code in VS 2010 VS 2010 Code Intellisense Improvements WPF 4 Add Reference Dialog Improvements SEO Improvements with ASP.NET 4 Output Cache Extensibility with ASP.NET 4 Built-in Charting Controls for ASP.NET and Windows Forms Cleaner HTML Markup with ASP.NET 4 - Client IDs Optional Parameters and Named Arguments in C# 4 - and a cool scenarios with ASP.NET MVC 2 Automatic Properties, Collection Initializers and Implicit Line Continuation Support with VB 2010 New <%: %> Syntax for HTML Encoding Output using ASP.NET 4 JavaScript Intellisense Improvements with VS 2010 VS 2010 Debugger Improvements (DataTips, BreakPoints, Import/Export) Box Selection and Multi-line Editing Support with VS 2010 VS 2010 Extension Manager (and the cool new PowerCommands Extension) Pinning Projects and Solutions VS 2010 Web Deployment Debugging Tips/Tricks with Visual Studio Search and Navigation Tips/Tricks with Visual Studio Visual Studio Below are some additional Visual Studio posts I’ve done (not in the first series above) that I thought were nice: Download and Share Visual Studio Color Schemes Visual Studio 2010 Keyboard Shortcuts VS 2010 Productivity Power Tools Fun Visual Studio 2010 Wallpapers Silverlight We shipped Silverlight 4 in April, and announced Silverlight 5 the beginning of December: Silverlight 4 Released Silverlight 4 Tools for VS 2010 and WCF RIA Services Released Silverlight 4 Training Kit Silverlight PivotViewer Now Available Silverlight Questions Announcing Silverlight 5 Silverlight for Windows Phone 7 We shipped Windows Phone 7 this fall and shipped free Visual Studio development tools with great Silverlight and XNA support in September: Windows Phone 7 Developer Tools Released Building a Windows Phone 7 Twitter Application using Silverlight ASP.NET MVC We shipped ASP.NET MVC 2 in March, and started previewing ASP.NET MVC 3 this summer.  
ASP.NET MVC 3 will RTM in less than 2 weeks from today: ASP.NET MVC 2: Strongly Typed Html Helpers ASP.NET MVC 2: Model Validation Introducing ASP.NET MVC 3 (Preview 1) Announcing ASP.NET MVC 3 Beta and NuGet (nee NuPack) Announcing ASP.NET MVC 3 Release Candidate 1  Announcing ASP.NET MVC 3 Release Candidate 2 Introducing Razor – A New View Engine for ASP.NET ASP.NET MVC 3: Layouts with Razor ASP.NET MVC 3: New @model keyword in Razor ASP.NET MVC 3: Server-Side Comments with Razor ASP.NET MVC 3: Razor’s @: and <text> syntax ASP.NET MVC 3: Implicit and Explicit code nuggets with Razor ASP.NET MVC 3: Layouts and Sections with Razor IIS and Web Server Stack The IIS and Web Stack teams have made a bunch of great improvements to the core web server this year: Fix Common SEO Problems using the URL Rewrite Extension Introducing the Microsoft Web Farm Framework Automating Deployment with Microsoft Web Deploy Introducing IIS Express SQL CE 4 (New Embedded Database Support with ASP.NET) Introducing Web Matrix EF Code First EF Code First is a really nice new data option that enables a very clean code-oriented data workflow: Announcing Entity Framework Code-First CTP5 Release Class-Level Model Validation with EF Code First and ASP.NET MVC 3 Code-First Development with Entity Framework 4 EF 4 Code First: Custom Database Schema Mapping Using EF Code First with an Existing Database jQuery and AJAX Contributions My team began making some significant source code contributions to the jQuery project this year: jQuery Templates, Data Link and Globalization Accepted as Official jQuery Plugins jQuery Templates and Data Linking (and Microsoft contributing to jQuery) jQuery Globalization Plugin from Microsoft Patches and Hot Fixes Some useful fixes you can download prior to VS 2010 SP1: Patch for Cut/Copy “Insufficient Memory” issue with VS 2010 Patch for VS 2010 Find and Replace Dialog Growing Patch for VS 2010 Scrolling Context Menu Videos of My Talks Some recordings of technical talks I’ve done this year: ASP.NET 4, ASP.NET MVC, and Silverlight 4 Talks I did in Europe VS 2010 and ASP.NET 4 Web Forms Talk in Arizona Other About Technical Debates (and ASP.NET Web Forms and ASP.NET MVC debates in particular) ASP.NET Security Fix Now on Windows Update Upcoming Web Camps I’d like to say a big thank you to everyone who follows my blog – I really appreciate you reading it (the comments you post help encourage me to write it).  See you in the New Year! Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • SOA 10g Developing a Simple Hello World Process

    - by [email protected]
    Software and hardware needed: Intel Pentium D CPU 3 GHz, 2 GB RAM, Windows XP (that's what I am using; you could just as well use Linux, but please choose plenty of RAM); SOA Suite 10g from Oracle(TM) (read the installation documents at www.Oracle.com); JDeveloper 10.1.3.3 (official documents at http://www.oracle.com/technology/products/ias/bpel/index.html); java -version reports Java HotSpot(TM) Client VM (build 1.5.0_06-b05, mixed mode).
    BPEL Introduction - Developing a Simple Hello World Process (Synchronous BPEL Process). This exercise focuses on developing a synchronous process, which means you give input to the BPEL process and get output immediately, with no waiting at all. The objective of this exercise is to take a name as input and return a greeting with "Hello" prepended to that name; for example, if I give the input "James", the BPEL process returns "Hello James".
    1. Open Oracle JDeveloper, click File -> New Application, and give the name "JamesApp" (you can give your own name if it pleases you). Select the folder where you want to place the application and click "OK".
    2. Right-click "JamesApp" in the Application Navigator and select the New menu.
    3. Select "Projects" under "General" and "BPEL Process Project", then click "OK". These steps remain the same for all BPEL projects.
    4. The Project Settings wizard appears. Give the process name as "MyBPELProc" and the namespace as http://xmlns.james.com/MyBPELProc, select the template "Synchronous BPEL Process", and click "Next".
    5. Accept the input and output schema names as they are and click "Finish".
    6. You will see the BPEL Process Designer; some folders such as Integration Content and Resources are created, along with a few more files.
    7. Assign activity: allows assigning values to variables or copying values of one variable to another, along with some string manipulation or mathematical operations. In the component palette at the extreme right, select Process Activities from the drop-down and drag and drop "Assign" between "receiveInput" and "replyOutput".
    8. You can right-click and edit the Assign activity and give it any suitable name, such as "AssignHello".
    9. Select the "Copy Operation" tab and create a copy operation.
    10. In the From variables, click on the expression builder, select input under "input variable", click on "insert into expression bar", and complete the concat syntax. Note that you can use Ctrl+Space inside the expression window to auto-complete the expression. What we are actually doing here is concatenating the string "Hello " with the variable value received through the variable named "input" (a sketch of the resulting assign appears after this walkthrough).
    11. Observe that once the expression is completed, the "To Variable" is assigned to a variable named "result".
    12. Finally, the copy operation looks as below.
    13. It's time to deploy: start the SOA Suite.
    14. Establish a connection to the server from JDeveloper. This can be done by adding a new Application Server under Connections; give the server name, username, and password, and test the connection.
    15. Deploy "MyBPELProc" to the "default" domain.
    16. http://localhost:8080/ lets you connect to the SOA Suite web portal. Click on "BPEL Control" and log in with the username "oc4jadmin" and whatever password you gave during installation.
    17. "MyBPELProc" is visible under "Deployed BPEL Processes" on the "Dashboard" tab; click on it.
    18. The Initiate tab opens to accept input. Enter data such as "James" and click the "Post XML" button.
    19. Click on Visual Flow.
    20. Click on receiveInput; it shows "James" as the input received.
    21. Click on replyOutput; it shows "Hello James", so the BPEL process executed successfully.
    22. It may be worth seeing all the instances created every time a BPEL process is executed with some inputs. The Purge All button lets you delete all the unwanted previous instances of the BPEL process; don't worry, it won't delete the BPEL process itself :-)
    23. It may also be of some importance to understand the XSD file, which holds the input and output variable names and data types.
    24. You can drag and drop variables as elements over the sequence in the designer, or directly edit the XML source file.
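    For reference, the copy operation from step 10 ends up in the BPEL source as something like this (a sketch only; the variable, part, and element names shown are the ones the synchronous template typically generates, so check them against your own project):

        <assign name="AssignHello">
          <copy>
            <!-- concatenate "Hello " with the value received in the input variable -->
            <from expression="concat('Hello ',
                  bpws:getVariableData('inputVariable','payload','/client:MyBPELProcProcessRequest/client:input'))"/>
            <to variable="outputVariable" part="payload"
                query="/client:MyBPELProcProcessResponse/client:result"/>
          </copy>
        </assign>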

    Read the article

  • Query Logging in Analysis Services

    - by MikeD
    On a project I work on, we capture the queries that get executed on our Analysis Services instance (SQL Server 2008 R2) and use the table for helping us to build aggregations and also we aggregate the query log daily into a data warehouse of operational data so we can track usage of our Analysis databases by users over time. We've learned a couple of helpful things about this logging that I'd like to share here.First off, the query log table automatically gets cleaned out by SSAS under a few conditions - schema changes to the analysis database and even regular data and aggregation processing can delete rows in the table. We like to keep these logs longer than that, so we have a trigger on the table that copies all rows into another table with the same structure:Here is our trigger code:CREATE TRIGGER [dbo].[SaveQueryLog] on [dbo].[OlapQueryLog] AFTER INSERT AS       INSERT INTO dbo.[OlapQueryLog_History] (MSOLAP_Database, MSOLAP_ObjectPath, MSOLAP_User, Dataset, StartTime, Duration)      SELECT MSOLAP_Database, MSOLAP_ObjectPath, MSOLAP_User, Dataset, StartTime, Duration FROM inserted Second, the query logging process is "best effort" - if SSAS cannot connect to the database listed in the QueryLogConnectionString in the Analysis Server properties, it just stops logging - it doesn't generate any errors to the client at all, which is a good thing. Once it stops logging, it doesn't retry later - an hour, a day, a week, or even a month later, so long as the service doesn't restart.That has burned us a couple of times, when we have made changes to the service account that is used for SSAS, and that account doesn't have access to the database we want to log to. The last time this happened, we noticed a while later that no logging was taking place, and I determined that the service account didn't have sufficient permissions, so I made the necessary changes to give that service account access to the logging database. I first tried just the db_datawriter role and that wasn't enough, so I granted the service account membership in the db_owner role. Yes, that's a much bigger set of permissions, but I didn't want to search out the specific permissions at the time. Once I determined that the service account had the appropriate permissions, I wanted to get query logging restarted from SSAS, and I wondered how to do that? Having just used a larger hammer than necessary with the db_owner role membership, I considered just restarting SSAS to get it logging again. However, this was a production server, and it was in the middle of business hours, and there were active users connecting to that SSAS instance, so I thought better of it.As I considered the options, I remembered that the first time I set up query logging, by putting in a valid connection string to the QueryLogConnectionString server property, logging started immediately after I saved the properties. I wondered if I could make some other change to the connection string so that the query logging would start again without restarting the service. I went into the connection string dialog, went to the All page, and looked at the properties I could change that wouldn't affect the actual connection. Aha! The Application Name property would do just nicely - I set it to "SSAS Query Logging" (it was previously blank) and saved the changes to the server properties. And the query logging started up right away. 
    If I need to get this running again in the future, I could just make a small change in the Application Name property again, save it, and even change it back again if I wanted to. The other nice side effect of setting the Application Name property is that now I can see (and possibly filter for or filter out) the SQL activity in that database that is related to the query logging process in Profiler. To sum up:
    - The SSAS query logging process will automatically delete rows from the QueryLog table, so if you want to keep them longer, put a trigger on the table to copy the rows to another table.
    - The SSAS service account requires more than db_datawriter role membership (and probably less than db_owner) in the database specified in the QueryLogConnectionString server property to successfully insert log rows to the QueryLog table.
    - Query logging will stop quietly whenever it encounters an error. Make a change to the QueryLogConnectionString server property (such as the Application Name attribute) to get query logging to restart, and you won't have to restart the service.
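    In case it helps, the permission change described above amounts to something like this on the logging database (the database and account names are illustrative; as noted in the post, db_datawriter alone was not sufficient here, while db_owner was more than enough):

        USE OlapLoggingDB;   -- whatever database QueryLogConnectionString points at
        GO
        -- Map the SSAS service account into the logging database
        CREATE USER [DOMAIN\SSASServiceAccount] FOR LOGIN [DOMAIN\SSASServiceAccount];
        -- db_datawriter was not enough on its own; db_owner worked (a narrower set of rights likely exists)
        EXEC sp_addrolemember N'db_owner', N'DOMAIN\SSASServiceAccount';
        GO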

    Read the article

  • How to Create Views for All Tables with Oracle SQL Developer

    - by thatjeffsmith
    Got this question over the weekend via a friend and Oracle ACE Director, so I thought I would share the answer here. If you want to quickly generate DDL to create VIEWs for all the tables in your system, the easiest way to do that with SQL Developer is to create a data model. Wait, why would I want to do this? StackOverflow has a few things to say on this subject… So, start with importing a data dictionary.
    Step One: Open or Create a Model. In SQL Developer, go to View – Data Modeler – Browser. Then in the browser panel, expand your design and create a new Relational Model.
    Step Two: Import your Data Dictionary. This is a fancy way of saying, 'suck objects out of the database into my model'. This will open a wizard to connect, select your schema(s), objects, etc. Once they're in your model, you're ready to cook with gas. I'm using HR (Human Resources) for this example. You should end up with something that looks like our favorite HR model. Now we're ready to generate the views!
    Step Three: Auto-generate the Views. Go to Tools – Data Modeler – Table to View Wizard. I don't want all my tables included, and I want to change the naming standard, so decide whether you want to change the default generated view names. By default the views will be created as 'V_TABLE_NAME.' If you don't like the 'V_' you can enter your own. You can also reference the object and model name with variables as shown in the screenshot above. I'm going to go with something a little more personal. The views are the little green boxes in the diagram. Can't find your views? They should be grouped together in your diagram. Don't forget to use the Navigator to easily find and navigate to those model diagram objects!
    Step Four: Generate the DDL. OK, let's use the Generate DDL button on the toolbar. Un-check everything but your views. If you used a prefix, take advantage of that to create a filter. You might have existing views in your model that you don't want to include, right? Once you click 'OK' the DDL will be generated.
-- Generated by Oracle SQL Developer Data Modeler 4.0.0.825 -- at: 2013-11-04 10:26:39 EST -- site: Oracle Database 11g -- type: Oracle Database 11g CREATE OR REPLACE VIEW HR.TJS_BLOG_COUNTRIES ( COUNTRY_ID , COUNTRY_NAME , REGION_ID ) AS SELECT COUNTRY_ID , COUNTRY_NAME , REGION_ID FROM HR.COUNTRIES ; CREATE OR REPLACE VIEW HR.TJS_BLOG_EMPLOYEES ( EMPLOYEE_ID , FIRST_NAME , LAST_NAME , EMAIL , PHONE_NUMBER , HIRE_DATE , JOB_ID , SALARY , COMMISSION_PCT , MANAGER_ID , DEPARTMENT_ID ) AS SELECT EMPLOYEE_ID , FIRST_NAME , LAST_NAME , EMAIL , PHONE_NUMBER , HIRE_DATE , JOB_ID , SALARY , COMMISSION_PCT , MANAGER_ID , DEPARTMENT_ID FROM HR.EMPLOYEES ; CREATE OR REPLACE VIEW HR.TJS_BLOG_JOBS ( JOB_ID , JOB_TITLE , MIN_SALARY , MAX_SALARY ) AS SELECT JOB_ID , JOB_TITLE , MIN_SALARY , MAX_SALARY FROM HR.JOBS ; CREATE OR REPLACE VIEW HR.TJS_BLOG_JOB_HISTORY ( EMPLOYEE_ID , START_DATE , END_DATE , JOB_ID , DEPARTMENT_ID ) AS SELECT EMPLOYEE_ID , START_DATE , END_DATE , JOB_ID , DEPARTMENT_ID FROM HR.JOB_HISTORY ; CREATE OR REPLACE VIEW HR.TJS_BLOG_LOCATIONS ( LOCATION_ID , STREET_ADDRESS , POSTAL_CODE , CITY , STATE_PROVINCE , COUNTRY_ID ) AS SELECT LOCATION_ID , STREET_ADDRESS , POSTAL_CODE , CITY , STATE_PROVINCE , COUNTRY_ID FROM HR.LOCATIONS ; CREATE OR REPLACE VIEW HR.TJS_BLOG_REGIONS ( REGION_ID , REGION_NAME ) AS SELECT REGION_ID , REGION_NAME FROM HR.REGIONS ; -- Oracle SQL Developer Data Modeler Summary Report: -- -- CREATE TABLE 0 -- CREATE INDEX 0 -- ALTER TABLE 0 -- CREATE VIEW 6 -- CREATE PACKAGE 0 -- CREATE PACKAGE BODY 0 -- CREATE PROCEDURE 0 -- CREATE FUNCTION 0 -- CREATE TRIGGER 0 -- ALTER TRIGGER 0 -- CREATE COLLECTION TYPE 0 -- CREATE STRUCTURED TYPE 0 -- CREATE STRUCTURED TYPE BODY 0 -- CREATE CLUSTER 0 -- CREATE CONTEXT 0 -- CREATE DATABASE 0 -- CREATE DIMENSION 0 -- CREATE DIRECTORY 0 -- CREATE DISK GROUP 0 -- CREATE ROLE 0 -- CREATE ROLLBACK SEGMENT 0 -- CREATE SEQUENCE 0 -- CREATE MATERIALIZED VIEW 0 -- CREATE SYNONYM 0 -- CREATE TABLESPACE 0 -- CREATE USER 0 -- -- DROP TABLESPACE 0 -- DROP DATABASE 0 -- -- REDACTION POLICY 0 -- -- ERRORS 0 -- WARNINGS 0 You can then choose to save this to a file or not. This has a few steps, but as the number of tables in your system increases, so does the amount of time this feature can save you!

    Read the article

  • SQL SERVER – Guest Post – Jacob Sebastian – Filestream – Wait Types – Wait Queues – Day 22 of 28

    - by pinaldave
    Jacob Sebastian is a SQL Server MVP, author, speaker and trainer, and one of the top-rated experts in the community. Jacob wrote the book The Art of XSD – SQL Server XML Schema Collections and wrote the XML chapter in SQL Server 2008 Bible. See his Blog | Profile. He is currently researching the subject of FILESTREAM and has submitted this interesting article on that very subject. What is FILESTREAM? FILESTREAM is a new feature introduced in SQL Server 2008 which provides an efficient storage and management option for BLOB data. Many applications that deal with BLOB data today store it in the file system and store the path to the file in the relational tables. Storing BLOB data in the file system is more efficient than storing it in the database. However, this brings a few disadvantages as well: when the BLOB data is stored in the file system, it is hard to ensure transactional consistency between the file system data and the relational data. Some applications store the BLOB data within the database to overcome the limitations mentioned earlier. This approach ensures transactional consistency between the relational data and the BLOB data, but is very bad in terms of performance. FILESTREAM combines the benefits of both approaches without the disadvantages we examined: it stores the BLOB data in the file system (thus taking advantage of the I/O streaming capabilities of NTFS) and ensures transactional consistency between the BLOB data in the file system and the relational data in the database. For more information on the FILESTREAM feature, visit: http://beyondrelational.com/filestream/default.aspx
    FILESTREAM Wait Types. Since this series is on the different SQL Server wait types, let us take a look at the various wait types that are related to the FILESTREAM feature.
    FS_FC_RWLOCK – Generated by the FILESTREAM garbage collector. This occurs when garbage collection is disabled prior to a backup/restore operation, or when a garbage collection cycle is being executed.
    FS_GARBAGE_COLLECTOR_SHUTDOWN – Occurs during the cleanup phase of a garbage collection cycle. It indicates that the garbage collector is waiting for the cleanup tasks to be completed.
    FS_HEADER_RWLOCK – Indicates that the process is waiting to obtain access to the FILESTREAM header file for a read or write operation. The FILESTREAM header is a disk file located in the FILESTREAM data container and is named "filestream.hdr".
    FS_LOGTRUNC_RWLOCK – Indicates that the process is trying to perform a FILESTREAM log truncation related operation. It can be either a log truncate operation or disabling log truncation prior to a backup or restore operation.
    FSA_FORCE_OWN_XACT – Occurs when a FILESTREAM file I/O operation needs to bind to the associated transaction, but the transaction is currently owned by another session.
    FSAGENT – Occurs when a FILESTREAM file I/O operation is waiting for a FILESTREAM agent resource that is being used by another file I/O operation.
    FSTR_CONFIG_MUTEX – Occurs when there is a wait for another FILESTREAM feature reconfiguration to be completed.
    FSTR_CONFIG_RWLOCK – Occurs when there is a wait to serialize access to the FILESTREAM configuration parameters.
    Waits and Performance. System waits have a direct relationship with overall performance; in most cases, when waits increase, performance degrades. SQL Server documentation does not say much about how we can reduce these waits. However, following the FILESTREAM best practices will help you to improve overall performance and reduce these wait types to a good extent. Read all the posts in the Wait Types and Queues series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology Tagged: Filestream
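    Since the article is part of a wait-stats series, a quick way to see how much these FILESTREAM-related waits actually matter on an instance is to check sys.dm_os_wait_stats; this simple sketch just lists the wait types described above:

        SELECT  wait_type,
                waiting_tasks_count,
                wait_time_ms,
                signal_wait_time_ms
        FROM    sys.dm_os_wait_stats
        WHERE   wait_type IN (N'FS_FC_RWLOCK', N'FS_GARBAGE_COLLECTOR_SHUTDOWN',
                              N'FS_HEADER_RWLOCK', N'FS_LOGTRUNC_RWLOCK',
                              N'FSA_FORCE_OWN_XACT', N'FSAGENT',
                              N'FSTR_CONFIG_MUTEX', N'FSTR_CONFIG_RWLOCK')
        ORDER BY wait_time_ms DESC;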

    Read the article

  • POP Forums v9 Beta 1 for ASP.NET MVC 3 posted to CodePlex!

    - by Jeff
    As promised, I posted a beta build of my forum app for ASP.NET MVC 3. Get the new goodies here: http://popforums.codeplex.com/releases/view/58228 This is the first beta for the ASP.NET MVC 3 version of POP Forums. It is nearly feature complete, and ready for testing and feedback. For previous release notes, look here, here and here.Check out the live preview: http://preview.popforums.com/ForumsSetup instructions are on the home page of this project. The new hotness in the beta, or what has been done since the last preview: All views converted to use Razor E-mail subscription/notification of new posts New post indicators/mark read buttons Permalinks to posts Jump to newest post (from new post indicators) Recent topics Favorite topics Moderator functions for topics (pin/close/delete, plus move and rename) Search, ported from v8. Not a ton of optimization here, or new unit testing, but the old version worked pretty well User posts (topics the user posted in) Forgot password Vanity items (signatures and avatars) Hide vanity items per user preference Some minor data caching where appropriate A little bit of UI refinement Lots-o-bug fixes Lots-o-unit tests What's next? The plan between now and the next beta is as follows: Continue working through features/tasks, and fix bugs as they're reported Integrate the forum into a real, production site Refine the UI Refactor as much as possible... the code organization is not entirely logical in some places After the second beta, a release candidate will follow, with a real "final" release after that. Subsequent releases should come relatively frequently and without a lot of risk. The trick in building this thing has been that it mostly tossed the previous WebForms version, which was all full of crusties. The time table for this is a little harder to pin down, as day jobs and families will have their effect. Other notes Refactoring will be a priority. As the features of MVC have evolved, so have my desires to use it in a fashion that makes things clear and easy to follow. I don't even know if anyone will ever start mucking around in the code, but on the off chance they do, I'd like what they find to not suck. Other nice-to-haves are builds to target Windows Azure and SQL CE. A nice setup UI would be super too. I think the ASP.NET MVC world has gone long enough without a decent forum.The biggest challenge that I've found is making the forum something that can be dropped in any app. While it does rope its views into an area, areas are mostly just routing details. I haven't thought of a clever way yet to limit dependency injection, for example, to just the forum bits. I mean, everyone should be using Ninject, but how realistic is that? ;)How much time and effort should you spend on POP Forums in its current state? Change is inevitable, but at this point I'm reasonably committed to not changing the database schema. I really think it will stay as-is. All bets are off for the various interfaces throughout the app, but the data should generally resist change. It's not even that different from v8, which was one of the original goals because I didn't want to rewrite SQL or introduce a new ORM or whatever. My point is that if you wanted to build a site around this today, even though it's not entirely functional, I think it's low risk in terms of data loss. I can't vouch for whether or not you know what you're doing.I've been having some chats with people lately about quoting posts, and honestly there has to be something better and straight forward. 
That continues to be a holy grail of mine, and some day, I hope to find it.Enjoy... it's starting to feel more real every day!

    Read the article

  • New Communications Industry Data Model with "Factory Installed" Predictive Analytics using Oracle Da

    - by charlie.berger
    Oracle Introduces Oracle Communications Data Model to Provide Actionable Insight for Communications Service Providers   We've integrated pre-installed analytical methodologies with the new Oracle Communications Data Model to deliver automated, simple, yet powerful predictive analytics solutions for customers.  Churn, sentiment analysis, identifying customer segments - all things that can be anticipated and hence, preconcieved and implemented inside an applications.  Read on for more information! TM Forum Management World, Nice, France - 18 May 2010 News Facts To help communications service providers (CSPs) manage and analyze rapidly growing data volumes cost effectively, Oracle today introduced the Oracle Communications Data Model. With the Oracle Communications Data Model, CSPs can achieve rapid time to value by quickly implementing a standards-based enterprise data warehouse that features communications industry-specific reporting, analytics and data mining. The combination of the Oracle Communications Data Model, Oracle Exadata and the Oracle Business Intelligence (BI) Foundation represents the most comprehensive data warehouse and BI solution for the communications industry. Also announced today, Hong Kong Broadband Network enhanced their data warehouse system, going live on Oracle Communications Data Model in three months. The leading provider increased its subscriber base by 37 percent in six months and reduced customer churn to less than one percent. Product Details Oracle Communications Data Model provides industry-specific schema and embedded analytics that address key areas such as customer management, marketing segmentation, product development and network health. CSPs can efficiently capture and monitor critical data and transform it into actionable information to support development and delivery of next-generation services using: More than 1,300 industry-specific measurements and key performance indicators (KPIs) such as network reliability statistics, provisioning metrics and customer churn propensity. Embedded OLAP cubes for extremely fast dimensional analysis of business information. Embedded data mining models for sophisticated trending and predictive analysis. Support for multiple lines of business, such as cable, mobile, wireline and Internet, which can be easily extended to support future requirements. With Oracle Communications Data Model, CSPs can jump start the implementation of a communications data warehouse in line with communications-industry standards including the TM Forum Information Framework (SID), formerly known as the Shared Information Model. Oracle Communications Data Model is optimized for any Oracle Database 11g platform, including Oracle Exadata, which can improve call data record query performance by 10x or more. Supporting Quotes "Oracle Communications Data Model covers a wide range of business areas that are relevant to modern communications service providers and is a comprehensive solution - with its data model and pre-packaged templates including BI dashboards, KPIs, OLAP cubes and mining models. It helps us save a great deal of time in building and implementing a customized data warehouse and enables us to leverage the advanced analytics quickly and more effectively," said Yasuki Hayashi, executive manager, NTT Comware Corporation. "Data volumes will only continue to grow as communications service providers expand next-generation networks, deploy new services and adopt new business models. 
They will increasingly need efficient, reliable data warehouses to capture key insights on data such as customer value, network value and churn probability. With the Oracle Communications Data Model, Oracle has demonstrated its commitment to meeting these needs by delivering data warehouse tools designed to fill communications industry-specific needs," said Elisabeth Rainge, program director, Network Software, IDC. "The TM Forum Conformance Mark provides reassurance to customers seeking standards-based, and therefore, cost-effective and flexible solutions. TM Forum is extremely pleased to work with Oracle to certify its Oracle Communications Data Model solution. Upon successful completion, this certification will represent the broadest and most complete implementation of the TM Forum Information Framework to date, with more than 130 aggregate business entities," said Keith Willetts, chairman and chief executive officer, TM Forum. Supporting Resources Oracle Communications Oracle Communications Data Model Data Sheet Oracle Communications Data Model Podcast Oracle Data Warehousing Oracle Communications on YouTube Oracle Communications on Delicious Oracle Communications on Facebook Oracle Communications on Twitter Oracle Communications on LinkedIn Oracle Database on Twitter The Data Warehouse Insider Blog

    Read the article

  • OWB 11gR2 &ndash; OLAP and Simba

    - by David Allan
    Oracle Warehouse Builder was the first ETL product to provide a single integrated and complete environment for managing enterprise data warehouse solutions that also incorporate multi-dimensional schemas. The OWB 11gR2 release provides Oracle OLAP 11g deployment for multi-dimensional models (in addition to support for prior releases of OLAP). This means users can easily utilize Simba's MDX Provider for Oracle OLAP (see here for details and cost) which allows you to use the powerful and popular ad hoc query and analysis capabilities of Microsoft Excel PivotTables® and PivotCharts® with your Oracle OLAP business intelligence data. The extensions to the dimensional modeling capabilities have been built on established relational concepts, with the option to seamlessly move from a relational deployment model to a multi-dimensional model at the click of a button. This now means that ETL designers can logically model a complete data warehouse solution using one single tool and control the physical implementation of a logical model at deployment time. As a result data warehouse projects that need to provide a multi-dimensional model as part of the overall solution can be designed and implemented faster and more efficiently. Wizards for dimensions and cubes let you quickly build dimensional models and realize either relationally or as an Oracle database OLAP implementation, both 10g and 11g formats are supported based on a configuration option. The wizard provides a good first cut definition and the objects can be further refined in the editor. Both wizards let you choose the implementation, to deploy to OLAP in the database select MOLAP: multidimensional storage. You will then be asked what levels and attributes are to be defined, by default the wizard creates a level bases hierarchy, parent child hierarchies can be defined in the editor. Once the dimension or cube has been designed there are special mapping operators that make it easy to load data into the objects, below we load a constant value for the total level and the other levels from a source table.   Again when the cube is defined using the wizard we can edit the cube and define a number of analytic calculations by using the 'generate calculated measures' option on the measures panel. This lets you very easily add a lot of rich analytic measures to your cube. For example one of the measures is the percentage difference from a year ago which we can see in detail below. You can also add your own custom calculations to leverage the capabilities of the Oracle OLAP option, either by selecting existing template types such as moving averages to defining true custom expressions. The 11g OLAP option now supports percentage based summarization (the amount of data to precompute and store), this is available from the option 'cost based aggregation' in the cube's configuration. Ensure all measure-dimensions level based aggregation is switched off (on the cube-dimension panel) - previously level based aggregation was the only option. The 11g generated code now uses the new unified API as you see below, to generate the code, OWB needs a valid connection to a real schema, this was not needed before 11gR2 and is a new requirement since the OLAP API which OWB uses is not an offline one. Once all of the objects are deployed and the maps executed then we get to the fun stuff! How can we analyze the data? 
One option which is powerful and at many users' fingertips is using Microsoft Excel PivotTables® and PivotCharts®, which can be used with your Oracle OLAP business intelligence data by utilizing Simba's MDX Provider for Oracle OLAP (see Simba site for details of cost). I'll leave the exotic reporting illustrations to the experts (see Bud's demonstration here), but with Simba's MDX Provider for Oracle OLAP its very simple to easily access the analytics stored in the database (all built and loaded via the OWB 11gR2 release) and get the regular features of Excel at your fingertips such as using the conditional formatting features for example. That's a very quick run through of the OWB 11gR2 with respect to Oracle 11g OLAP integration and the reporting using Simba's MDX Provider for Oracle OLAP. Not a deep-dive in any way but a quick overview to illustrate the design capabilities and integrations possible.

    Read the article

  • Amazon Product Advertising API SOAP Namespace Changes

    - by Rick Strahl
    About two months ago (towards the end of February 2012, I think) Amazon decided to change the namespace of the Product Advertising API. The error that would come up was: <ItemSearchResponse > was not expected. If you've used the Amazon Product Advertising API you probably know that Amazon has made it a habit to break the services every few years or so, and I guess last month was about the time for another one. Basically the service namespace of the document has been changed, and responses from the service just failed outright even though the rest of the schema looks fine. Now I looked around for a while trying to find a recent update to the Product Advertising API - something semi-official looking - but everything is dated around 2009. Really??? And it's not just .NET - the newest thing on the samples/APIs page is dated early 2011, plus a handful of 2010 samples. There are newer full APIs for the 'cloud' offerings, but the Product Advertising API apparently isn't part of that. After searching for quite a bit trying to trace this down myself and trying some of the newer samples (which also failed), I found an obscure forum post that describes the solution to getting past the namespace issue. FWIW, I've been using an old version of the Product Advertising API using the old Microsoft WSE3 services (pre-WCF), which provides some of the WS* security features required by the Amazon service. The fix for this code is to explicitly override the namespace declaration on each of the imported service method signatures. The old service namespace (at least on my build) was:

        http://webservices.amazon.com/AWSECommerceService/2009-03-31

    and it should be changed to:

        http://webservices.amazon.com/AWSECommerceService/2011-08-01

    Change it on the class header:

        [Microsoft.Web.Services3.Messaging.SoapService("http://webservices.amazon.com/AWSECommerceService/2011-08-01")]
        [System.Xml.Serialization.XmlIncludeAttribute(typeof(Property[]))]
        [System.Xml.Serialization.XmlIncludeAttribute(typeof(BrowseNode[]))]
        [System.Xml.Serialization.XmlIncludeAttribute(typeof(TransactionItem[]))]
        public partial class AWSECommerceService : Microsoft.Web.Services3.Messaging.SoapClient {

    and on all method signatures:

        [Microsoft.Web.Services3.Messaging.SoapMethodAttribute("http://soap.amazon.com/ItemSearch")]
        [return: System.Xml.Serialization.XmlElementAttribute("ItemSearchResponse", Namespace="http://webservices.amazon.com/AWSECommerceService/2011-08-01")]
        public ItemSearchResponse ItemSearch(ItemSearch ItemSearch1)
        {
            Microsoft.Web.Services3.SoapEnvelope results = base.SendRequestResponse("ItemSearch", ItemSearch1);
            return ((ItemSearchResponse)(results.GetBodyObject(typeof(ItemSearchResponse), this.SoapServiceAttribute.TargetNamespace)));
        }

    It's easy to do with a Search and Replace on the above strings.
    Amazon Services <rant> FWIW, I've not been impressed by Amazon's service offerings. While the services work well, their documentation and tool support is absolutely horrendous. I was recently working with a customer on an old AWS application, and their old API had been completely removed and replaced with a new API that wasn't even a close match. One old API call resulted in requiring three different APIs to perform the same functionality. We had to re-write the entire piece from scratch, essentially. The documentation was downright wrong, incomplete, and so scattered it was next to impossible to follow.
    The examples weren't examples at all - they're mockups of real service calls with fake data that didn't even provide everything that was required to make the same service calls work. Additionally there appears to be just about no public support from Amazon, only peer support, which is sparse at best - and getting a hold of somebody at Amazon, even for pay, seems to be a mythical task. It's a terrible business model they have going. I can't see why anybody would put themselves through this sort of customer and development experience. Sad really, but an experience we see more and more these days. Nobody puts in the time to document anything anymore, leaving it to devs to figure this stuff out over and over again… </rant> © Rick Strahl, West Wind Technologies, 2005-2012. Posted in CSharp, Web Services.

    Read the article

  • SQL Server – SafePeak “Logon Trigger” Feature for Managing Data Access

    - by pinaldave
    Lately I received an interesting question about the abilities of SafePeak for SQL Server acceleration software: Q: “I would like to use SafePeak to make my CRM application faster. It is an application we bought from a vendor; after a while it became slow and we can’t reprogram it. SafePeak automated caching sounds like an easy and good solution for us. But in my application there are many servers and various other application services that address its main database, and some even change data, and I feel that there is a chance that during the connection process we may miss some servers. Is there a way to ensure that SafePeak will be aware of all connections to the SQL Server, so its cache will remain intact?” Interesting question, as I remember that SafePeak (http://www.safepeak.com/Product/SafePeak-Overview) prefers that all traffic to the database goes through it. I decided to check out the features of SafePeak’s latest version (2.1) and look for an answer there. A: Indeed, I found that SafePeak has a feature they call “Logon Trigger” that is designed for exactly that purpose. It is located in the user interface under: Settings -> SQL instances management  ->  [your instance]  ->  [Logon Trigger] tab. From here you activate / deactivate it and control a white-list of server IPs and login names that SafePeak will ignore. After activation of the “Logon Trigger”, the SafePeak server is notified by SQL Server itself of each newly opened connection. SafePeak monitors those connections and decides whether there is something to do with them or not. On a typical installation SafePeak prefers all application and user connections to go via SafePeak – this way it knows about data and schema updates immediately (in real time). With activation of the SafePeak “Logon Trigger” a special CLR trigger is deployed on the SQL Server that notifies SafePeak of any connection that has not arrived via SafePeak. In such cases SafePeak can act to clear and lock the cache, or to ignore the connection. This feature ensures that SafePeak is aware of all connections, so the SafePeak cache remains exactly correct at all times. So even if a user, such as a DBA, connects to the SQL Server without going through SafePeak, SafePeak will know about it and take action. The notification does not impact the work of that connection; the user or application still continues to do whatever it planned to do. Note: I found that activating the Logon Trigger in SafePeak requires that the SafePeak SQL login has the following permissions: 1) CONTROL SERVER; 2) VIEW SERVER STATE; and 3) that the SQL Server instance is CLR enabled. Seeing SafePeak in action, I can say SafePeak is a fantastic resource for those who seek better performance for critical SQL Server apps. SafePeak promises to accelerate SQL Server applications in just several hours of installation, automatic learning and some optimization configuration (no code changes!!!). If better application and database performance means better business to you – I suggest you download and try SafePeak. The solution SafePeak offers is indeed unique, and the questions I receive are very interesting. Have any more questions on SafePeak? Please leave your question as a comment and I will try to get an answer for you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
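As a rough sketch of the prerequisites in the note above – the login name [SafePeakSvc] is only a placeholder for whatever SQL login SafePeak actually connects with, so treat this as an illustration rather than SafePeak's official setup script – the T-SQL would look something like this:
-- [SafePeakSvc] is a hypothetical login name; substitute the login SafePeak uses
GRANT CONTROL SERVER TO [SafePeakSvc];
GRANT VIEW SERVER STATE TO [SafePeakSvc];
-- Enable CLR on the instance so the logon trigger's CLR component can be deployed
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;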

    Read the article

  • Few events I’m speaking at in early 2013

    - by Mladen Prajdic
    2013 has started great and the SQL community is already brimming with events. At some of these events you can come say hi. I’ll be glad you do! These are the events with dates and locations that I know I’ll be speaking at so far.
February 16th: SQL Saturday #198 - Vancouver, Canada The session I’ll present in Vancouver is SQL Impossible: Restoring/Undeleting a table. Yes, you read the title right. No, it's not about the usual "one table per partition" and "restore full backup then copy the data over" methods. No, there are no 3rd party tools involved. Just you and your SQL Server. Yes, it's crazy. No, it's not for production purposes. And yes, that's why it's so much fun. Prepare to dive into the world of data pages, log records, deletes, truncates and backups and how it all works together to get your table back from the endless void. Want to know more? Come and see! This is an advanced level session where we’ll dive into the internals of data pages, transaction log records and page restores.
March 8th-9th: SQL Saturday #194 - Exeter, UK In Exeter I’ll be presenting twice. On the first day I’ll have a full day precon titled: From SQL Traces to Extended Events - The next big switch. This pre-con will give you insight into both of the current tracing technologies in SQL Server. The old SQL Trace which has served us well over the past 10 or so years is on its way out because the overhead and details it produces are no longer enough to deal with today's loads. The new Extended Events are a lightweight tracing mechanism built directly into the SQLOS, thus giving us information SQL Trace just couldn't. They were designed and built with performance in mind and it shows. Mastering Extended Events requires learning at least one new skill: XML querying. The second session I’ll have, on Saturday, is titled: SQL Injection from website to SQL Server. SQL Injection is still one of the biggest reasons various websites and applications get hacked. The solution, as everyone tells us, is simple. Use SQL parameters. But is that enough? In this session we'll look at how an attacker would go about using SQL Injection to gain access to your database, see its schema and data, take over the server, upload files and do various other mischief on your domain. This is a fun session that always brings out a few laughs in the audience because they didn’t realize what can be done.
April 23rd-25th: NTK conference - Bled, Slovenia (Slovenian website only) This is a conference with history. This year marks its 18th year running. It’s a relatively large IT conference that focuses on various Microsoft technologies like .Net, Azure, SQL Server, Exchange, Security, etc… The main session’s language is Slovenian but this is slowly changing so it’s becoming more interesting for foreign attendees. This year it’s happening in the beautiful town of Bled in the Alps. The scenery alone is worth the visit, wouldn’t you agree? And this year there are quite a few well known speakers present! Session title isn’t known yet.
May 2nd-4th: SQL Bits XI – Nottingham, UK SQL Bits is the largest SQL Server conference in Europe. It’s a 3 day conference with top speakers and content all dedicated to SQL Server. The session I’ll present here is an hour long version of the precon I’ll give in Exeter. 
From SQL Traces to Extended Events - The next big switch The session description is the same as for the Exeter precon but we'll focus more on how the Extended Events work with only a brief overview of old SQL Trace architecture.
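As a small, generic illustration of the "XML querying" skill the precon description mentions – this sketch is mine, not material from the session, and the session name is invented – an Extended Events session plus the query that reads its ring buffer target could look roughly like this:
-- Create and start a simple Extended Events session (hypothetical name)
CREATE EVENT SESSION [demo_capture_sql] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
    (ACTION (sqlserver.sql_text, sqlserver.database_id))
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION [demo_capture_sql] ON SERVER STATE = START;
GO
-- The target data comes back as XML, which is where the XML querying skill comes in
SELECT CAST(t.target_data AS XML) AS session_data
FROM sys.dm_xe_session_targets AS t
JOIN sys.dm_xe_sessions AS s
  ON s.address = t.event_session_address
WHERE s.name = 'demo_capture_sql';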

    Read the article

  • Gnome Shell Theme Problem on Ubuntu 11.10

    - by Khurram Majeed
    I am trying to install ANewStart GNOME shell themes on Ubuntu 11.10. I have installed the gnome shell extension for themes:
sudo add-apt-repository ppa:webupd8team/gnome3
sudo apt-get update
sudo apt-get install gnome-shell-extensions-user-theme
I got the instructions from here: ANewStart GNOME Shell Theme + AwOken Icons Theme = Pure Art. But when I go to "Advanced Settings - Shell Extensions" it's empty... There is nothing. Also there is an orange triangle sign next to the Shell Theme drop down in Advanced Settings - Theme. When I try to run gnome-tweak-tool from the terminal I get the following error:
imresh@imresh-laptop:~$ gnome-tweak-tool
CRITICAL: Error parsing schema org.gnome.shell (/usr/share/glib-2.0/schemas/org.gnome.shell.gschema.xml)
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/gtweak/gsettings.py", line 45, in __init__
    summary = key.getElementsByTagName("summary")[0].childNodes[0].data
IndexError: list index out of range
WARNING : Error detecting shell
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/gtweak/tweaks/tweak_shell_extensions.py", line 145, in __init__
    shell = GnomeShellFactory().get_shell()
  File "/usr/lib/python2.7/dist-packages/gtweak/utils.py", line 38, in getinstance
    instances[cls] = cls()
  File "/usr/lib/python2.7/dist-packages/gtweak/gshellwrapper.py", line 123, in __init__
    v = map(int,proxy.version.split("."))
  File "/usr/lib/python2.7/dist-packages/gtweak/gshellwrapper.py", line 46, in version
    return json.loads(self.execute_js('const Config = imports.misc.config; Config.PACKAGE_VERSION'))
  File "/usr/lib/python2.7/dist-packages/gtweak/gshellwrapper.py", line 39, in execute_js
    result, output = self.proxy.Eval('(s)', js)
  File "/usr/lib/python2.7/dist-packages/gi/overrides/Gio.py", line 148, in __call__
    kwargs.get('flags', 0), kwargs.get('timeout', -1), None)
  File "/usr/lib/python2.7/dist-packages/gi/types.py", line 43, in function
    return info.invoke(*args, **kwargs)
GError: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.Shell was not provided by any .service files
WARNING : Shell not running
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/gtweak/tweaks/tweak_shell.py", line 57, in __init__
    self._shell = GnomeShellFactory().get_shell()
  File "/usr/lib/python2.7/dist-packages/gtweak/utils.py", line 38, in getinstance
    instances[cls] = cls()
  File "/usr/lib/python2.7/dist-packages/gtweak/gshellwrapper.py", line 123, in __init__
    v = map(int,proxy.version.split("."))
  File "/usr/lib/python2.7/dist-packages/gtweak/gshellwrapper.py", line 46, in version
    return json.loads(self.execute_js('const Config = imports.misc.config; Config.PACKAGE_VERSION'))
  File "/usr/lib/python2.7/dist-packages/gtweak/gshellwrapper.py", line 39, in execute_js
    result, output = self.proxy.Eval('(s)', js)
  File "/usr/lib/python2.7/dist-packages/gi/overrides/Gio.py", line 148, in __call__
    kwargs.get('flags', 0), kwargs.get('timeout', -1), None)
  File "/usr/lib/python2.7/dist-packages/gi/types.py", line 43, in function
    return info.invoke(*args, **kwargs)
GError: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.Shell was not provided by any .service files
WARNING : Could not list shell extensions
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/gtweak/tweaks/tweak_shell.py", line 62, in __init__
    extensions = self._shell.list_extensions()
AttributeError: ShellThemeTweak instance has no attribute '_shell'
(gnome-tweak-tool:5323): Gtk-CRITICAL **: gtk_widget_get_preferred_height_for_width: assertion `width >= 0' failed
(the Gtk-CRITICAL assertion above is repeated many more times)
Please help me in fixing this. I have also restarted the computer many times; it does not make a difference.

    Read the article

  • Silverlight Cream for February 07, 2011 -- #1043

    - by Dave Campbell
    In this Issue: Roy Dallal, Kevin Dockx, Gill Cleeren, Oren Gal, Colin Eberhardt, Rudi Grobler, Jesse Liberty, Shawn Wildermuth, Kirupa Chinnathambi, Jeremy Likness, Martin Krüger(-2-), Beth Massi, and Michael Crump. Above the Fold: Silverlight: "A Circular ProgressBar Style using an Attached ViewModel" Colin Eberhardt WP7: "Isolated Storage" Jesse Liberty Lightswitch: "How To Create Outlook Appointments from a LightSwitch Application" Beth Massi Shoutouts: Gergely Orosz has a summary of his 4-part series on Styles in Silverlight: Everything a Developer Needs To Know From SilverlightCream.com: Silverlight Memory Leak, Part 2 Roy Dallal has part 2 of his memory leak posts up... and discusses the results of runnin VMMap and some hints on how to make best use of it. Using a Channel Factory in Silverlight (instead of adding a Service Reference). With cows. Kevin Dockx has a post up for those of you that don't like the generated code that comes about when adding a service reference, and the answer is a Channel Factory... and he has an example app in the post that populates a list of cows... honest ... check it out. Getting ready for Microsoft Silverlight Exam 70-506 (Part 4) Gill Cleeren has Part 4 of his deep-dive into studying for the Silverlight Certification exam. This time out he's got probably half a gazillion links for working with data... seriously! Sync unlimited instances of one Silverlight application How about a cross-browser sync of an unlimited number of instances of the same Silverlight app... Oren Gal has just that going on, and discusses his first two attempts and how he finally honed in on the solution. A Circular ProgressBar Style using an Attached ViewModel Wow... check out what Colin Eberhardt's done with the "Progress Bar" ... using an Attached View Model which he discussed in a post a while back... these are awesome! WP7 - Professional Audio Recorder Rudi Grobler discusses an audio recorder for WP7 that uses the NAudio audio library for not only the recording but visualization. Isolated Storage Jesse Liberty's got his 30th 'From Scratch' post up and this time he's talking about Isolated Storage. Learning OData? MSDN and Shawn Wildermuth has the videos for you! Shawn Wildermuth produced a couple series of videos for MSDN on OData: Getting Started and Consuming OData... get the link on Shawn's post. Creating Sample Data from a Class - Page 1 Kirupa Chinnathambi shows us how to use a schema of your own design in Blend... yet still have Blend produce sample data A Pivot-Style Data Grid without the DataGrid Jeremy Likness discusses the lack of an open-source grid with dynamic columns ... let him know if you've done one! ... and then he continues on to demonstrate his build-out of the same. Synchronize a freeform drawing and a real path creation Martin Krüger has a few new samples up in the Expression Gallery. This first is taking mouse movement in an InkPresenter and creating path statements from it in a canvas and playing them back. How to: use Storyboard completed behaviors Martin Krüger's next post is about Storyboards and firing one off the end of another, in Blend... so he ended up producing a behavior for doing that... and it's in the Expression Gallery How To Create Outlook Appointments from a LightSwitch Application Beth Massi has a new Lightswitch post up... her previous was email from Lightswitch... this is Outlook appointments... pretty darn cool. 
Quick run through of the WP7 Developer Tools January 2011 Michael Crump has a really good Quick look at the new WP7 Dev Tools that were released last week posted on his blog.

    Read the article

  • Big Data – Data Mining with Hive – What is Hive? – What is HiveQL (HQL)? – Day 15 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of the operational database in the Big Data story. In this article we will understand what Hive and HQL are in the Big Data story. Yahoo started working on PIG (we will understand that in the next blog post) for their application deployment on Hadoop. The goal of Yahoo was to manage their unstructured data. Similarly, Facebook started deploying their warehouse solutions on Hadoop, which resulted in HIVE. The reason for going with HIVE is that the traditional warehousing solutions are getting very expensive. What is HIVE? Hive is a data warehousing infrastructure for Hadoop. Its primary responsibility is to provide data summarization, query and analysis. It supports analysis of large datasets stored in Hadoop’s HDFS as well as on the Amazon S3 filesystem. The best part of HIVE is that it supports SQL-like access to structured data, known as HiveQL (or HQL), as well as big data analysis with the help of MapReduce. Hive is not built to get a quick response to queries; it is built for data mining applications. Data mining applications can take from several minutes to several hours to analyze the data, and HIVE is primarily used there. HIVE Organization The data are organized in three different formats in HIVE. Tables: They are very similar to RDBMS tables and contain rows and columns. Hive is just layered over the Hadoop File System (HDFS), hence tables are directly mapped to directories of the filesystem. It also supports tables stored in other native file systems. Partitions: Hive tables can have more than one partition. They are mapped to subdirectories in the file system as well. Buckets: In Hive data may be divided into buckets. Buckets are stored as files in partitions in the underlying file system. Hive also has a metastore which stores all the metadata. It is a relational database containing various information related to the Hive schema (column types, owners, key-value data, statistics etc.). We can use a MySQL database over here. What is HiveQL (HQL)? The Hive query language provides the basic SQL-like operations. Here are a few of the tasks which HQL can do easily. Create and manage tables and partitions Support various Relational, Arithmetic and Logical Operators Evaluate functions Download the contents of a table to a local directory or the results of queries to an HDFS directory Here is an example of HQL queries: SELECT upper(name), salesprice FROM sales; SELECT category, count(1) FROM products GROUP BY category; When you look at the above queries, you can see they are very similar to SQL queries. Tomorrow In tomorrow’s blog post we will discuss a very important component of the Big Data ecosystem – Pig. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
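To make the HQL description above a little more concrete, here is a rough sketch of the kind of statements the post is talking about. The table, column and file names are invented for this illustration and are not taken from the original article:
-- Hypothetical sales table, partitioned by sale date
CREATE TABLE sales (
  name       STRING,
  salesprice DOUBLE
)
PARTITIONED BY (sale_date STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

-- Load a local file into a single partition (each partition maps to an HDFS subdirectory)
LOAD DATA LOCAL INPATH '/tmp/sales_2013-10-15.csv'
INTO TABLE sales PARTITION (sale_date = '2013-10-15');

-- Aggregate across partitions; Hive compiles this into MapReduce jobs behind the scenes
SELECT sale_date, count(1), sum(salesprice)
FROM sales
GROUP BY sale_date;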

    Read the article

  • Entity Framework - Single EDMX Mapping Multiple Databases

    - by michaelalisonalviar
    Because of my recent craze over Entity Framework, thanks to Sir Humprey, I have continuously searched the Internet for tutorials on how to apply it to our current system. So I've come to learn that with EF I can eliminate the numerous methods/functions for CRUD operations and my overused assigning of connection strings, Data Adapters and Data Readers, as Entity Framework will map my desired database, do its magic to create entities for each table I want (using EF Power Tools), and generate all the methods/functions for my CRUD operations. But as I began applying it to a new project I was assigned to, I realized our current server is designed to keep similar entities in different databases. For example, our lookup tables are stored in LookupDB, accounting-related tables in AccountingDB, and sales-related tables in SalesDB. My dilemma is that I have to use an existing table from LookupDB as a look-up for my new table. Then I found Miss Rachel's blog (here) - thank you Miss Rachel! - which enables me to let EF think that my TableLookup1 is in the AccountingDB using the following steps. I'm on VS 2010, using C#, Entity Framework 5, and SQL Server 2008 as our DB server.
Step 1: Creating a SQL Synonym. If you want a more detailed discussion of synonyms, this is what I read -> (link here). To put it simply, a synonym enables me to simplify my query for the look-up table when I'm using the AccountingDB from
SELECT [columns] FROM LookupDB.dbo.TableLookup1
to
SELECT [columns] FROM TableLookup1
Syntax: CREATE SYNONYM TableLookup1 (1) FOR LookupDB.dbo.TableLookup1 (2)
1. What you want to call the table in your other DB
2. DatabaseName.schema.TableName
Step 2: We will now follow Miss Rachel's steps. You can either visit the link on the original topic I posted earlier or just follow the steps I made. 1. I created a Visual Basic solution that will contain the 4 projects needed to complete the merging. 2. The first project will contain the EDMX file pointing to the AccountingDB. 3. The second project will contain the EDMX file pointing to the LookupDB. 4. The third project will be our repository for the merged EDMX file. Create an EDMX file pointing to AccountingDB, as this is the database that we created the synonym on. Reminder: Aside from using the same name for the entities, please make sure that you have the same Model Namespace for all your entities. 5. The fourth project will contain the beautiful EDMX merger that Miss Rachel created, which will free you from hard-coding the merge / re-coding the EDMX file of the third project every time a change is made to either of the first two projects' EDMX files. 6. Run the solution, but make sure that in the solution's properties Single startup project is selected and the project containing the EDMX merger is chosen. 7. After running the solution, double click on the EDMX file of the 3rd project and set Lazy Loading Enabled = False. This will let you use the tables/entities that you see in that EDMX file. 8. Feel free to do your CRUD operations. I don't know if EF 5 already has a feature to support synonyms, as I am still a newbie on that aspect, but I have seen a link where there are suggestions for Entity Framework upgrades, and one is "Support for multiple databases". So that's it! Thanks for reading!
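As a concrete sketch of Step 1 above (database, schema and table names follow the example in the post – adjust them to your own environment), the T-SQL would look something like this:
-- Run this in the database the EDMX points at (AccountingDB in the example)
USE AccountingDB;
GO
CREATE SYNONYM dbo.TableLookup1 FOR LookupDB.dbo.TableLookup1;
GO
-- Queries and the EF model in AccountingDB can now reference the lookup table locally
SELECT TOP (10) * FROM dbo.TableLookup1;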

    Read the article

  • Oracle and Partners release CAMP specification for PaaS Management

    - by macoracle
    Cloud Application Management for Platforms The public release of the Cloud Application Management for Platforms (CAMP) specification, an initial draft of what is expected to become an industry standard self service interface specification for Platform as a Service (PaaS) management, represents a significant milestone in cloud standards development. Created by several players in the emerging cloud industry, including Oracle, the specification is being submitted to the OASIS standards organization (draft charter) where it will be finalized in an open development process. CAMP is targeted at application developers and deployers for self service management of their application on a Platform-as-a-Service cloud. It is closely aligned with the application development process where applications are typically developed in an Application Development Environment (ADE) and then deployed into a private or public platform cloud. CAMP standardizes the model behind an application’s dependencies on platform components and provides a standardized format for moving applications between the ADE and the cloud, and if and when desirable, between clouds. Once an application is deployed, CAMP provides users with a standardized self service interface to the PaaS offering, allowing the cloud consumer to manage the lifecycle of the application on that platform and the use of the underlying platform services. The CAMP interface includes a RESTful binding of the CAMP model onto the standard HTTP protocol, using JSON as the encoding for the model resources. The model for CAMP includes resources that represent the Application, its Components and any Platform Components that they depend on. It's important PaaS Cloud consumers understand that for a PaaS cloud, these are the abstractions that the user would prefer to work with, not Virtual Machines and the various resources such as compute power, storage and networking. PaaS cloud consumers would also not like to become system administrators for the infrastructure that is hosting their applications and component services. CAMP works on this more abstract level, and yet still accommodates platforms that are built using an underlying infrastructure cloud. With CAMP, it is up to the cloud provider whether or not this underlying infrastructure is exposed to the consumer. One major challenge addressed by the CAMP specification is that of ensuring that application deployment on a new platform is as seamless and error free as possible. This becomes even more difficult when the application may have been developed for a different platform and is now moving to a new one. In CAMP this is accomplished by matching the requirements of the application and its components to the specific capabilities of the underlying platform. This needs to be done regardless of whether there are existing pools of virtualized platform resources (such as a database pool) which are provisioned(on the basis of a schema for example), or whether the platform component is really just a set of virtual machines drawn from an infrastructure pool. The interoperability between platform clouds that CAMP offers means that a CAMP client such as an ADE can target multiple clouds with a single common interface. Applications can even be spread across multiple platform clouds and then managed without needing to create a specialized adapter to manage the components running in each cloud. The development of CAMP has been an effort by a small set of companies, but there are significant advantages to this approach. 
For example, the way that each of these companies creates their platforms is different enough to ensure that CAMP can cover a wide range of actual deployments. CAMP is now entering the next phase of development under the guidance of an open standards organization, OASIS, which will likely broaden its capabilities. Our hope, however, is to keep it concise and minimal, to ease implementation and adoption. Over time there will be many different types of platform components that applications can use and which need management. CAMP at this point only includes one example of this (in an appendix) – Database as a Service. I am looking forward to the start of the CAMP Technical Committee in OASIS and will do my best to ensure a successful development process. Hope to see you there.

    Read the article

  • Access Denied

    - by Tony Davis
    When Microsoft executives wake up in the night screaming, I suspect they are having a nightmare about their own version of Frankenstein's monster. Created with the best of intentions, without thinking too hard of the long-term strategy, and having long outlived its usefulness, the monster still lives on, occasionally wreaking vengeance on the innocent. Its name is Access; a living synthesis of disparate body parts that is resistant to all attempts at a mercy-killing. In 1986, Microsoft had no database products, and needed one for their new OS/2 operating system, the successor to MSDOS. In 1986, they bought exclusive rights to Sybase DataServer, and were also intent on developing a desktop database to capture Ashton-Tate's dominance of that market, with dbase. This project, first called 'Omega' and later 'Cirrus', eventually spawned two products: Visual Basic in 1991 and Access in late 1992. Whereas Visual Basic battled with PowerBuilder for dominance in the client-server market, Access easily won the desktop database battle, with Dbase III and DataEase falling away. Access did an excellent job of abstracting and simplifying the task of building small database applications in a short amount of time, for a small number of departmental users, and often for a transient requirement. There is an excellent front end and forms generator. We not only see it in Access but parts of it also reappear in SSMS. It's good. A business user can pull together useful reports, without relying on extensive technical support. A skilled Access programmer can deliver a fairly sophisticated application, whilst the traditional client-server programmer is still sharpening his pencil. Even for the SQL Server programmer, the forms generator of Access is useful for sketching out application designs. So far, so good, but here's where the problems start; Access ties together two different products and the backend of Access is the bugbear. The limitations of Jet/ACE are well-known and documented. They range from MDB files that are prone to corruption, especially as they grow in size, pathetic security, and "copy and paste" Backups. The biggest problem though, was an infamous lack of scalability. Because Microsoft never realized how long the product would last, they put little energy into improving the beast. Microsoft 'ate their own dog food' by using Access for Microsoft Exchange and Outlook. They choked on it. For years, scalability and performance problems with Exchange Server have been laid at the door of the Jet Blue engine on which it relies. Substantial development work in Exchange 2010 was required, just in order to improve the engine and storage schema so that it more efficiently handled the reading and writing of mails. The alternative of using SQL Server just never panned out. The Jet engine was designed to limit concurrent users to a small number (10-20). When Access applications outgrew this, bitter experience proved that there really is no easy upgrade path from Access to SQL Server, beyond rewriting the whole lot from scratch. The various initiatives to do this never quite bridged the cultural gulf between Access and a true relational database So, what are the obvious alternatives for small, strategic database applications? I know many users who, for simple 'list maintenance' requirements are very happy using Excel databases. 
Surely, now that PowerPivot has led the way, it is time for Microsoft to offer a new RAD package for database application development; namely an Excel-based front end for SQL Server Express. In that way, we'll have a powerful and familiar front end, to a scalable database, and a clear upgrade path when an app takes off and needs to go enterprise. Cheers, Tony.

    Read the article

  • Unit Testing Framework for XQuery

    - by Knut Vatsendvik
    This posting provides a unit testing framework for XQuery using Oracle Service Bus. It allows you to write a test case to run your XQuery transformations in an automated fashion. When the test case is run, the framework returns any differences found in the response. The complete code sample with install instructions can be downloaded from here. Writing a Unit Test You start a new Test Case by creating a Proxy Service from Workshop, which comes with Oracle Service Bus. In the General Configuration page select the Service Type to be Messaging Service. In the Message Type Configuration page link both the Request & Response Message Type to the TestCase element of the UnitTest.xsd schema. The TestCase element consists of the following child elements. The ID and optional Name elements are simply used for reference. The Transformation element is the XQuery resource to be executed. The Input elements represent the input to run the XQuery with. The Output element represents the expected output. These XML documents are “also” represented as an XQuery resource, where the XQuery function takes no arguments and returns the XML document. Why not pass the test data with the TestCase? Passing an XML structure in another XML structure is not very easy, or at least not very human readable. Therefore it was chosen to represent the test data as a loadable resource in the OSB. However, you are free to go ahead with another approach on this if you want. The XMLDiff element represents any differences found. A sample input is shown here. Modeling the Message Flow The next step is to model the message flow of the Proxy Service. In the Request Pipeline create a stage node that loads the test case input data. For this, specify a dynamic XQuery expression that evaluates at runtime to the name of a pre-registered XQuery resource. The expression is of course set by the input data from the test case. Add a Run stage node. Assign the result of the XQuery that is to be run to a context variable. Define a mapping for each of the input variables added in the previous stage. Add a Compare stage. As with the input data, load the expected output data. Do a compare using the provided XMLDiff XQuery, where the first argument is the loaded output test data and the second argument is the result from the Run stage. Any differences found are placed back into the test case XMLDiff element. In case of any unexpected failure while processing, add an Error Handler to the Pipeline to capture the fault. To pass back the result, add the following Insert action in the Response Pipeline. A sample output is shown here.

    Read the article

< Previous Page | 123 124 125 126 127 128 129 130 131 132 133 134  | Next Page >