Search Results

Search found 6814 results on 273 pages for 'payment processing'.

  • WysiHat 'responds_to_parent' undefined method - Ruby on Rails

    - by bgadoci
    I just successfully installed WysiHat in my rails blog. Seems that the 'add a picture' feature is not working. It successfully allows me to find and select a picture from my desktop but upon clicking save, it does nothing. I also have Paperclip successfully installed and can attach images to records outside the WYSIHAT form field. Any ideas? (let me know if I need to post any code). Also, WysiHat-engine uses facebox, not sure if that is relevant. UPDATE: Added Server Log, looks like paperclip is saving it so not sure what else is going wrong. Processing PostsController#update (for 127.0.0.1 at 2010-04-23 16:42:14) [PUT] Parameters: {"commit"=>"Update", "post"=>{"body"=>"<p>Duis autem vel eum iriure dolor in hendrerit in vulputate velit esse molestie consequat, vel illum dolore eu feugiat nulla facilisis at vero eros et accumsan et iusto odio dignissim qui blandit praesent luptatum zzril delenit augue duis dolore te feugait nulla facilisi. Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet dolore magna aliquam erat volutpat.</p>", "title"=>"Rails Code for Search"}, "authenticity_token"=>"hndm6pxaPLfgnSMFAmLDGNo86mZG3XnlfJoNOI/P+O8=", "id"=>"105"} Post Load (0.2ms) SELECT * FROM "posts" WHERE ("posts"."id" = 105) Post Update (0.3ms) UPDATE "posts" SET "updated_at" = '2010-04-23 21:42:14', "body" = '<p>Duis autem vel eum iriure dolor in hendrerit in vulputate velit esse molestie consequat, vel illum dolore eu feugiat nulla facilisis at vero eros et accumsan et iusto odio dignissim qui blandit praesent luptatum zzril delenit augue duis dolore te feugait nulla facilisi. Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet dolore magna aliquam erat volutpat.</p>' WHERE "id" = 105 [paperclip] Saving attachments. Redirected to http://localhost:3000/posts/105 Completed in 12ms (DB: 0) | 302 Found [http://localhost/posts/105] UPDATE 2 I installed ImageMagic and now I get the following error. Processing WysihatFilesController#index (for 127.0.0.1 at 2010-04-23 23:27:57) [GET] Parameters: {"editor"=>"post_body_editor"} WysihatFile Load (0.3ms) SELECT * FROM "wysihat_files" Rendering wysihat_files/index Rendered wysihat_files/_form (1.9ms) Completed in 4ms (View: 3, DB: 0) | 200 OK [http://localhost/wysihat_files/?editor=post_body_editor] Processing WysihatFilesController#create (for 127.0.0.1 at 2010-04-23 23:28:09) [POST] Parameters: {"commit"=>"Save changes", "wysihat_file"=>{"file"=>#<File:/var/folders/F3/F3ovLEb1EMW4aZ5nsRvRlU+++TI/-Tmp-/RackMultipart20100423-43326-1mzeb3s-0>}, "authenticity_token"=>"IHF9Ghz6gYuAeNOUYhna+O0A4WrDbm4iha4Tsavu97o="} NoMethodError (undefined method `responds_to_parent' for #<WysihatFilesController:0x10352a2c0>): vendor/gems/wysihat-engine-0.1.12/app/controllers/wysihat_files_controller.rb:10:in `create' Rendered rescues/_trace (25.2ms) Rendered rescues/_request_and_response (0.3ms) Rendering rescues/layout (internal_server_error) Update 3 After reading a comment below I am thinking that perhaps I am missing something in my Post model. Here is the code for the model. class Post < ActiveRecord::Base has_attached_file :photo validates_presence_of :body, :title has_many :comments, :dependent => :destroy has_many :tags, :dependent => :destroy has_many :ugtags, :dependent => :destroy has_many :votes, :dependent => :destroy belongs_to :user after_create :self_vote def self_vote # I am assuming you have a user_id field in `posts` and `votes` table. 
self.votes.create(:user => self.user) end cattr_reader :per_page @@per_page = 10 end

  • How to use JAXB to process messages from two separate schemas (with the same root element name)

    - by sairn
    Hi We have a client that is sending xml messages that are produced from two separate schemas. We would like to process them in a single application, since the content is related. We cannot modify the client's schema that they are using to produce the XML messages... We can modify our own copies of the two schema (or binding.jxb) -- if it helps -- in order to enable our JAXB processing of messages created from the two separate schemas. Unfortunately, both schemas have the same root element name (see below). QUESTION: Does JAXB prohibit absolutely the processing two schemas that have the same root element name? -- If so, I will stop my "easter egg" hunt for a solution to this... ---Or, is there some workaround that would enable us to use JAXB for processing these XML messages produced from two different schemas? schema1 (note the root element name: "A"): <?xml version="1.0" encoding="UTF-8"?> <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified"> <xsd:element name="A"> <xsd:complexType> <xsd:sequence> <xsd:element name="AA"> <xsd:complexType> <xsd:sequence> <xsd:element name="AAA1" type="xsd:string" /> <xsd:element name="AAA2" type="xsd:string" /> <xsd:element name="AAA3" type="xsd:string" /> </xsd:sequence> </xsd:complexType> </xsd:element> <xsd:element name="BB"> <xsd:complexType> <xsd:sequence> <xsd:element name="BBB1" type="xsd:string" /> <xsd:element name="BBB2" type="xsd:string" /> </xsd:sequence> </xsd:complexType> </xsd:element> </xsd:sequence> </xsd:complexType> </xsd:element> </xsd:schema> schema2 (note, again, using the same root element name: "A") <?xml version="1.0" encoding="UTF-8"?> <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified"> <xsd:element name="A"> <xsd:complexType> <xsd:sequence> <xsd:element name="CCC"> <xsd:complexType> <xsd:sequence> <xsd:element name="DDD1" type="xsd:string" /> <xsd:element name="DDD2" type="xsd:string" /> </xsd:sequence> </xsd:complexType> </xsd:element> <xsd:element name="EEE"> <xsd:complexType> <xsd:sequence> <xsd:element name="EEE1"> <xsd:complexType> <xsd:sequence> <xsd:element name="FFF1" type="xsd:string" /> <xsd:element name="FFF2" type="xsd:string" /> </xsd:sequence> </xsd:complexType> </xsd:element> <xsd:element name="EEE2" type="xsd:string" /> </xsd:sequence> </xsd:complexType> </xsd:element> </xsd:sequence> </xsd:complexType> </xsd:element> </xsd:schema>
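
     One workaround that is sometimes suggested for this kind of clash (not something confirmed in the question above) is to compile each schema into its own Java package, so the two generated "A" root classes never collide, and then build a separate JAXBContext per package. A rough sketch, using hypothetical package names:

         import java.io.File;
         import javax.xml.bind.JAXBContext;
         import javax.xml.bind.JAXBException;
         import javax.xml.bind.Unmarshaller;

         public class TwoSchemaDemo {
             public static void main(String[] args) throws JAXBException {
                 // Each schema is compiled into its own package, e.g.
                 //   xjc -p com.example.schema1 schema1.xsd
                 //   xjc -p com.example.schema2 schema2.xsd
                 // so the generated ObjectFactory and "A" classes stay separate.
                 JAXBContext ctx1 = JAXBContext.newInstance("com.example.schema1");
                 JAXBContext ctx2 = JAXBContext.newInstance("com.example.schema2");

                 Unmarshaller u1 = ctx1.createUnmarshaller();
                 Unmarshaller u2 = ctx2.createUnmarshaller();

                 // Something outside the XML itself (a message header, the source
                 // queue, a file naming convention) has to say which schema a given
                 // message uses, because the root element name alone cannot.
                 Object fromSchema1 = u1.unmarshal(new File("message1.xml"));
                 Object fromSchema2 = u2.unmarshal(new File("message2.xml"));

                 System.out.println(fromSchema1.getClass().getName());
                 System.out.println(fromSchema2.getClass().getName());
             }
         }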

  • Function for counting characters/words not working

    - by user1742729
    <!DOCTYPE HTML> <html> <head> <title> Javascript - stuff </title> <script type="text/javascript"> <!-- function GetCountsAll( Wordcount, Sentancecount, Clausecount, Charactercount ) { var TextString = document.getElementById("Text").innerHTML; var Wordcount = 0; var Sentancecount = 0; var Clausecount = 0; var Charactercount = 0; // For loop that runs through all characters incrementing the variable(s) value each iteration for (i=0; i < TextString.length; i++); if (TextString.charAt(i) == " " = true) Wordcount++; return Wordcount; if (TextString.charAt(i) = "." = true) Sentancecount++; Clausecount++; return Sentancecount; if (TextString.charAt(i) = ";" = true) Clausecount++; return Clausecount; } --> </script> </head> <body> <div id="Text"> It is important to remember that XHTML is a markup language; it is not a programming language. The language only describes the placement and visual appearance of elements arranged on a page; it does not permit users to manipulate these elements to change their placement or appearance, or to perform any "processing" on the text or graphics to change their content in response to user needs. For many Web pages this lack of processing capability is not a great drawback; the pages are simply displays of static, unchanging, information for which no manipulation by the user is required. Still, there are cases where the ability to respond to user actions and the availability of processing methods can be a great asset. This is where JavaScript enters the picture. </div> <input type = "button" value = "Get Counts" class = "btnstyle" onclick = "GetCountsAll()"/> <br/> <span id= "Charactercount"> </span> Characters <br/> <span id= "Wordcount"> </span> Words <br/> <span id= "Sentancecount"> </span> Sentences <br/> <span id= "ClauseCount"> </span> Clauses <br/> </body> </html> I am a student and still learning JavaScript, so excuse any horrible mistakes. The script is meant to calculate the number of characters, words, sentences, and clauses in the passage. It's, plainly put, just not working. I have tried a multitude of things to get it to work for me and have gotten a plethora of different errors but no matter what I can NOT get this to work. Please help! (btw i know i misspelled sentence)
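
     For reference, a corrected sketch of the counting logic described above (a rewrite for illustration, keeping the original element IDs): the for loop must not end in a stray semicolon, the character comparisons need == or === rather than =, the function should not return before the loop finishes, and the counts have to be written into the output spans.

         function GetCountsAll() {
             var textString = document.getElementById("Text").innerHTML;
             var wordCount = 0, sentenceCount = 0, clauseCount = 0;
             var characterCount = textString.length;

             for (var i = 0; i < textString.length; i++) {  // no ";" after the ")" -- that would empty the loop body
                 var ch = textString.charAt(i);
                 if (ch === " ") wordCount++;               // comparison, not assignment; counting spaces only approximates words
                 if (ch === ".") { sentenceCount++; clauseCount++; }
                 if (ch === ";") clauseCount++;
             }

             // write the results into the spans instead of returning early
             document.getElementById("Charactercount").innerHTML = characterCount;
             document.getElementById("Wordcount").innerHTML = wordCount;
             document.getElementById("Sentancecount").innerHTML = sentenceCount;
             document.getElementById("ClauseCount").innerHTML = clauseCount;
         }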

  • Parallelism in .NET – Part 7, Some Differences between PLINQ and LINQ to Objects

    - by Reed
    In my previous post on Declarative Data Parallelism, I mentioned that PLINQ extends LINQ to Objects to support parallel operations.  Although nearly all of the same operations are supported, there are some differences between PLINQ and LINQ to Objects.  By introducing Parallelism to our declarative model, we add some extra complexity.  This, in turn, adds some extra requirements that must be addressed. In order to illustrate the main differences, and why they exist, let’s begin by discussing some differences in how the two technologies operate, and look at the underlying types involved in LINQ to Objects and PLINQ . LINQ to Objects is mainly built upon a single class: Enumerable.  The Enumerable class is a static class that defines a large set of extension methods, nearly all of which work upon an IEnumerable<T>.  Many of these methods return a new IEnumerable<T>, allowing the methods to be chained together into a fluent style interface.  This is what allows us to write statements that chain together, and lead to the nice declarative programming model of LINQ: double min = collection .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24) .Min(item => item.PerformComputation()); .csharpcode, .csharpcode pre { font-size: small; color: black; font-family: consolas, "Courier New", courier, monospace; background-color: #ffffff; /*white-space: pre;*/ } .csharpcode pre { margin: 0em; } .csharpcode .rem { color: #008000; } .csharpcode .kwrd { color: #0000ff; } .csharpcode .str { color: #006080; } .csharpcode .op { color: #0000c0; } .csharpcode .preproc { color: #cc6633; } .csharpcode .asp { background-color: #ffff00; } .csharpcode .html { color: #800000; } .csharpcode .attr { color: #ff0000; } .csharpcode .alt { background-color: #f4f4f4; width: 100%; margin: 0em; } .csharpcode .lnum { color: #606060; } Other LINQ variants work in a similar fashion.  For example, most data-oriented LINQ providers are built upon an implementation of IQueryable<T>, which allows the database provider to turn a LINQ statement into an underlying SQL query, to be performed directly on the remote database. PLINQ is similar, but instead of being built upon the Enumerable class, most of PLINQ is built upon a new static class: ParallelEnumerable.  When using PLINQ, you typically begin with any collection which implements IEnumerable<T>, and convert it to a new type using an extension method defined on ParallelEnumerable: AsParallel().  This method takes any IEnumerable<T>, and converts it into a ParallelQuery<T>, the core class for PLINQ.  There is a similar ParallelQuery class for working with non-generic IEnumerable implementations. This brings us to our first subtle, but important difference between PLINQ and LINQ – PLINQ always works upon specific types, which must be explicitly created. Typically, the type you’ll use with PLINQ is ParallelQuery<T>, but it can sometimes be a ParallelQuery or an OrderedParallelQuery<T>.  Instead of dealing with an interface, implemented by an unknown class, we’re dealing with a specific class type.  This works seamlessly from a usage standpoint – ParallelQuery<T> implements IEnumerable<T>, so you can always “switch back” to an IEnumerable<T>.  The difference only arises at the beginning of our parallelization.  When we’re using LINQ, and we want to process a normal collection via PLINQ, we need to explicitly convert the collection into a ParallelQuery<T> by calling AsParallel().  
There is an important consideration here – AsParallel() does not need to be called on your specific collection, but rather any IEnumerable<T>.  This allows you to place it anywhere in the chain of methods involved in a LINQ statement, not just at the beginning.  This can be useful if you have an operation which will not parallelize well or is not thread safe.  For example, the following is perfectly valid, and similar to our previous examples: double min = collection .AsParallel() .Select(item => item.SomeOperation()) .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24) .Min(item => item.PerformComputation()); However, if SomeOperation() is not thread safe, we could just as easily do: double min = collection .Select(item => item.SomeOperation()) .AsParallel() .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24) .Min(item => item.PerformComputation()); In this case, we’re using standard LINQ to Objects for the Select(…) method, then converting the results of that map routine to a ParallelQuery<T>, and processing our filter (the Where method) and our aggregation (the Min method) in parallel. PLINQ also provides us with a way to convert a ParallelQuery<T> back into a standard IEnumerable<T>, forcing sequential processing via standard LINQ to Objects.  If SomeOperation() was thread-safe, but PerformComputation() was not thread-safe, we would need to handle this by using the AsEnumerable() method: double min = collection .AsParallel() .Select(item => item.SomeOperation()) .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24) .AsEnumerable() .Min(item => item.PerformComputation()); Here, we’re converting our collection into a ParallelQuery<T>, doing our map operation (the Select(…) method) and our filtering in parallel, then converting the collection back into a standard IEnumerable<T>, which causes our aggregation via Min() to be performed sequentially. This could also be written as two statements, as well, which would allow us to use the language integrated syntax for the first portion: var tempCollection = from item in collection.AsParallel() let e = item.SomeOperation() where (e.SomeProperty > 6 && e.SomeProperty < 24) select e; double min = tempCollection.AsEnumerable().Min(item => item.PerformComputation()); This allows us to use the standard LINQ style language integrated query syntax, but control whether it’s performed in parallel or serial by adding AsParallel() and AsEnumerable() appropriately. The second important difference between PLINQ and LINQ deals with order preservation.  PLINQ, by default, does not preserve the order of of source collection. This is by design.  In order to process a collection in parallel, the system needs to naturally deal with multiple elements at the same time.  Maintaining the original ordering of the sequence adds overhead, which is, in many cases, unnecessary.  Therefore, by default, the system is allowed to completely change the order of your sequence during processing.  If you are doing a standard query operation, this is usually not an issue.  However, there are times when keeping a specific ordering in place is important.  If this is required, you can explicitly request the ordering be preserved throughout all operations done on a ParallelQuery<T> by using the AsOrdered() extension method.  This will cause our sequence ordering to be preserved. For example, suppose we wanted to take a collection, perform an expensive operation which converts it to a new type, and display the first 100 elements.  
In LINQ to Objects, our code might look something like: // Using IEnumerable<SourceClass> collection IEnumerable<ResultClass> results = collection .Select(e => e.CreateResult()) .Take(100); If we just converted this to a parallel query naively, like so: IEnumerable<ResultClass> results = collection .AsParallel() .Select(e => e.CreateResult()) .Take(100); We could very easily get a very different, and non-reproducable, set of results, since the ordering of elements in the input collection is not preserved.  To get the same results as our original query, we need to use: IEnumerable<ResultClass> results = collection .AsParallel() .AsOrdered() .Select(e => e.CreateResult()) .Take(100); This requests that PLINQ process our sequence in a way that verifies that our resulting collection is ordered as if it were processed serially.  This will cause our query to run slower, since there is overhead involved in maintaining the ordering.  However, in this case, it is required, since the ordering is required for correctness. PLINQ is incredibly useful.  It allows us to easily take nearly any LINQ to Objects query and run it in parallel, using the same methods and syntax we’ve used previously.  There are some important differences in operation that must be considered, however – it is not a free pass to parallelize everything.  When using PLINQ in order to parallelize your routines declaratively, the same guideline I mentioned before still applies: Parallelization is something that should be handled with care and forethought, added by design, and not just introduced casually.
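
     To see the ordering point in isolation, here is a small self-contained example (not code from the article) that compiles against System.Linq and shows why AsOrdered() matters when a parallel query is followed by Take():

         using System;
         using System.Linq;

         class PlinqOrderingDemo
         {
             static void Main()
             {
                 var numbers = Enumerable.Range(0, 1000000);

                 // Without AsOrdered(), PLINQ is free to reorder the source, so the
                 // first 100 results would not reliably be the squares of 0..99.
                 var first100Squares = numbers
                     .AsParallel()
                     .AsOrdered()
                     .Select(n => (long)n * n)
                     .Take(100)
                     .ToList();

                 Console.WriteLine(first100Squares[99]);   // 9801, matching the sequential query
             }
         }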

  • In Which We Demystify A Few Docupresentment Settings And Learn the Ethos of the Author

    - by Andy Little
    It's no secret that Docupresentment (part of the Oracle Documaker suite) is powerful tool for integrating on-demand and interactive applications for publishing with the Oracle Documaker framework.  It's also no secret there are are many details with respect to the configuration of Docupresentment that can elude even the most erudite of of techies.  To be sure, Docupresentment will work for you right out of the box, and in most cases will suit your needs without toying with a configuration file.  But, where's the adventure in that?   With this inaugural post to That's The Way, I'm going to introduce myself, and what my aim is with this blog.  If you didn't figure it out already by checking out my profile, my name is Andy and I've been with Oracle (nee Skywire Software nee Docucorp nee Formmaker) since the formative years of 1998.  Strangely, it doesn't seem that long ago, but it's certainly a lifetime in the age of technology.  I recall running a BBS from my parent's basement on a 1200 baud modem, and the trepidation and sweaty-palmed excitement of upgrading to the power and speed of 2400 baud!  Fine, I'll admit that perhaps I'm inflating the experience a bit, but I was kid!  This is the stuff of War Games and King's Quest I and the demise of TI-99 4/A.  Exciting times.  So fast-forward a bit and I'm 12 years into a career in the world of document automation and publishing working for the best (IMHO) software company on the planet.  With That's The Way I hope to shed a little light and peek under the covers of some of the more interesting aspects of implementations involving the tech space within the Oracle Insurance Global Business Unit (IGBU), which includes Oracle Documaker, Rating & Underwriting, and Policy Administration to name a few.  I may delve off course a bit, and you'll likely get a dose of humor (at least in my mind) but I hope you'll glean at least a tidbit of usefulness with each post.  Feel free to comment as I'm a fairly conversant guy and happy to talk -- it's stopping the talking that's the hard part... So, back to our regularly-scheduled post, already in progress.  By this time you've visited Oracle's E-Delivery site and acquired your properly-licensed version of Oracle Documaker.  Wait -- you didn't find it?  Understandable -- navigating the voluminous download library within Oracle can be a daunting task.  It's pretty simple once you’ve done it a few times.  Login to the e-delivery site, and accept the license terms and restrictions.  Then, you’ll be able to select the Oracle Insurance Applications product pack and your appropriate platform. Click Go and you’ll see a list of applicable products, and you’ll click on Oracle Documaker Media Pack (as I went to press with this article the version is 11.4): Finally, click the Download button next to Docupresentment (again, version at press time is 2.2 p5). This should give you a ZIP file that contains the installation packages for the Docupresentment Server and Client, cryptically named IDSServer22P05W32.exe and IDSClient22P05W32.exe. At this time, I’d like to take a little detour and explain that the world of Oracle, like most technical companies, is rife with acronyms.  One of the reasons Skywire Software was a appealing to Oracle was our use of many acronyms, including the occasional use of multiple acronyms with the same meaning.  I apologize in advance and will try to point these out along the way.  
Here’s your first sticky note to go along with that: IDS = Internet Document Server = Docupresentment Once you’ve completed the installation, you’ll have a shiny new Docupresentment server and client, and if you installed the default location it will be living in c:\docserv. Unix users, I’m one of you!  You’ll find it by default in  ~/docupresentment/docserv.  Forging onward with the meat of this post is learning about some special configuration options.  By now you’ve read the documentation included with the download (specifically ids_book.pdf) which goes into some detail of the rubric of the configuration file and in fact there’s even a handy utility that provides an interface to the configuration file (see Running IDSConfig in the documentation).  But who wants to deal with a configuration utility when we have the tools and technology to edit the file <gasp> by hand! I shall now proceed with the standard Information Technology Under the Hood Disclaimer: Please remember to back up any files before you make changes.  I am not responsible for any havoc you may wreak! Go to your installation directory, and locate your docserv.xml file.  Open it in your favorite XML editor.  I happen to be fond of Notepad++ with the XML Tools plugin.  Almost immediately you will behold the splendor of the configuration file.  Just take a moment and let that sink in.  Ok – moving on.  If you reviewed the documentation you know that inside the root <configuration> node there are multiple <section> nodes, each containing a specific group of settings.  Let’s take a look at <section name=”DocumentServer”>: There are a few entries I’d like to discuss.  First, <entry name=”StartCommand”>. This should be pretty self-explanatory; it’s the name of the executable that’s run when you fire up Docupresentment.  Immediately following that is <entry name=”StartArguments”> and as you might imagine these are the arguments passed to the executable.  A few things to point out: The –Dids.configuration=docserv.xml parameter specifies the name of your configuration file. The –Dlogging.configuration=logconf.xml parameter specifies the name of your logging configuration file (this uses log4j so bone up on that before you delve here). The -Djava.endorsed.dirs=lib/endorsed parameter specifies the path where 3rd party Java libraries can be located for use with Docupresentment.  More on that in another post. The <entry name=”Instances”> allows you to specify the number of instances of Docupresentment that will be started.  By default this is two, and generally two instances per CPU is adequate, however you will always need to perform load testing to determine the sweet spot based on your hardware and types of transactions.  You may have many, many more instances than 2. Time for a sidebar on instances.  An instance is nothing more than a separate process of Docupresentment.  The Docupresentment service that you fire up with docserver.bat or docserver.sh actually starts a watchdog process, which is then responsible for starting up the actual Docupresentment processes.  Each of these act independently from one another, so if one crashes, it does not affect any others.  In the case of a crashed process, the watchdog will start up another instance so the number of configured instances are always running.  Bottom line: instance = Docupresentment process. And now, finally, to the settings which gave me pause on an not-too-long-ago implementation!  
Docupresentment includes a feature that watches configuration files (such as docserv.xml and logconf.xml) and will automatically restart its instances to load the changes.  You can configure the time that Docupresentment waits to check these files using the setting <entry name=”FileWatchTimeMillis”>.  By default the number is 12000ms, or 12 seconds.  You can save yourself a few CPU cycles by extending this time, or by disabling  the check altogether by setting the value to 0.  This may or may not be appropriate for your environment; if you have 100% uptime requirements then you probably don’t want to bring down an entire set of processes just to accept a new configuration value, so it’s best to leave this somewhere between 12 seconds to a few minutes.  Another point to keep in mind: if you are using Documaker real-time processing under Docupresentment the Master Resource Library (MRL) files and INI options are cached, and if you need to affect a change, you’ll have to “restart” Docupresentment.  Touching the docserv.xml file is an easy way to do this (other methods including using the RSS request, but that’s another post). The next item up: <entry name=”FilePurgeTimeSeconds”>.  You may already know that the Docupresentment system can generate many temporary files based on certain request types that are processed through the system.  What you may not know is how those files are cleaned up.  There are many rules in Docupresentment that cause the creation of temporary files.  When these files are created, Docupresentment writes an entry into a properties file called the file cache.  This file contains the name, creation date, and expiration time of each temporary file created by each instance of Docupresentment.  Periodically Docupresentment will check the file cache to determine if there are files that are past the expiration time, not unlike that block of cheese festering away in the back of my refrigerator.  However, unlike my ‘fridge cleaning tendencies, Docupresentment is quick to remove files that are past their expiration time.  You, my friend, have the power to control how often Docupresentment inspects the file cache.  Simply set the value for <entry name=”FilePurgeTimeSeconds”> to the number of seconds appropriate for your requirements and you’re set.  Note that file purging happens on a separate thread from normal request processing, so this shouldn’t interfere with response times unless the CPU happens to be really taxed at the point of cache processing.  Finally, after all of this, we get to the final setting I’m going to address in this post: <entry name=”FilePurgeList”>.  The default is “filecache.properties”.  This establishes the root name for the Docupresentment file cache that I mentioned previously.  Docupresentment creates a separate cache file for each instance based on this setting.  If you have two instances, you’ll see two files created: filecache.properties.1 and filecache.properties.2.  Feel free to open these up and check them out. I hope you’ve enjoyed this first foray into the configuration file of Docupresentment.  If you did enjoy it, feel free to drop a comment, I welcome feedback.  If you have ideas for other posts you’d like to see, please do let me know.  You can reach me at [email protected]. ‘Til next time! ###
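
     For reference, the DocumentServer entries discussed above sit together in docserv.xml roughly as sketched below. The values and the exact entry syntax are illustrative only; check ids_book.pdf or your own installed file for the real defaults.

         <configuration>
           <section name="DocumentServer">
             <!-- executable and arguments used to launch each instance -->
             <entry name="StartCommand">...</entry>
             <entry name="StartArguments">-Dids.configuration=docserv.xml -Dlogging.configuration=logconf.xml -Djava.endorsed.dirs=lib/endorsed ...</entry>
             <!-- number of Docupresentment processes the watchdog keeps alive -->
             <entry name="Instances">2</entry>
             <!-- how often (in ms) config files are checked for changes; 0 disables the check -->
             <entry name="FileWatchTimeMillis">12000</entry>
             <!-- how often (in seconds) the file cache is scanned for expired temp files (illustrative value) -->
             <entry name="FilePurgeTimeSeconds">60</entry>
             <!-- root name of the per-instance file cache (filecache.properties.1, .2, ...) -->
             <entry name="FilePurgeList">filecache.properties</entry>
           </section>
         </configuration>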

  • Table Variables: an empirical approach.

    - by Phil Factor
    It isn’t entirely a pleasant experience to publish an article only to have it described on Twitter as ‘Horrible’, and to have it criticized on the MVP forum. When this happened to me in the aftermath of publishing my article on Temporary tables recently, I was taken aback, because these critics were experts whose views I respect. What was my crime? It was, I think, to suggest that, despite the obvious quirks, it was best to use Table Variables as a first choice, and to use local Temporary Tables if you hit problems due to these quirks, or if you were doing complex joins using a large number of rows. What are these quirks? Well, table variables have advantages if they are used sensibly, but this requires some awareness by the developer about the potential hazards and how to avoid them. You can be hit by a badly-performing join involving a table variable. Table Variables are a compromise, and this compromise doesn’t always work out well. Explicit indexes aren’t allowed on Table Variables, so one cannot use covering indexes or non-unique indexes. The query optimizer has to make assumptions about the data rather than using column distribution statistics when a table variable is involved in a join, because there aren’t any column-based distribution statistics on a table variable. It assumes a reasonably even distribution of data, and is likely to have little idea of the number of rows in the table variables that are involved in queries. However complex the heuristics that are used might be in determining the best way of executing a SQL query, and they most certainly are, the Query Optimizer is likely to fail occasionally with table variables, under certain circumstances, and produce a Query Execution Plan that is frightful. The experienced developer or DBA will be on the lookout for this sort of problem. In this blog, I’ll be expanding on some of the tests I used when writing my article to illustrate the quirks, and include a subsequent example supplied by Kevin Boles. A simplified example. We’ll start out by illustrating a simple example that shows some of these characteristics. We’ll create two tables filled with random numbers and then see how many matches we get between the two tables. We’ll forget indexes altogether for this example, and use heaps. We’ll try the same Join with two table variables, two table variables with OPTION (RECOMPILE) in the JOIN clause, and with two temporary tables. It is all a bit jerky because of the granularity of the timing that isn’t actually happening at the millisecond level (I used DATETIME). However, you’ll see that the table variable is outperforming the local temporary table up to 10,000 rows. Actually, even without a use of the OPTION (RECOMPILE) hint, it is doing well. What happens when your table size increases? The table variable is, from around 30,000 rows, locked into a very bad execution plan unless you use OPTION (RECOMPILE) to provide the Query Analyser with a decent estimation of the size of the table. However, if it has the OPTION (RECOMPILE), then it is smokin’. Well, up to 120,000 rows, at least. It is performing better than a Temporary table, and in a good linear fashion. What about mixed table joins, where you are joining a temporary table to a table variable? You’d probably expect that the query analyzer would throw up its hands and produce a bad execution plan as if it were a table variable. After all, it knows nothing about the statistics in one of the tables so how could it do any better? 
Well, it behaves as if it were doing a recompile. And an explicit recompile adds no value at all. (we just go up to 45000 rows since we know the bigger picture now)   Now, if you were new to this, you might be tempted to start drawing conclusions. Beware! We’re dealing with a very complex beast: the Query Optimizer. It can come up with surprises What if we change the query very slightly to insert the results into a Table Variable? We change nothing else and just measure the execution time of the statement as before. Suddenly, the table variable isn’t looking so much better, even taking into account the time involved in doing the table insert. OK, if you haven’t used OPTION (RECOMPILE) then you’re toast. Otherwise, there isn’t much in it between the Table variable and the temporary table. The table variable is faster up to 8000 rows and then not much in it up to 100,000 rows. Past the 8000 row mark, we’ve lost the advantage of the table variable’s speed. Any general rule you may be formulating has just gone for a walk. What we can conclude from this experiment is that if you join two table variables, and can’t use constraints, you’re going to need that Option (RECOMPILE) hint. Count Dracula and the Horror Join. These tables of integers provide a rather unreal example, so let’s try a rather different example, and get stuck into some implicit indexing, by using constraints. What unusual words are contained in the book ‘Dracula’ by Bram Stoker? Here we get a table of all the common words in the English language (60,387 of them) and put them in a table. We put them in a Table Variable with the word as a primary key, a Table Variable Heap and a Table Variable with a primary key. We then take all the distinct words used in the book ‘Dracula’ (7,558 of them). We then create a table variable and insert into it all those uncommon words that are in ‘Dracula’. i.e. all the words in Dracula that aren’t matched in the list of common words. To do this we use a left outer join, where the right-hand value is null. The results show a huge variation, between the sublime and the gorblimey. If both tables contain a Primary Key on the columns we join on, and both are Table Variables, it took 33 Ms. If one table contains a Primary Key, and the other is a heap, and both are Table Variables, it took 46 Ms. If both Table Variables use a unique constraint, then the query takes 36 Ms. If neither table contains a Primary Key and both are Table Variables, it took 116383 Ms. Yes, nearly two minutes!! If both tables contain a Primary Key, one is a Table Variables and the other is a temporary table, it took 113 Ms. If one table contains a Primary Key, and both are Temporary Tables, it took 56 Ms.If both tables are temporary tables and both have primary keys, it took 46 Ms. Here we see table variables which are joined on their primary key again enjoying a  slight performance advantage over temporary tables. Where both tables are table variables and both are heaps, the query suddenly takes nearly two minutes! So what if you have two heaps and you use option Recompile? If you take the rogue query and add the hint, then suddenly, the query drops its time down to 76 Ms. If you add unique indexes, then you've done even better, down to half that time. Here are the text execution plans.So where have we got to? Without drilling down into the minutiae of the execution plans we can begin to create a hypothesis. 
If you are using table variables, and your tables are relatively small, they are faster than temporary tables, but as the number of rows increases you need to do one of two things: either you need to have a primary key on the column you are using to join on, or else you need to use option (RECOMPILE) If you try to execute a query that is a join, and both tables are table variable heaps, you are asking for trouble, well- slow queries, unless you give the table hint once the number of rows has risen past a point (30,000 in our first example, but this varies considerably according to context). Kevin’s Skew In describing the table-size, I used the term ‘relatively small’. Kevin Boles produced an interesting case where a single-row table variable produces a very poor execution plan when joined to a very, very skewed table. In the original, pasted into my article as a comment, a column consisted of 100000 rows in which the key column was one number (1) . To this was added eight rows with sequential numbers up to 9. When this was joined to a single-tow Table Variable with a key of 2 it produced a bad plan. This problem is unlikely to occur in real usage, and the Query Optimiser team probably never set up a test for it. Actually, the skew can be slightly less extreme than Kevin made it. The following test showed that once the table had 54 sequential rows in the table, then it adopted exactly the same execution plan as for the temporary table and then all was well. Undeniably, real data does occasionally cause problems to the performance of joins in Table Variables due to the extreme skew of the distribution. We've all experienced Perfectly Poisonous Table Variables in real live data. As in Kevin’s example, indexes merely make matters worse, and the OPTION (RECOMPILE) trick does nothing to help. In this case, there is no option but to use a temporary table. However, one has to note that once the slight de-skew had taken place, then the plans were identical across a huge range. Conclusions Where you need to hold intermediate results as part of a process, Table Variables offer a good alternative to temporary tables when used wisely. They can perform faster than a temporary table when the number of rows is not great. For some processing with huge tables, they can perform well when only a clustered index is required, and when the nature of the processing makes an index seek very effective. Table Variables are scoped to the batch or procedure and are unlikely to hang about in the TempDB when they are no longer required. They require no explicit cleanup. Where the number of rows in the table is moderate, you can even use them in joins as ‘Heaps’, unindexed. Beware, however, since, as the number of rows increase, joins on Table Variable heaps can easily become saddled by very poor execution plans, and this must be cured either by adding constraints (UNIQUE or PRIMARY KEY) or by adding the OPTION (RECOMPILE) hint if this is impossible. Occasionally, the way that the data is distributed prevents the efficient use of Table Variables, and this will require using a temporary table instead. Tables Variables require some awareness by the developer about the potential hazards and how to avoid them. If you are not prepared to do any performance monitoring of your code or fine-tuning, and just want to pummel out stuff that ‘just runs’ without considering namby-pamby stuff such as indexes, then stick to Temporary tables. 
If you are likely to slosh about large numbers of rows in temporary tables without considering the niceties of processing just what is required and no more, then temporary tables provide a safer and less fragile means-to-an-end for you.
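
     To make the "join two table variable heaps" guidance concrete, here is a stripped-down sketch (table and column names invented for illustration):

         -- Heap table variables, i.e. no PRIMARY KEY or UNIQUE constraint.
         DECLARE @LeftSide  TABLE (KeyCol INT, Payload VARCHAR(50));
         DECLARE @RightSide TABLE (KeyCol INT, Payload VARCHAR(50));

         -- ... INSERT a realistic number of rows into both here ...

         -- Without a constraint on KeyCol, the join needs OPTION (RECOMPILE)
         -- so the optimizer sees the real row counts at execution time instead
         -- of assuming the table variables are tiny.
         SELECT l.KeyCol, l.Payload AS LeftPayload, r.Payload AS RightPayload
         FROM   @LeftSide  AS l
         JOIN   @RightSide AS r
           ON   r.KeyCol = l.KeyCol
         OPTION (RECOMPILE);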

  • Using SPServices & jQuery to Find My Stuff from Multi-Select Person/Group Field

    - by Mark Rackley
    Okay… quick blog post for all you SPServices fans out there. I needed to quickly write a script that would return all the tasks currently assigned to me.  I also wanted it to return any task that was assigned to a group I belong to. This can actually be done with a CAML query, so no big deal, right?  The rub is that the “assigned to” field is a multi-select person or group field. As far as I know (and I actually know so little) you cannot just write a CAML query to return this information. If you can, please leave a comment below and disregard the rest of this blog post… So… what’s a hacker to do? As always, I break things down to their most simple components (I really love the KISS principle and would get it tattooed on my back if people wouldn’t think it meant “Knights In Satan’s Service”. You really gotta be an old far to get that reference).  Here’s what we’re going to do: Get currently logged in user’s name as it is stored in a person field Find all the SharePoint groups the current user belongs to Retrieve a set of assigned tasks from the task list and then find those that are assigned to current  user or group current user belongs to Nothing too hairy… So let’s get started Some Caveats before I continue There are some obvious performance implications with this solution as I make a total of four SPServices calls and there’s a lot of looping going on. Also, the CAML query in this blog has NOT been optimized. If you move forward with this code, tweak it so that it returns a further subset of data or you will see horrible performance if you have a few hundred entries in your task list. Add a date range to the CAML or something. Find some way to limit the results as much as possible. Lastly, if you DO have a better solution, I would like you to share. Iron sharpens iron and all…   Alright, let’s really get started. Get currently logged in user’s name as it is stored in a person field First thing we need to do is understand how a person group looks when you look at the XML returned from a SharePoint Web Service call. It turns out it’s stored like any other multi select item in SharePoint which is <id>;#<value> and when you assign a person to that field the <value> equals the person’s name “Mark Rackley” in my case. This is for Windows Authentication, I would expect this to be different in FBA, but I’m not using FBA. If you want to know what it looks like with FBA you can use the code in this blog and strategically place an alert to see the value.  Anyway… I need to find the name of the user who is currently logged in as it is stored in the person field. This turns out to be one SPServices call: var userName = $().SPServices.SPGetCurrentUser({                     fieldName: "Title",                     debug: false                     }); As you can see, the “Title” field has the information we need. I suspect (although again, I haven’t tried) that the Title field also contains the user’s name as we need it if I was using FBA. Okay… last thing we need to do is store our users name in an array for processing later: myGroups = new Array(); myGroups.push(userName); Find all the SharePoint groups the current user belongs to Now for the groups. How are groups returned in that XML stream?  Same as the person <ID>;#<Group Name>, and if it’s a mutli select it’s all returned in one big long string “<ID>;#<Group Name>;#<ID>;#<Group Name>;#<ID>;#<Group Name>;#<ID>;#<Group Name>;#<ID>;#<Group Name>”.  So, how do we find all the groups the current user belongs to? 
This is also a simple SPServices call. Using the “GetGroupCollectionFromUser” operation we can find all the groups a user belongs to. So, let’s execute this method and store all our groups. $().SPServices({       operation: "GetGroupCollectionFromUser",       userLoginName: $().SPServices.SPGetCurrentUser(),       async: false,       completefunc: function(xData, Status) {          $(xData.responseXML).find("[nodeName=Group]").each(function() {                 myGroups.push($(this).attr("Name"));          });         }     }); So, all we did in the above code was execute the “GetGroupCollectionFromUser” operation and look for the each “Group” node (row) and store the name for each group in our array that we put the user’s name in previously (myGroups). Now we have an array that contains the current user’s name as it will appear in the person field XML and  all the groups the current user belongs to. The Rest Now comes the easy part for all of you familiar with SPServices. We are going to retrieve our tasks from the Task list using “GetListItems” and look at each entry to see if it belongs to this person. If it does belong to this person we are going to store it for later processing. That code looks something like this: // get list of assigned tasks that aren't closed... *modify the CAML to perform better!*             $().SPServices({                   operation: "GetListItems",                   async: false,                   listName: "Tasks",                   CAMLViewFields: "<ViewFields>" +                             "<FieldRef Name='AssignedTo' />" +                             "<FieldRef Name='Title' />" +                             "<FieldRef Name='StartDate' />" +                             "<FieldRef Name='EndDate' />" +                             "<FieldRef Name='Status' />" +                             "</ViewFields>",                   CAMLQuery: "<Query><Where><And><IsNotNull><FieldRef Name='AssignedTo'/></IsNotNull><Neq><FieldRef Name='Status'/><Value Type='Text'>Completed</Value></Neq></And></Where></Query>",                     completefunc: function (xData, Status) {                         var aDataSet = new Array();                        //loop through each returned Task                         $(xData.responseXML).find("[nodeName=z:row]").each(function() {                             //store the multi-select string of who task is assigned to                             var assignedToString = $(this).attr("ows_AssignedTo");                             found = false;                            //loop through the persons name and all the groups they belong to                             for(var i=0; i<myGroups.length; i++) {                                 //if the person's name or group exists in the assigned To string                                 //then the task is assigned to them                                 if (assignedToString.indexOf(myGroups[i]) >= 0){                                     found = true;                                     break;                                 }                             }                             //if the Task belongs to this person then store or display it                             //(I'm storing it in an array)                             if (found){                                 var thisName = $(this).attr("ows_Title");                                 var thisStartDate = $(this).attr("ows_StartDate");                                 var thisEndDate = $(this).attr("ows_EndDate");                              
   var thisStatus = $(this).attr("ows_Status");                                                                  var aDataRow=new Array(                                     thisName,                                     thisStartDate,                                     thisEndDate,                                     thisStatus);                                 aDataSet.push(aDataRow);                             }                          });                          SomeFunctionToDisplayData(aDataSet);                     }                 }); Some notes on why I did certain things and additional caveats. You will notice in my code that I’m doing an AssignedToString.indexOf(GroupName) to see if the task belongs to the person. This could possibly return bad results if you have SharePoint Group names that are named in such a way that the “IndexOf” returns a false positive.  For example if you have a Group called “My Users” and a group called “My Users – SuperUsers” then if a user belonged to “My Users” it would return a false positive on executing “My Users – SuperUsers”.IndexOf(“My Users”). Make sense? Just be aware of this when naming groups, we don’t have this problem. This is where also some fine-tuning can probably be done by those smarter than me. This is a pretty inefficient method to determine if a task belongs to a user, I mean what if a user belongs to 20 groups? That’s a LOT of looping.  See all the opportunities I give you guys to do something fun?? Also, why am I storing my values in an array instead of just writing them out to a Div? Well.. I want to pass my data to a jQuery library to format it all nice and pretty and an Array is a great way to do that. When all is said and done and we put all the code together it looks like:   $(document).ready(function() {         var userName = $().SPServices.SPGetCurrentUser({                     fieldName: "Title",                     debug: false                     });         myGroups = new Array();     myGroups.push(userName );       $().SPServices({       operation: "GetGroupCollectionFromUser",       userLoginName: $().SPServices.SPGetCurrentUser(),       async: false,       completefunc: function(xData, Status) {          $(xData.responseXML).find("[nodeName=Group]").each(function() {                 myGroups.push($(this).attr("Name"));          });                      // get list of assigned tasks that aren't closed... 
*modify this CAML to perform better!*             $().SPServices({                   operation: "GetListItems",                   async: false,                   listName: "Tasks",                   CAMLViewFields: "<ViewFields>" +                             "<FieldRef Name='AssignedTo' />" +                             "<FieldRef Name='Title' />" +                             "<FieldRef Name='StartDate' />" +                             "<FieldRef Name='EndDate' />" +                             "<FieldRef Name='Status' />" +                             "</ViewFields>",                   CAMLQuery: "<Query><Where><And><IsNotNull><FieldRef Name='AssignedTo'/></IsNotNull><Neq><FieldRef Name='Status'/><Value Type='Text'>Completed</Value></Neq></And></Where></Query>",                     completefunc: function (xData, Status) {                         var aDataSet = new Array();                         //loop through each returned Task                         $(xData.responseXML).find("[nodeName=z:row]").each(function() {                             //store the multi-select string of who task is assigned to                             var assignedToString = $(this).attr("ows_AssignedTo");                             found = false;                            //loop through the persons name and all the groups they belong to                             for(var i=0; i<myGroups.length; i++) {                                 //if the person's name or group exists in the assigned To string                                 //then the task is assigned to them                                 if (assignedToString.indexOf(myGroups[i]) >= 0){                                     found = true;                                     break;                                 }                             }                            //if the Task belongs to this person then store or display it                             //(I'm storing it in an array)                             if (found){                                 var thisName = $(this).attr("ows_Title");                                 var thisStartDate = $(this).attr("ows_StartDate");                                 var thisEndDate = $(this).attr("ows_EndDate");                                 var thisStatus = $(this).attr("ows_Status");                                                                  var aDataRow=new Array(                                     thisName,                                     thisStartDate,                                     thisEndDate,                                     thisStatus);                                 aDataSet.push(aDataRow);                             }                          });                          SomeFunctionToDisplayData(aDataSet);                     }                 });       }    });  }); Final Thoughts So, there you have it. Take it and run with it. Make it something cool (and tell me how you did it). Another possible way to improve performance in this scenario is to use a DVWP to display the tasks and use jQuery and the “myGroups” array from this blog post to hide all those rows that don’t belong to the current user. I haven’t tried it, but it does move some of the processing off to the server (generating the view) so it may perform better.  As always, thanks for stopping by… hope you have a Merry Christmas…

  • ASP.Net MVC Interview Questions and Answers

    - by Samir R. Bhogayta
    About ASP.Net MVC The ASP.Net MVC is the framework provided by Microsoft that lets you develop the applications that follows the principles of Model-View-Controller (MVC) design pattern. The .Net programmers new to MVC thinks that it is similar to WebForms Model (Normal ASP.Net), but it is far different from the WebForms programming.  This article will tell you how to quick learn the basics of MVC along with some frequently asked interview questions and answers on ASP.Net MVC 1. What is ASP.Net MVC The ASP.Net MVC is the framework provided by Microsoft to achieve     separation of concerns that leads to easy maintainability of the     application. Model is supposed to handle data related activity View deals with user interface related work Controller is meant for managing the application flow by communicating between Model and View. Normal 0 false false false EN-US X-NONE X-NONE 2. Why to use ASP.Net MVC The strength of MVC (i.e. ASP.Net MVC) listed below will answer this question MVC reduces the dependency between the components; this makes your code more testable. MVC does not recommend use of server controls, hence the processing time required to generate HTML response is drastically reduced. The integration of java script libraries like jQuery, Microsoft MVC becomes easy as compared to Webforms approach. 3. What do you mean by Razor The Razor is the new View engine introduced in MVC 3.0. The View engine is responsible for processing the view files [e.g. .aspx, .cshtml] in order to generate HTML response. The previous versions of MVC were dependent on ASPX view engine.  4. Can we use ASPX view engine in latest versions of MVC Yes. The Recommended way is to prefer Razor View 5. What are the benefits of Razor View?      The syntax for server side code is simplified      The length of code is drastically reduced      Razor syntax is easy to learn and reduces the complexity Normal 0 false false false EN-US X-NONE X-NONE /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0in 5.4pt 0in 5.4pt; mso-para-margin-top:0in; mso-para-margin-right:0in; mso-para-margin-bottom:10.0pt; mso-para-margin-left:0in; line-height:115%; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin;} 6. What is the extension of Razor View file? .cshtml (for c#) and .vbhtml (for vb) 7. How to create a Controller in MVC Normal 0 false false false EN-US X-NONE X-NONE /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0in 5.4pt 0in 5.4pt; mso-para-margin-top:0in; mso-para-margin-right:0in; mso-para-margin-bottom:10.0pt; mso-para-margin-left:0in; line-height:115%; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-fareast-font-family:"Times New Roman"; mso-fareast-theme-font:minor-fareast; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin;} Create a simple class and extend it from Controller class. 
The bare minimum requirement for a class to become a controller is to inherit it from ControllerBase is the class that is required to inherit to create the controller but Controller class inherits from ControllerBase. 8. How to create an Action method in MVC Add a simple method inside a controller class with ActionResult return type. 9. How to access a view on the server    The browser generates the request in which the information like Controller name, Action Name and Parameters are provided, when server receives this URL it resolves the Name of Controller and Action, after that it calls the specified action with provided parameters. Action normally does some processing and returns the ViewResult by specifying the view name (blank name searches according to naming conventions).   10. What is the default Form method (i.e. GET, POST) for an action method GET. To change this you can add an action level attribute e.g [HttpPost] Normal 0 false false false EN-US X-NONE X-NONE /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0in 5.4pt 0in 5.4pt; mso-para-margin-top:0in; mso-para-margin-right:0in; mso-para-margin-bottom:10.0pt; mso-para-margin-left:0in; line-height:115%; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-fareast-font-family:"Times New Roman"; mso-fareast-theme-font:minor-fareast; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin;} 11. What is a Filter in MVC? When user (browser) sends a request to server an action method of a controller gets invoked; sometimes you may require executing a custom code before or after action method gets invoked, this custom code is called as Filter. 12. What are the different types of Filters in MVC? a. Authorization filter b. Action filter c. Result filter d. Exception filter [Do not forget the order mentioned above as filters gets executed as per above mentioned sequence] 13. Explain the use of Filter with an example? Suppose you are working on a MVC application where URL is sent in an encrypted format instead of a plain text, once encrypted URL is received by server it will ignore action parameters because of URL encryption. To solve this issue you can create global action filter by overriding OnActionExecuting method of controller class, in this you can extract the action parameters from the encrypted URL and these parameters can be set on filterContext to send plain text parameters to the actions.     14. What is a HTML helper? A HTML helper is a method that returns string; return string usually is the HTML tag. The standard HTML helpers (e.g. Html.BeginForm(),Html.TextBox()) available in MVC are lightweight as it does not rely on event model or view state as that of in ASP.Net server controls.
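
     The controller, action, verb, and filter answers above translate into very little code. A minimal hypothetical sketch (class and action names invented for illustration):

         using System.Web.Mvc;

         // Q7/Q8: a controller is a class derived from Controller, and an action
         // is a public method on it that returns an ActionResult.
         public class ProductsController : Controller
         {
             public ActionResult Index()
             {
                 return View();              // renders Views/Products/Index.cshtml by convention
             }

             [HttpPost]                      // Q10: opt this action into POST instead of the default GET
             public ActionResult Create(string name)
             {
                 return RedirectToAction("Index");
             }
         }

         // Q11/Q12: an action filter runs custom code before/after an action executes.
         public class LogActionFilter : ActionFilterAttribute
         {
             public override void OnActionExecuting(ActionExecutingContext filterContext)
             {
                 // custom pre-processing here (logging, decrypting parameters, etc.)
                 base.OnActionExecuting(filterContext);
             }
         }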

    Read the article

  • CreationName for SSIS 2008 and adding components programmatically

    If you are building SSIS 2008 packages programmatically and adding data flow components, you will probably need to know the creation name of the component to add. I can never find a handy reference when I need one, hence this rather mundane post. See also CreationName for SSIS 2005. We start with a very simple snippet for adding a component:

    // Add the Data Flow Task
    package.Executables.Add("STOCK:PipelineTask");
    // Get the task host wrapper, and the Data Flow task
    TaskHost taskHost = package.Executables[0] as TaskHost;
    MainPipe dataFlowTask = (MainPipe)taskHost.InnerObject;
    // Add OLE-DB source component - ** This is where we need the creation name **
    IDTSComponentMetaData90 componentSource = dataFlowTask.ComponentMetaDataCollection.New();
    componentSource.Name = "OLEDBSource";
    componentSource.ComponentClassID = "DTSAdapter.OLEDBSource.2";

    So as you can see, the creation name for an OLE-DB Source is DTSAdapter.OLEDBSource.2.

    CreationName Reference
    ADO NET Destination - Microsoft.SqlServer.Dts.Pipeline.ADONETDestination, Microsoft.SqlServer.ADONETDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
    ADO NET Source - Microsoft.SqlServer.Dts.Pipeline.DataReaderSourceAdapter, Microsoft.SqlServer.ADONETSrc, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
    Aggregate - DTSTransform.Aggregate.2
    Audit - DTSTransform.Lineage.2
    Cache Transform - DTSTransform.Cache.1
    Character Map - DTSTransform.CharacterMap.2
    Checksum - Konesans.Dts.Pipeline.ChecksumTransform.ChecksumTransform, Konesans.Dts.Pipeline.ChecksumTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b
    Conditional Split - DTSTransform.ConditionalSplit.2
    Copy Column - DTSTransform.CopyMap.2
    Data Conversion - DTSTransform.DataConvert.2
    Data Mining Model Training - MSMDPP.PXPipelineProcessDM.2
    Data Mining Query - MSMDPP.PXPipelineDMQuery.2
    DataReader Destination - Microsoft.SqlServer.Dts.Pipeline.DataReaderDestinationAdapter, Microsoft.SqlServer.DataReaderDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
    Derived Column - DTSTransform.DerivedColumn.2
    Dimension Processing - MSMDPP.PXPipelineProcessDimension.2
    Excel Destination - DTSAdapter.ExcelDestination.2
    Excel Source - DTSAdapter.ExcelSource.2
    Export Column - TxFileExtractor.Extractor.2
    Flat File Destination - DTSAdapter.FlatFileDestination.2
    Flat File Source - DTSAdapter.FlatFileSource.2
    Fuzzy Grouping - DTSTransform.GroupDups.2
    Fuzzy Lookup - DTSTransform.BestMatch.2
    Import Column - TxFileInserter.Inserter.2
    Lookup - DTSTransform.Lookup.2
    Merge - DTSTransform.Merge.2
    Merge Join - DTSTransform.MergeJoin.2
    Multicast - DTSTransform.Multicast.2
    OLE DB Command - DTSTransform.OLEDBCommand.2
    OLE DB Destination - DTSAdapter.OLEDBDestination.2
    OLE DB Source - DTSAdapter.OLEDBSource.2
    Partition Processing - MSMDPP.PXPipelineProcessPartition.2
    Percentage Sampling - DTSTransform.PctSampling.2
    Performance Counters Source - DataCollectorTransform.TxPerfCounters.1
    Pivot - DTSTransform.Pivot.2
    Raw File Destination - DTSAdapter.RawDestination.2
    Raw File Source - DTSAdapter.RawSource.2
    Recordset Destination - DTSAdapter.RecordsetDestination.2
    RegexClean - Konesans.Dts.Pipeline.RegexClean.RegexClean, Konesans.Dts.Pipeline.RegexClean, Version=2.0.0.0, Culture=neutral, PublicKeyToken=d1abe77e8a21353e
    Row Count - DTSTransform.RowCount.2
    Row Count Plus - Konesans.Dts.Pipeline.RowCountPlusTransform.RowCountPlusTransform, Konesans.Dts.Pipeline.RowCountPlusTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b
    Row Number - Konesans.Dts.Pipeline.RowNumberTransform.RowNumberTransform, Konesans.Dts.Pipeline.RowNumberTransform, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b2ab4a111192992b
    Row Sampling - DTSTransform.RowSampling.2
    Script Component - Microsoft.SqlServer.Dts.Pipeline.ScriptComponentHost, Microsoft.SqlServer.TxScript, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
    Slowly Changing Dimension - DTSTransform.SCD.2
    Sort - DTSTransform.Sort.2
    SQL Server Compact Destination - Microsoft.SqlServer.Dts.Pipeline.SqlCEDestinationAdapter, Microsoft.SqlServer.SqlCEDest, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91
    SQL Server Destination - DTSAdapter.SQLServerDestination.2
    Term Extraction - DTSTransform.TermExtraction.2
    Term Lookup - DTSTransform.TermLookup.2
    Trash Destination - Konesans.Dts.Pipeline.TrashDestination.Trash, Konesans.Dts.Pipeline.TrashDestination, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b8351fe7752642cc
    TxTopQueries - DataCollectorTransform.TxTopQueries.1
    Union All - DTSTransform.UnionAll.2
    Unpivot - DTSTransform.UnPivot.2
    XML Source - Microsoft.SqlServer.Dts.Pipeline.XmlSourceAdapter, Microsoft.SqlServer.XmlSrc, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91

    Here is a simple console program that can be used to enumerate the pipeline components installed on your machine, and dumps out a list of all components like that above. You will need to add a reference to the Microsoft.SQLServer.ManagedDTS assembly.

    using System;
    using System.Diagnostics;
    using Microsoft.SqlServer.Dts.Runtime;

    public class Program
    {
        static void Main(string[] args)
        {
            Application application = new Application();
            PipelineComponentInfos componentInfos = application.PipelineComponentInfos;
            foreach (PipelineComponentInfo componentInfo in componentInfos)
            {
                Debug.WriteLine(componentInfo.Name + "\t" + componentInfo.CreationName);
            }
            Console.Read();
        }
    }

    Read the article

  • Sun Fire X4270 M3 SAP Enhancement Package 4 for SAP ERP 6.0 (Unicode) Two-Tier Standard Sales and Distribution (SD) Benchmark

    - by Brian
    Oracle's Sun Fire X4270 M3 server achieved 8,320 SAP SD Benchmark users running SAP enhancement package 4 for SAP ERP 6.0 with unicode software using Oracle Database 11g and Oracle Solaris 10. The Sun Fire X4270 M3 server using Oracle Database 11g and Oracle Solaris 10 beat both IBM Flex System x240 and IBM System x3650 M4 server running DB2 9.7 and Windows Server 2008 R2 Enterprise Edition. The Sun Fire X4270 M3 server running Oracle Database 11g and Oracle Solaris 10 beat the HP ProLiant BL460c Gen8 server using SQL Server 2008 and Windows Server 2008 R2 Enterprise Edition by 6%. The Sun Fire X4270 M3 server using Oracle Database 11g and Oracle Solaris 10 beat Cisco UCS C240 M3 server running SQL Server 2008 and Windows Server 2008 R2 Datacenter Edition by 9%. The Sun Fire X4270 M3 server running Oracle Database 11g and Oracle Solaris 10 beat the Fujitsu PRIMERGY RX300 S7 server using SQL Server 2008 and Windows Server 2008 R2 Enterprise Edition by 10%.

    Performance Landscape
    SAP-SD 2-Tier Performance Table (in decreasing performance order). SAP ERP 6.0 Enhancement Pack 4 (Unicode) results (benchmark version from January 2009 to April 2012). All systems: 2 x Intel Xeon E5-2690 @ 2.90 GHz, 128 GB memory; SAP ERP/ECC Release 2009 6.0 EP4 (Unicode).

    System | OS | Database | Users | SAPS | SAPS/Proc | Date
    Sun Fire X4270 M3 | Oracle Solaris 10 | Oracle Database 11g | 8,320 | 45,570 | 22,785 | 10-Apr-12
    IBM Flex System x240 | Windows Server 2008 R2 EE | DB2 9.7 | 7,960 | 43,520 | 21,760 | 11-Apr-12
    HP ProLiant BL460c Gen8 | Windows Server 2008 R2 EE | SQL Server 2008 | 7,865 | 42,920 | 21,460 | 29-Mar-12
    IBM System x3650 M4 | Windows Server 2008 R2 EE | DB2 9.7 | 7,855 | 42,880 | 21,440 | 06-Mar-12
    Cisco UCS C240 M3 | Windows Server 2008 R2 DE | SQL Server 2008 | 7,635 | 41,800 | 20,900 | 06-Mar-12
    Fujitsu PRIMERGY RX300 S7 | Windows Server 2008 R2 EE | SQL Server 2008 | 7,570 | 41,320 | 20,660 | 06-Mar-12

    Complete benchmark results may be found at the SAP benchmark website http://www.sap.com/benchmark.

    Configuration and Results Summary
    Hardware Configuration: Sun Fire X4270 M3, 2 x 2.90 GHz Intel Xeon E5-2690 processors, 128 GB memory, Sun StorageTek 6540 with 4 * 16 * 300GB 15Krpm 4Gb FC-AL
    Software Configuration: Oracle Solaris 10, Oracle Database 11g, SAP enhancement package 4 for SAP ERP 6.0 (Unicode)
    Certified Results (published by SAP): Number of benchmark users: 8,320; Average dialog response time: 0.95 seconds; Throughput: Fully processed order line: 911,330; Dialog steps/hour: 2,734,000; SAPS: 45,570; SAP Certification: 2012014

    Benchmark Description
    The SAP Standard Application SD (Sales and Distribution) Benchmark is a two-tier ERP business test that is indicative of full business workloads of complete order processing and invoice processing, and demonstrates the ability to run both the application and database software on a single system. The SAP Standard Application SD Benchmark represents the critical tasks performed in real-world ERP business environments. SAP is one of the premier world-wide ERP application providers, and maintains a suite of benchmark tests to demonstrate the performance of competitive systems on the various SAP products.
See Also SAP Benchmark Website Sun Fire X4270 M3 Server oracle.com OTN Oracle Solaris oracle.com OTN Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN Disclosure Statement Two-tier SAP Sales and Distribution (SD) standard SAP SD benchmark based on SAP enhancement package 4 for SAP ERP 6.0 (Unicode) application benchmark as of 04/11/12: Sun Fire X4270 M3 (2 processors, 16 cores, 32 threads) 8,320 SAP SD Users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, Oracle 11g, Solaris 10, Cert# 2012014. IBM Flex System x240 (2 processors, 16 cores, 32 threads) 7,960 SAP SD Users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, DB2 9.7, Windows Server 2008 R2 EE, Cert# 2012016. IBM System x3650 M4 (2 processors, 16 cores, 32 threads) 7,855 SAP SD Users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, DB2 9.7, Windows Server 2008 R2 EE, Cert# 2012010. Cisco UCS C240 M3 (2 processors, 16 cores, 32 threads) 7,635 SAP SD Users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, SQL Server 2008, Windows Server 2008 R2 DE, Cert# 2012011. Fujitsu PRIMERGY RX300 S7 (2 processors, 16 cores, 32 threads) 7,570 SAP SD Users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, SQL Server 2008, Windows Server 2008 R2 EE, Cert# 2012008. HP ProLiant DL380p Gen8 (2 processors, 16 cores, 32 threads) 7,865 SAP SD Users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, SQL Server 2008, Windows Server 2008 R2 EE, Cert# 2012012. SAP, R/3, reg TM of SAP AG in Germany and other countries. More info www.sap.com/benchmark

    Read the article

  • Entity Framework version 1- Brief Synopsis and Tips &ndash; Part 1

    - by Rohit Gupta
    To Do Eager loading use Projections (for e.g. from c in context.Contacts select c, c.Addresses)  or use Include Query Builder Methods (Include(“Addresses”)) If there is multi-level hierarchical Data then to eager load all the relationships use Include Query Builder methods like customers.Include("Order.OrderDetail") to include Order and OrderDetail collections or use customers.Include("Order.OrderDetail.Location") to include all Order, OrderDetail and location collections with a single include statement =========================================================================== If the query uses Joins then Include() Query Builder method will be ignored, use Nested Queries instead If the query does projections then Include() Query Builder method will be ignored Use Address.ContactReference.Load() OR Contact.Addresses.Load() if you need to Deferred Load Specific Entity – This will result in extra round trips to the database ObjectQuery<> cannot return anonymous types... it will return a ObjectQuery<DBDataRecord> Only Include method can be added to Linq Query Methods Any Linq Query method can be added to Query Builder methods. If you need to append a Query Builder Method (other than Include) after a LINQ method  then cast the IQueryable<Contact> to ObjectQuery<Contact> and then append the Query Builder method to it =========================================================================== Query Builder methods are Select, Where, Include Methods which use Entity SQL as parameters e.g. "it.StartDate, it.EndDate" When Query Builder methods do projection then they return ObjectQuery<DBDataRecord>, thus to iterate over this collection use contact.Item[“Name”].ToString() When Linq To Entities methods do projection, they return collection of anonymous types --- thus the collection is strongly typed and supports Intellisense EF Object Context can track changes only on Entities, not on Anonymous types. If you use a Defining Query for a EntitySet then the EntitySet becomes readonly since a Defining Query is the same as a View (which is treated as a ReadOnly by default). However if you want to use this EntitySet for insert/update/deletes then we need to map stored procs (as created in the DB) to the insert/update/delete functions of the Entity in the Designer You can use either Execute method or ToList() method to bind data to datasources/bindingsources If you use the Execute Method then remember that you can traverse through the ObjectResult<> collection (returned by Execute) only ONCE. In WPF use ObservableCollection to bind to data sources , for keeping track of changes and letting EF send updates to the DB automatically. Use Extension Methods to add logic to Entities. For e.g. create extension methods for the EntityObject class. Create a method in ObjectContext Partial class and pass the entity as a parameter, then call this method as desired from within each entity. ================================================================ DefiningQueries and Stored Procedures: For Custom Entities, one can use DefiningQuery or Stored Procedures. Thus the Custom Entity Collection will be populated using the DefiningQuery (of the EntitySet) or the Sproc. If you use Sproc to populate the EntityCollection then the query execution is immediate and this execution happens on the Server side and any filters applied will be applied in the Client App. 
If we use a DefiningQuery then these queries are composable, meaning the filters (if applied to the entity set) will all be sent together as a single query to the DB, returning only filtered results. If the sproc returns results that cannot be mapped to an existing entity, then we first create the Entity/EntitySet in the CSDL using the Designer, then create a dummy Entity/EntitySet using XML in the SSDL. When creating an EntitySet in the SSDL for this dummy entity, use a TSQL statement that does not return any results, but does return the relevant columns, e.g. select ContactID, FirstName, LastName from dbo.Contact where 1=2. Also ensure that the Entity created in the SSDL uses the SQL data types and not .NET data types. If you are unable to open the EDMX file in the designer then note the errors... they will give precise info on what is wrong. The third option is to simply create a native query in the SSDL using <Function Name="PaymentsforContact" IsComposable="false"> <CommandText>SELECT ActivityId, Activity AS ActivityName, ImagePath, Category FROM dbo.Activities</CommandText></Function> and then map this Function to an existing Entity. This is a quick way to get a custom Entity which is a regular Entity with renamed columns or additional (computed) columns. The disadvantage to using this is that it will return all the rows from the defining query and any filter (if defined) will be applied only at the client side (after getting all the rows from the DB). If you use a DefiningQuery instead then we can use that as a composable query. The fourth option (for mapping READ stored proc results to a non-existent Entity) is to create a View in the database which returns all the fields that the sproc also returns, then update the Model so that the model contains this View as an Entity, and then map the Read sproc to this View Entity. The other option would be to simply create the View and remove the sproc altogether. ================================================================ To execute a sproc that does not return an entity, use an EntityCommand to execute that proc. You cannot call a sproc FunctionImport that does not return Entities from code; the only way is to use SSDL function calls using EntityCommand. This changes with Entity Framework Version 4, where you can return Scalar Types, Complex Types, Entities or NonQuery. ================================================================ When a UDF is created as a Function in the SSDL, we need to set the Name & IsComposable properties for the Function element. IsComposable is always false for sprocs; for UDFs set this to true. You cannot call a UDF "Function" from within code since you cannot import a UDF Function into the CSDL Model (with Version 1 of EF); only stored procedures can be imported and then mapped to an entity. ================================================================ Entity Framework requires properties that are involved in association mappings to be mapped in all of the function mappings for the entity (Insert, Update and Delete). Because Payment has an association to Reservation, we need to pass both the paymentId and reservationId to the Delete sproc even though just the paymentId is the PK on the Payment table. ================================================================ When mapping insert, update and delete procs to an Entity, ensure that either all three or none are mapped.
Further, if you have a base class and a derived class in the CSDL, then you must map the (insert, update, delete) sprocs to all parent and child entities in the inheritance relationship. Note that this limitation (that base and derived entity methods must all be mapped) does not apply when you are mapping Read stored procedures.... ================================================================ You can write stored procedure SQL directly into the SSDL by creating a Function element in the SSDL; once created, you can map this Function to a CSDL Entity directly in the designer during Function Import. ================================================================ You can do Entity Splitting such that one Entity maps to multiple tables in the DB. For example, the Customer Entity currently derives from the Contact Entity... in addition it also references the ContactPersonalInfo Entity. One can copy all properties from the ContactPersonalInfo Entity into the Customer Entity and then delete the ContactPersonalInfo entity; finally one needs to map the copied properties to the ContactPersonalInfo table in Table Mapping (by adding another table (ContactPersonalInfo) to the Table Mapping)... this is called Entity Splitting. Thus now when you insert a Customer record, it will automatically create SQL to insert records into the Contact, Customers and ContactPersonalInfo tables even though you have a single Entity called Customer in the CSDL. =================================================================== There is Table per Type Inheritance, where one EDM Entity can derive from another EDM entity and absorb the inherited entity's properties; for example, in the Break Away Geek Adventures EDM, the Customer entity derives (inherits) from the Contact Entity and absorbs all the properties of the Contact entity. Thus when you create a Customer Entity in code and then call context.SaveChanges, the Object Context will first create the TSQL to insert into the Contact table followed by a TSQL to insert into the Customer table. =================================================================== Then there is Table per Hierarchy Inheritance..... where different types are created based on a condition (similar to applying a condition to filter an Entity to contain filtered records)... the difference being that the filter condition populates a new Entity Type (derived from the base Entity). In the BreakAway sample the example is the Lodging Entity, which is an Abstract Entity, and then the Resort and NonResort Entities derive from the Lodging Entity, with records filtered based on the value of the Resort Boolean field. =================================================================== Then there is Table per Concrete Type Hierarchy, where we create a concrete Entity for each table in the database. In the BreakAway sample there is an entity for the Reservation table and another Entity for the OldReservation table even though both tables contain the same number of fields.
The OldReservation Entity can then inherit from the Reservation Entity, and the OldReservation Entity can be configured to remove all Scalar Properties from the Entity (since it inherits the properties from Reservation and filters based on the ReservationDate field). =================================================================== Complex Types (Complex Properties) Entities in EF can also contain Complex Properties (in addition to Scalar Properties), and these Complex Properties reference a ComplexType (not an EntityType). DropDownList, ListBox, RadioButtonList, CheckBoxList and BulletedList are examples of list server controls (not data-bound controls); these controls cannot use Complex properties during databinding, they need Scalar Properties. So if an Entity contains Complex properties and you need to bind those to list server controls, use projections to return Scalar properties and bind them to the control (the disadvantage is that projected collections are not tracked by the Object Context, and hence changes to the projected collections bound to controls cannot be persisted). ObjectDataSource and EntityDataSource do account for Complex properties, and one can bind entities with Complex Properties to data source controls and they will be tracked for changes... with no additional plumbing needed to persist changes to these collections bound to controls. So data-bound controls like GridView and FormView need to use EntityDataSource or ObjectDataSource as a datasource for entities that contain Complex properties, so that changes made to the datasource through the GridView can be persisted to the DB (enabling the controls for updates)... if you cannot use the EntityDataSource you need to flatten the ComplexType Properties using projections. With EF Version 4, ComplexTypes are supported by the Designer and you can add/remove/compose Complex Types directly using the Designer. =================================================================== Conditional Mapping ... is like Table per Hierarchy Inheritance, where Entities inherit from a base class and conditions are then used to populate the EntitySet (called conditional mapping). Conditional Mapping has limitations since you can only use =, IS NULL and IS NOT NULL conditions to do conditional mapping. If you need more operators for filtering/mapping conditionally then use a QueryView (or possibly a Defining Query) to create a readonly entity. QueryViews are readonly by default... the EntitySet created by the QueryView is enabled for change tracking by the ObjectContext; however, the ObjectContext cannot create insert/update/delete TSQL statements for these Entities when SaveChanges is called, since it is a QueryView. One way to get around this limitation is to map stored procedures for the insert/update/delete operations in the Designer. =================================================================== Difference between QueryView and Defining Query: QueryView is defined in the (MSL) Mapping file/section of the EDM XML, whereas the DefiningQuery is defined in the store schema (SSDL). QueryView is written using Entity SQL and is thus database agnostic and can be used against any database/data layer. DefiningQuery is written using the database language, i.e. TSQL or PSQL, thus you have more control. =================================================================== Performance: Lazy loading is deferred loading done automatically. Lazy loading is supported with EF version 4 and is on by default.
If you need to turn it off then use context.ContextOptions.LazyLoadingEnabled = false. To improve performance, consider pre-compiling the ObjectQuery using the CompiledQuery.Compile method.
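    As a rough sketch of that last tip, a compiled query is built once and then reused on every call. This assumes an ObjectContext named BAEntities with a Contacts set exposing a LastName property; those names are taken as an illustration of the BreakAway-style model mentioned above, not from the article itself.

    using System;
    using System.Data.Objects;   // ObjectContext, CompiledQuery
    using System.Linq;

    public static class ContactQueries
    {
        // Compile the LINQ-to-Entities query once; EF caches the translation
        // and skips re-processing the expression tree on every invocation.
        public static readonly Func<BAEntities, string, IQueryable<Contact>> ByLastName =
            CompiledQuery.Compile((BAEntities context, string lastName) =>
                context.Contacts.Where(c => c.LastName == lastName));
    }

    // Usage: pass the live context and the parameters each time.
    // using (var context = new BAEntities())
    // {
    //     foreach (var contact in ContactQueries.ByLastName(context, "Smith"))
    //     {
    //         Console.WriteLine(contact.FirstName);
    //     }
    // }

    The first invocation pays the compilation cost; subsequent calls reuse the cached translation, which is where the performance gain comes from.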

    Read the article

  • MS Expression Web 4 SuperPreview – Big Disappointment

    - by smehaffie
    I just downloaded Expression 4 and expected to see some improvements in the Web4 SuperPreview application. The one main function I was expecting in this release is the ability to enter data and click on links so pages of the sites could be assessed. There are many use cases where this functionality is needed, and quite a few people were vocal about it when MS first released the application. 1) Where you have to log in to a site to access either all the content or some of the content on the site. 2) Where you have to enter data in a certain order and cannot go to the next page until the previous page's data is filled out (payment process, storefront, etc.). 3) Where you just want to make sure things are displayed correctly based on data entered (validation messages, etc.). 4) Where you need to make sure the links go to the right page in all the different browsers. I have seen scenarios where links worked fine in all but one browser, or for some reason the text showed on screen but it was not a clickable link. IMO this application is a great idea, but until MS fixes the above issues and adds this functionality, SuperPreview is totally worthless unless you need it to test a totally static site that does not require any user input at all to get access to the content. There is no reason this feature should not have been in this release, and it should have been a priority to make sure it was. Let me know how you feel about the new version of the Web4 SuperPreview application. Did MS really miss the target on this by not adding this functionality, or am I making it a bigger deal than it really is? If you are actively using SuperPreview, please post how you are using it and the type of sites you are using it on.

    Read the article

  • Freelancing - Share the source code?

    - by Tec
    I have developed a couple of form-based Windows applications in vb.net for a client; they all work well and he paid me through a freelance site. I handed over the executable and the setup to the client and all was well. Now the client wants the source code for the application. Is there a general practice for sharing the source code with the client? Please note - the client never mentioned he needed the source code, and he is now asking for it a week after the app was completed and he made the payment. I don't mind sharing the source code, but I am not sure if I should. This probably means the client would not hire me again, and the bigger question is: is the source code really his property? This question may have been asked a few times, but I still cannot draw a conclusion on what is right. Update: To answer some of the questions: The source code was not mentioned at all. There was no exclusive contract signed except for the usual agreement of the freelance site. I am not sure if software development comes under work for hire, or whether that is valid for users outside of the US. The reason for not sharing the source code is that this was a very small project and I got paid for a mere few hours. So if I have the option then I would definitely want to keep the source code to myself, as that gives a possibility of the client coming back. The application works flawlessly and the code is solid. Also, the task that the client wanted to achieve was very challenging and I would not like other programmers (competitors) to know how I achieved it. So unless I get confirmation that the source code is purely the property of the client, I would not be willing to share it.

    Read the article

  • Big Data – Buzz Words: What is MapReduce – Day 7 of 21

    - by Pinal Dave
    In yesterday's blog post we learned what Hadoop is. In this article we will take a quick look at one of the four most important buzz words that go around Big Data: MapReduce. What is MapReduce? MapReduce was designed by Google as a programming model for processing large data sets with a parallel, distributed algorithm on a cluster. Though MapReduce was originally Google proprietary technology, it has become quite a generalized term in recent times. MapReduce comprises Map() and Reduce() procedures. The Map() procedure performs filtering and sorting operations on data, whereas the Reduce() procedure performs a summary operation on the data. This model is based on modified concepts of the map and reduce functions commonly available in functional programming. Libraries providing the Map() and Reduce() procedures have been written in many different languages. The most popular free implementation of MapReduce is Apache Hadoop, which we will explore tomorrow. Advantages of MapReduce Procedures The MapReduce Framework usually consists of distributed servers and runs various tasks in parallel. Various components manage the communication between the data nodes and provide high availability and fault tolerance. Programs written in the MapReduce functional style are automatically parallelized and executed on commodity machines. The MapReduce Framework takes care of the details of partitioning the data and executing the processes on distributed servers at run time. If there is any failure during this process, the framework provides high availability and the other available nodes take over the responsibility of the failed node. As you can clearly see, the MapReduce Framework provides much more than just the Map() and Reduce() procedures; it provides scalability and fault tolerance as well. A typical implementation of the MapReduce Framework processes many petabytes of data across thousands of processing machines. How does the MapReduce Framework work? A typical MapReduce deployment contains petabytes of data and thousands of nodes. Here is a basic explanation of the MapReduce procedures that make use of this massive pool of commodity servers. Map() Procedure There is always a master node in this infrastructure which takes the input. Right after taking the input, the master node divides it into smaller sub-inputs, or sub-problems, which are distributed to worker nodes. A worker node processes them and does the necessary analysis. Once a worker node completes processing its sub-problem, it returns the result to the master node. Reduce() Procedure All the worker nodes return the answers to the sub-problems assigned to them to the master node. The master node collects these answers and aggregates them into the answer to the original big problem that was assigned to the master node. The MapReduce Framework runs the above Map() and Reduce() procedures in parallel and independently of each other. All the Map() procedures can run in parallel, and once each worker node has completed its task it sends the result back to the master node to be compiled into a single answer. This approach can be very effective when it is applied to a very large amount of data (Big Data).
The MapReduce Framework has five different steps: (1) preparing the Map() input, (2) executing the user-provided Map() code, (3) shuffling the Map output to the Reduce processors, (4) executing the user-provided Reduce code, and (5) producing the final output. Here is the dataflow of the MapReduce Framework: Input Reader -> Map Function -> Partition Function -> Compare Function -> Reduce Function -> Output Writer. In a future blog post of this 21 day series we will explore the various components of MapReduce in detail. MapReduce in a Single Statement MapReduce is equivalent to the SELECT and GROUP BY of a relational database, applied to a very large database. Tomorrow In tomorrow's blog post we will discuss the buzz word HDFS. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
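    To make the Map()/Reduce() split concrete, here is a minimal single-process word-count sketch in C#. It only imitates the model on one machine (the real framework distributes the map, shuffle and reduce work across many nodes), and every name in it is invented for this illustration.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class WordCountSketch
    {
        // Map: emit a (word, 1) pair for every word in one input line.
        static IEnumerable<KeyValuePair<string, int>> Map(string line)
        {
            foreach (var word in line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries))
            {
                yield return new KeyValuePair<string, int>(word.ToLowerInvariant(), 1);
            }
        }

        // Reduce: summarize all counts emitted for a single word.
        static int Reduce(string word, IEnumerable<int> counts)
        {
            return counts.Sum();
        }

        public static void Main()
        {
            string[] inputLines = { "big data is big", "map reduce handles big data" };

            // Shuffle: group the mapped pairs by key, then hand each group to Reduce.
            var result = inputLines
                .SelectMany(Map)
                .GroupBy(pair => pair.Key, pair => pair.Value)
                .Select(group => new { Word = group.Key, Count = Reduce(group.Key, group) });

            foreach (var entry in result)
            {
                Console.WriteLine(entry.Word + ": " + entry.Count);
            }
        }
    }

    In the real framework the SelectMany, GroupBy and Select stages correspond to the Map, shuffle and Reduce phases running in parallel on the worker nodes.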

    Read the article

  • AdSense Mobile Interface – I’m Loving It!

    - by Gopinath
    I love checking AdSense earnings every day on my mobile. All these days my mobile browser, Opera, rendered the heavy desktop version of the AdSense interface and it was tough to navigate around and see the earnings. To solve this problem for me as well as millions of other AdSense users, Google yesterday released a mobile version of the AdSense user interface that works on almost all the mobile platforms – iOS, Android, Windows Phone 7, Symbian and many others. If you have opted for the new beta user interface of AdSense, you will be presented with the mobile version when you visit https://www.google.com/adsense on your mobile. Here is a screen grab of how it looks on iPhone and Android devices. It looks similar on my Nokia mobile too. The AdSense interface for mobile is very nice – on the home page I can quickly have a look at today's earnings, the recent payment amount, last month's finalized amount and the total unpaid balance. The quick reports option available at the bottom of the screen lets me access a graphical view of useful earnings reports like Last 7 days, Last 30 days, This Month and Last Month. You can also create your own reports and save them to this list for quick viewing. To view the graphical reports, you don't need Flash on your mobile. For more details check out the official post on the Google AdSense blog. This article, titled AdSense Mobile Interface – I'm Loving It!, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • Master Note for Generic Data Warehousing

    - by lajos.varady(at)oracle.com
    ++++++++++++++++++++++++++++++++++++++++++++++++++++ The complete and the most recent version of this article can be viewed from My Oracle Support Knowledge Section. Master Note for Generic Data Warehousing [ID 1269175.1] ++++++++++++++++++++++++++++++++++++++++++++++++++++In this Document   Purpose   Master Note for Generic Data Warehousing      Components covered      Oracle Database Data Warehousing specific documents for recent versions      Technology Network Product Homes      Master Notes available in My Oracle Support      White Papers      Technical Presentations Platforms: 1-914CU; This document is being delivered to you via Oracle Support's Rapid Visibility (RaV) process and therefore has not been subject to an independent technical review. Applies to: Oracle Server - Enterprise Edition - Version: 9.2.0.1 to 11.2.0.2 - Release: 9.2 to 11.2Information in this document applies to any platform. Purpose Provide navigation path Master Note for Generic Data Warehousing Components covered Read Only Materialized ViewsQuery RewriteDatabase Object PartitioningParallel Execution and Parallel QueryDatabase CompressionTransportable TablespacesOracle Online Analytical Processing (OLAP)Oracle Data MiningOracle Database Data Warehousing specific documents for recent versions 11g Release 2 (11.2)11g Release 1 (11.1)10g Release 2 (10.2)10g Release 1 (10.1)9i Release 2 (9.2)9i Release 1 (9.0)Technology Network Product HomesOracle Partitioning Advanced CompressionOracle Data MiningOracle OLAPMaster Notes available in My Oracle SupportThese technical articles have been written by Oracle Support Engineers to provide proactive and top level information and knowledge about the components of thedatabase we handle under the "Database Datawarehousing".Note 1166564.1 Master Note: Transportable Tablespaces (TTS) -- Common Questions and IssuesNote 1087507.1 Master Note for MVIEW 'ORA-' error diagnosis. 
For Materialized View CREATE or REFRESHNote 1102801.1 Master Note: How to Get a 10046 trace for a Parallel QueryNote 1097154.1 Master Note Parallel Execution Wait Events Note 1107593.1 Master Note for the Oracle OLAP OptionNote 1087643.1 Master Note for Oracle Data MiningNote 1215173.1 Master Note for Query RewriteNote 1223705.1 Master Note for OLTP Compression Note 1269175.1 Master Note for Generic Data WarehousingWhite Papers Transportable Tablespaces white papers Database Upgrade Using Transportable Tablespaces:Oracle Database 11g Release 1 (February 2009) Platform Migration Using Transportable Database Oracle Database 11g and 10g Release 2 (August 2008) Database Upgrade using Transportable Tablespaces: Oracle Database 10g Release 2 (April 2007) Platform Migration using Transportable Tablespaces: Oracle Database 10g Release 2 (April 2007)Parallel Execution and Parallel Query white papers Best Practices for Workload Management of a Data Warehouse on the Sun Oracle Database Machine (June 2010) Effective resource utilization by In-Memory Parallel Execution in Oracle Real Application Clusters 11g Release 2 (Feb 2010) Parallel Execution Fundamentals in Oracle Database 11g Release 2 (November 2009) Parallel Execution with Oracle Database 10g Release 2 (June 2005)Oracle Data Mining white paper Oracle Data Mining 11g Release 2 (March 2010)Partitioning white papers Partitioning with Oracle Database 11g Release 2 (September 2009) Partitioning in Oracle Database 11g (June 2007)Materialized Views and Query Rewrite white papers Oracle Materialized Views  and Query Rewrite (May 2005) Improving Performance using Query Rewrite in Oracle Database 10g (December 2003)Database Compression white papers Advanced Compression with Oracle Database 11g Release 2 (September 2009) Table Compression in Oracle Database 10g Release 2 (May 2005)Oracle OLAP white papers On-line Analytic Processing with Oracle Database 11g Release 2 (September 2009) Using Oracle Business Intelligence Enterprise Edition with the OLAP Option to Oracle Database 11g (July 2008)Generic white papers Enabling Pervasive BI through a Practical Data Warehouse Reference Architecture (February 2010) Optimizing and Protecting Storage with Oracle Database 11g Release 2 (November 2009) Oracle Database 11g for Data Warehousing and Business Intelligence (August 2009) Best practices for a Data Warehouse on Oracle Database 11g (September 2008)Technical PresentationsA selection of ObE - Oracle by Examples documents: Generic Using Basic Database Functionality for Data Warehousing (10g) Partitioning Manipulating Partitions in Oracle Database (11g Release 1) Using High-Speed Data Loading and Rolling Window Operations with Partitioning (11g Release 1) Using Partitioned Outer Join to Fill Gaps in Sparse Data (10g) Materialized View and Query Rewrite Using Materialized Views and Query Rewrite Capabilities (10g) Using the SQLAccess Advisor to Recommend Materialized Views and Indexes (10g) Oracle OLAP Using Microsoft Excel With Oracle 11g Cubes (how to analyze data in Oracle OLAP Cubes using Excel's native capabilities) Using Oracle OLAP 11g With Oracle BI Enterprise Edition (Creating OBIEE Metadata for OLAP 11g Cubes and querying those in BI Answers) Building OLAP 11g Cubes Querying OLAP 11g Cubes Creating Interactive APEX Reports Over OLAP 11g CubesSelection of presentations from the BIWA website:Extreme Data Warehousing With Exadata  by Hermann Baer (July 2010) (slides 2.5MB, recording 54MB)Data Mining Made Easy! 
Introducing Oracle Data Miner 11g Release 2 New "Work flow" GUI   by Charlie Berger (May 2010) (slides 4.8MB, recording 85MB )Best Practices for Deploying a Data Warehouse on Oracle Database 11g  by Maria Colgan (December 2009)  (slides 3MB, recording 18MB, white paper 3MB )

    Read the article

  • SQL SERVER – List of All the Samples Database Available to Download for FREE

    - by Pinal Dave
    It is pretty much very common to have a sample database for any database product. Different companies keep on improving their product and keep on coming up with innovation in their product. To demonstrate the capability of their new enhancements they need the sample database. Microsoft have various sample database available for free download for their SQL Server Product. I have collected them here in a single blog post. Download an AdventureWorks Database The AdventureWorks OLTP database supports standard online transaction processing scenarios for a fictitious bicycle manufacturer (Adventure Works Cycles). Scenarios include Manufacturing, Sales, Purchasing, Product Management, Contact Management, and Human Resources. Coconut Dal Coconut Dal is a lightweight data access layer, for use in projects where the Entity Framework cannot be used or Microsoft’s Enterprise Library Data Block is unsuitable. Anyone who is handwriting ADO.NET should use a library instead and Coconut Dal might be the answer.  DataBooster – Extension to ADO.NET Data Provider The dbParallel DataBooster library is a high-performance extension to ADO.NET Data Provider, includes two aspects: 1) A slimmed down API encapsulation which simplified the most common data access operations (DbConnection -> DbCommand -> DbParameter -> DbDataReader) into a single class DbAccess, to help application with a clean DAL, avoid over-packing and redundant-copy of data transfer. 2) A booster for writing mass data onto database. Base on a rational utilization of database concurrency and a effective utilization of network bandwidth. Tabular AMO 2012 The sample is made of two project parts. The first part is a library of functions to manage tabular models -AMO2Tabular V2-. The second part is a sample to build a tabular model -AdventureWorks Tabular AMO 2012- using the AMO2Tabular library; the created model is similar to the ‘AdventureWorks Tabular Model 2012. SQL Server Analysis Services Product Samples SQL Server Analysis Services provides, a unified and integrated view of all your business data as the foundation for all of your traditional reporting, online analytical processing (OLAP) analysis, Key Performance Indicator (KPI) scorecards, and data mining. Analysis Services Samples for SQL Server 2008 R2 This release is dedicated to the samples that ship for Microsoft SQL Server 2008R2. For many of these samples you will also need to download the AdventureWorks family of databases. SQL Server Reporting Services Product Samples This project contains Reporting Services samples released with Microsoft SQL Server product. These samples are in the following five categories: Application Samples, Extension Samples, Model Samples, Report Samples, and Script Samples. If you are interested in contributing Reporting Services samples, please let us know by posting in the developers’ forum. Reporting Services Samples for SQL Server 2008 R2 This release is dedicated to the samples that ship for Microsoft SQL Server 2008 R2 PCU1. For many of these samples you will also need to download the AdventureWorks family of databases. SQL Server Integration Services Product Samples This project contains Integration Services samples released with Microsoft SQL Server product. These samples are in the following two categories: Package Samples and Programming Samples. If you are interested in contributing Integration Services samples, please let us know by posting in the developers’ forum. 
Integration Services Samples for SQL Server 2008 R2 This release is dedicated to the samples that ship for Microsoft SQL Server 2008R2. For many of these samples you will also need to download the AdventureWorks family of databases. Windows Azure SQL Reporting Admin Sample The SQLReportingAdmin sample for Windows Azure SQL Reporting demonstrates the usage of SQL Reporting APIs, and manages (add/update/delete) permissions of SQL Reporting users. Windows Azure SQL Reporting ReportViewer-SOAP API usage sample These sample projects demonstrate how to embed a Microsoft ReportViewer control that points to reports hosted on SQL Reporting report servers and how to use SQL Reporting SOAP APIs in your Windows Azure Web application. Enterprise Library 5.0 – Integration Pack for Windows Azure This NuGet package contains a zip file with the source code for the Enterprise Library Integration Pack for Windows Azure.  Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Sample Database

    Read the article

  • Borrow Harry Potter’s eBooks from Amazon Kindle Owner’s Lending Library

    - by Rekha
    From June 19, 2012, Amazon.com customers can borrow all 7 Harry Potter books from the Kindle Owner's Lending Library (KOLL). The books are available in English, French, Italian, German and Spanish. Amazon Prime members who own a Kindle can choose from 145,000 titles. US customers can borrow for free, with no due dates, as frequently as once a month. There are no limits on the number of copies available, so any number of customers can borrow and read the books simultaneously. Bookmarks in borrowed books are saved, so customers can continue reading where they stopped even when they re-borrow a book. Prime members also have the opportunity to enjoy free two-day shipping on millions of items and unlimited streaming of over 18,000 movies and TV episodes. Amazon has an exclusive license from J.K. Rowling's Pottermore. Individual books in the series cost between $7.99 and $9.99. Pottermore's investment in these books is compensated by Amazon's large payment. Via Amazon. CC Image Credit Amazon KOLL.

    Read the article

  • Innovative SPARC: Lighting a Fire Under Oracle's New Hardware Business

    - by Paulo Folgado
    "There's a certain level of things you can do with commercially available parts," says Oracle Executive Vice President Mike Splain. But, he notes, you can do so much more if you design the parts yourself. Mike Splain,EVP, OracleYou can, for example, design cryptographic accelerators into your microprocessors so customers can run their networks fully encrypted if they choose.Of course, it helps if you've already built multiple processing "cores" into those chips so they can handle all that encrypting and decrypting while still getting their other work done.System on a ChipAs the leader of Oracle Microelectronics, Mike knows how implementing clever innovations in silicon can give systems a real competitive advantage.The SPARC microprocessors that his team designed at Sun pioneered the concept of multiple cores several years ago, and the UltraSPARC T2 processor--the industry's first "system on a chip"--packs up to eight cores per chip, each running as many as eight threads at once. That's the most cores and threads of any general-purpose processor. Looking back, Mike points out that the real value of large enterprise-class servers was their ability to run a lot of very large applications in parallel."The beauty of our CMT [chip multi-threading] machines is you can get that same kind of parallel-processing capability at a much lower cost and in a much smaller footprint," he says.The Whole StackWhat has Mike excited these days is that suddenly the opportunity to innovate is much bigger as part of Oracle."In my group, we used to look up the software stack and say, 'We can do any innovation we want, provided the only thing we have to change is what's in the Solaris operating system'--or maybe Java," he says. "If we wanted to change things beyond that, we'd have to go outside the walls of Sun and we'd have to convince the vendors: 'You have to align with us, you have to test with us, you have to build for us, and then you'll reap the benefits.' Now we get access to the entire stack. We can look all the way through the stack and say, 'Okay, what would make the database go faster? What would make the middleware go faster?'"Changing the WorldMike and his microelectronics team also like the fact that Oracle is not just any software company. We're #1 in database, middleware, business intelligence, and more."We're like all the other engineers from Sun; we believe we can change the world, if we can just figure out how to get people to pay attention to us," he says. "Now there's a mechanism at Oracle--much more so than we ever had at Sun."He notes, too, that every innovation in SPARC has involved some combination of hardware and softwareoptimization."Take our cryptography framework, for example. Sure, we can accelerate rapidly, but the Solaris OS has to provide the right set of interfaces that applications can tap into," Mike says. "Same thing with our multicore architecture. We have to have software that can utilize all those threads and run in parallel." His engineers, he points out, have never been interested in producing chips that sell as mere components."Our chips are always designed to go into systems and be combined with various pieces of software," he says. "Our job is to enable the creation of systems."

    Read the article

  • Ubuntu 12.04 doesn't recognize my CPU correctly

    - by Nightshaxx
    My computer is running ubuntu 12.04 (64bit), and I have a AMD Athlon(tm) X4 760K Quad Core Processor which is about 3.8ghz (and an Radeon HD 7770 GPU). Yet, when I type in cat /proc/cpuinfo - I get: processor : 0 vendor_id : AuthenticAMD cpu family : 21 model : 19 model name : AMD Athlon(tm) X4 760K Quad Core Processor stepping : 1 microcode : 0x6001119 cpu MHz : 1800.000 cache size : 2048 KB physical id : 0 siblings : 4 core id : 0 cpu cores : 2 apicid : 16 initial apicid : 0 fpu : yes fpu_exception : yes cpuid level : 13 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 popcnt aes xsave avx f16c lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs xop skinit wdt lwp fma4 tce nodeid_msr tbm topoext perfctr_core arat cpb hw_pstate npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold bmi1 bogomips : 7599.97 TLB size : 1536 4K pages clflush size : 64 cache_alignment : 64 address sizes : 48 bits physical, 48 bits virtual power management: ts ttp tm 100mhzsteps hwpstate cpb eff_freq_ro processor : 1 vendor_id : AuthenticAMD cpu family : 21 model : 19 model name : AMD Athlon(tm) X4 760K Quad Core Processor stepping : 1 microcode : 0x6001119 cpu MHz : 1800.000 cache size : 2048 KB physical id : 0 siblings : 4 core id : 1 cpu cores : 2 apicid : 17 initial apicid : 1 fpu : yes fpu_exception : yes cpuid level : 13 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 popcnt aes xsave avx f16c lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs xop skinit wdt lwp fma4 tce nodeid_msr tbm topoext perfctr_core arat cpb hw_pstate npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold bmi1 bogomips : 7599.97 TLB size : 1536 4K pages clflush size : 64 cache_alignment : 64 address sizes : 48 bits physical, 48 bits virtual power management: ts ttp tm 100mhzsteps hwpstate cpb eff_freq_ro processor : 2 vendor_id : AuthenticAMD cpu family : 21 model : 19 model name : AMD Athlon(tm) X4 760K Quad Core Processor stepping : 1 microcode : 0x6001119 cpu MHz : 1800.000 cache size : 2048 KB physical id : 0 siblings : 4 core id : 2 cpu cores : 2 apicid : 18 initial apicid : 2 fpu : yes fpu_exception : yes cpuid level : 13 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 popcnt aes xsave avx f16c lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs xop skinit wdt lwp fma4 tce nodeid_msr tbm topoext perfctr_core arat cpb hw_pstate npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold bmi1 bogomips : 7599.97 TLB size : 1536 4K pages clflush size : 64 cache_alignment : 64 address sizes : 48 bits physical, 48 bits virtual power management: ts ttp tm 100mhzsteps hwpstate cpb eff_freq_ro processor : 3 vendor_id : AuthenticAMD 
cpu family : 21 model : 19 model name : AMD Athlon(tm) X4 760K Quad Core Processor stepping : 1 microcode : 0x6001119 cpu MHz : 1800.000 cache size : 2048 KB physical id : 0 siblings : 4 core id : 3 cpu cores : 2 apicid : 19 initial apicid : 3 fpu : yes fpu_exception : yes cpuid level : 13 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 popcnt aes xsave avx f16c lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs xop skinit wdt lwp fma4 tce nodeid_msr tbm topoext perfctr_core arat cpb hw_pstate npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold bmi1 bogomips : 7599.97 TLB size : 1536 4K pages clflush size : 64 cache_alignment : 64 address sizes : 48 bits physical, 48 bits virtual power management: ts ttp tm 100mhzsteps hwpstate cpb eff_freq_ro The important part of all this is: cpu MHz : 1800.000, which indicates that I have only 1.8 GHz of processing power, which is totally wrong. Is it something with drivers or Ubuntu? Also, will Windows recognize all of my processing power? Thanks! (NOTE: My CPU doesn't have integrated graphics.)

    Read the article

  • Challenges and Opportunities to Drive Change in the Healthcare System Explored at America’s Health Insurance Plans Exchange Conference and Institute 2013

    - by elaine blog
    The program theme at the June America’s Health Insurance Plans (AHIP) Exchange Conference and AHIP’s Institute 2013 was Transforming Our Health Care System: Navigating and Succeeding in the New Marketplace.  Topics included care delivery transformation, innovation for a new healthcare eco system, Health Insurance Exchanges, the nexus of consumerism, retail and healthcare, driving value through improved operations and leveraging technology, data and innovation to transform care. Oracle participated as a sponsor of both conferences, signaling the significant investment and activity Oracle continues to make in helping health plans, providers and government agencies become more efficient and more relevant in the healthcare market place. AHIP is a national trade association representing the health insurance industry. AHIP’s members provide health and supplemental benefits to more than 200 million Americans through employer-sponsored coverage, the individual insurance market and public programs such as Medicare and Medicaid.   AHIP advocates for public policies that expand access to affordable health care. Health plans are focusing on the Health Insurance Exchanges and the opportunities they offer to provide better access and higher quality healthcare.  With the opportunities come operational challenges to implementation and innovative technology solutions to consider.   At the Exchange Conference, Oracle hosted a breakfast symposium on “Strategies for Success:  Driving Business Transformation in the Growing Health Insurance Exchange Market”. With Health Insurance Exchanges as catalysts for change, attendees learned about how to achieve integration within an Exchange and deploy new business strategies to support health reform initiatives. Discussion covered steps and processes to successfully establish and implement enrollment systems, quote to card activities, program pricing, claims billing, automated claims processing and new customer service tools. Piyush Pushkar, COO of Benefitalign, an Oracle partner that provides solutions to adopt innovative business models for retail, HIX, consumer-centric health plan and benefits administration, spoke on the state of the Exchanges in the U.S. and the activities health plans are engaged in to support individuals entering the healthcare system, including sales automation, member enrollment automation/portals and integration strategies with the Exchanges. The Oracle and Benefitalign partnership allows seamless integration between a health plan enrollment solution with the HIX individual market and allows for the health plan to customize and characterize the offerings available to the HIX that may or may not be available through other channels.  This approach can benefit the health plan through separation of interests, but also because some state-run HIXs require such separation. Janice W. Young, Program Director, Payer IT Strategies, IDC Health Insights, reviewed a survey of health plans on their investment priorities for this last year as well as this year.  She also identified the 2013-2015 strategies of go/get to market with front end and compliance investments; leveraging existing business processes and internal technologies; and establishing best practices.  Of key interest to the audience was a reform era payer solutions platform overview mapping technologies to support the business operations. 
David Bonham of the Oracle Health Insurance organization moderated the panel and spoke on Oracle’s presence in healthcare and its products for payers that help them drive efficiencies and gain a competitive advantage in an ever-changing market. Oracle serves healthcare stakeholders with applications such as billing, rating and underwriting, analytics, CRM and enrollment, as well as products for processing health insurance claims, including pricing and benefits administration, and payment of providers through alternative, non-fee-for-service reimbursement methods. Oracle in Healthcare… Did you know? More than 80 healthcare payers run Oracle applications. More than 300 leading healthcare providers run Oracle applications. Ten of the top 12 Fortune Global 500 healthcare organizations run Oracle applications. For more information on Oracle solutions for healthcare payers, please visit oracle.com/insurance or these individual solution pages: Oracle Health Insurance Components, Oracle Insurance Insbridge Rating and Underwriting, Oracle Insurance Revenue Management and Billing, Oracle Documaker, Oracle Healthcare, and Oracle CRM. Related Resources: Webcast On Demand: Strategies for Success: Driving Business Transformation in the Growing Health Insurance Exchange Market; Strategy Brief: Executing on the Individual Mandate: Opportunities and Challenges for Healthcare Payers; White Paper: Navigating Alternative Provider Reimbursement Models of the Future; Strategy Brief: Enterprise Rating Agility Improves Payer Response to Healthcare Reform; Podcast: Technology Implications of Healthcare Reform. Don’t forget to keep up with us year-round: Facebook: www.facebook.com/oracleinsurance Twitter: www.twitter.com/oracleinsurance YouTube: www.youtube.com/oracleinsurance

    Read the article

  • CodePlex Daily Summary for Sunday, October 06, 2013

    CodePlex Daily Summary for Sunday, October 06, 2013Popular ReleasesMedia Companion: Media Companion MC3.580b: Fixed IMDB Actor names and Actor Roles, empty <actor> entries in movie nfo, and actor scraping during initial movie scrape. Revision HistoryEvent-Based Components AppBuilder: AB3.Iteration.53: Iteration 53 (Feature): Allow drag&drop of existing component (flow, step) from component list to chart. Duplicate names are automatically recognized and solved. By the color of the draged component you can see what kind of component (flow or step) is currently draged. New: AddExistingComponentFlow, PartDragDropEventHandler, ExistingStepPreparerPulse: Pulse 0.6.7.3: Pulse is now accepting donations. To donate by Bitcoin or PayPal see https://pulse.codeplex.com/wikipage?title=Donations Lots of updates in v0.6.7.3: (Feature) New option allows you to disable wallpaper changing when a full screen application is running. This way Pulse doesn't slow down/lag your videos and games :) (Fix) Some users were getting Wallbase errors when logging in. This has been fixed. (Feature) Right click a provider and you can now make a copy of it by selecting the "Dupl...MoreTerra (Terraria World Viewer): MoreTerra 1.11.1: Release 1.11.1 =========== =Bug Fixes= =========== Added more tile blocks (Clouds, crimstone) Added items (binoculars, rope, Pirahna Gun) Added ores (Lead, Tin) Chests now work, I broke them yesterday. =============== =Known Issues= =============== I am having trouble with new background walls. So you will see a red outline for crimson then a pink inside. Same with where I think the queen bee lives.VG-Ripper & PG-Ripper: PG-Ripper 1.4.19: NEW: Added Option to login as Guest NEW: Added Menu Option to delete an Forum Account NEW: Added Support for "ImageTeam.org links FIXED: Fixed Ripping of http://forum.babeunion.com ForumsSimpleExcelReportMaker: Serm 0.03: SourceCode and Sample .Net Framework 3.5 AnyCPU compile.Application Architecture Guidelines: App Architecture Guidelines 3.0.8: This document is an overview of software qualities, principles, patterns, practices, tools and libraries.fastJSON: v2.0.22: 2.0.22 - added .net 3.5 project - now compiling to 'output' directory - added signed assembly - version numbers will stay at 2.0.0.0 for drop in compatibility - file version will reflect the build number - bug fix deserializing to dictionaries instead of dataset when type is not definedResponsive SharePoint: Bootstrap 3 for SharePoint 2013 - Alpha 0.1: Bootstrap 3 for SharePoint 2013 Alpha version 0.1 NOTE - This is an alpha version, there are bound to be issues. Please help us solve them by contributing in our Discussion. Publishing - The source for Twitter Bootstrap 3.0.0 integrated into SharePoint 2013 for a site with Publishing enabled. Non-Publishing - A master page and branding assets for Twitter Bootstrap 3.0.0 integrated into SharePoint 2013 without Publishing enabled. PageLayoutSampleContent - Sample content for included page l...C++ AMP Conformance Test Suite: C++ AMP Conformance Test Suite 1.0.0: This release contains following changes from previous release: Removed the tests that were testing Microsoft specific behavior not part of open specification. The test suite now contains two folders, containing set of test cases, named ‘Tests’ and ‘TestsWithProp'. The set of tests under these two folders are identical except one difference. 
The set of test cases under directory ‘TestsWithProp’ makes use of ‘properties’ (which the compiler being tested should handle as mentioned in the open ...ASP.NET dhtmxGantt Class: dhtmlxGantt2.vb class: This is the latest class based on work performed. For more information read the project description and get the source files from dhtmlx.comExpressiveDataGenerators: Alpha 2: Fix serveral bugs, more testsQuickTestsFramework: 1.0.0: First release with stable API.VS Tiny Extension for TortoiseGit: 0.1c: + Icons revised + Push button disappeared when IDE loads the menu instead of toolbar. + Detected twice loading and prevented. + About box deprecated. + Next version will have major improvements. NEW: Visual Studio 2013 Support!BlackJumboDog: Ver5.9.6: 2013.09.30 Ver5.9.6 (1)SMTP???????、???????????????? (2)WinAPI??????? (3)Web???????CGI???????????????????????PayBox payment gateway provider for NB_Store: NB_Store_Gateway_01.00.02_PayBox: Paybox DNN module installMicrosoft Ajax Minifier: Microsoft Ajax Minifier 5.2: Mostly internal code tweaks. added -nosize switch to turn off the size- and gzip-calculations done after minification. removed the comments in the build targets script for the old AjaxMin build task (discussion #458831). Fixed an issue with extended Unicode characters encoded inside a string literal with adjacent \uHHHH\uHHHH sequences. Fixed an IndexOutOfRange exception when encountering a CSS identifier that's a single underscore character (_). In previous builds, the net35 and net20...AJAX Control Toolkit: September 2013 Release: AJAX Control Toolkit Release Notes - September 2013 Release (Updated) Version 7.1005September 2013 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4.5 – AJAX Control Toolkit for .NET 4.5 and sample site (Recommended). AJAX Control Toolkit .NET 4 – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Important UpdateThis release has been updated to fix three issues: Up...WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: WDTVHubGen.v2.1.4.apifix-alpha: WDTVHubGen.v2.1.4.apifix-alpha is for testers to figure out if we got the NEW api plugged in ok. thanksVisual Log Parser: VisualLogParser: Portable Visual Log Parser for Dotnet 4.0New ProjectsBasic4Android (B4A) Charting Framework: dhtlmxCharts, GoogleCharts etc: Basic4Android (B4A) mobile charting framework.Client Meeting Tool: This site facilitate users to create and schedule meetings for an event.FoodScan: This app focuses on implementing diet monitoring application for Malaysian overweight and obese adolescents using AR technique on Windows Phone 8.Hello Team foundation server: Try to use team foundation server and compare it with GitKDG C# Password Generator: C# password generator developed by KDG.KDG's C# Password Generator: C# password generator that uses Random to create strong passwords based on user input.Meta: Meta is the EECS 111 programming language at Northwestern University. Meta is a dynamically-typed scheme-like language built on .NET. This is its home.Monoscript: Allows using Mono and C# for scripting on Unix. Source files are automatically compiled and executed. 
Caching is employed to avoid recompiling unchanged files.mtdsharp: Developed by Chris Hyndman, Alec KC, Lu Huang and Merrill Huang for CS 196 at the University of Illinois.Planr.me: Planr is a time management website currently in the early development stageProject Hermes: This very project is currently closed door and under core development. The project description and other works would be published soon.Remindme for Windows Phone 8: Simple, open source Pocket client for Windows Phone 8Remindme for WinRT: Simple, open source Pocket client for WinRT and Windows 8.SQL Server Periodic Table with Molecules: This a SQL Server Database intended to be used by students and researchers for Chemistry and Physics projects. Tesseract: The Tesseract Project aims to easily display and rotate 4 Dimensional Objects in 3D Test Case Manager: A Windows Application which extends Microsoft Test Manager. Features: * one click search * test case export * better test case reader * extended edit modeTest Project for Assignment 1: This is a test ProjectTorah File: Torah File is an project that allow you to use Torah Bible and Mishneh for the computer by type of the programming languages that will be able to use the ToUSAePay nopCommerce Payment Plugin: A simple plugin for nopCommerce to use the USAePay SOAP API interface for processing credit cards.Veterinaria Dr Leo: Este es nuestro Proyecto del curso Calidad y Pruebas de Software 2013-2 Arevalo Ticlla, Susan. Chalán Malca, Elvis. Cruzado Asencio, Gustavo.Visual Studio Test Extensions: The Visual Studio Test Extensions provides extensions and tools for the Visual Studio MSTest engine. It allows to execute unit tests in a separate AppDomain.WSAAD7COM1052: Central repository for 7COM1052 - Web Scripting & Application Development (COM)wscc2013online: this is a project related to Web Application Development at the University of Hertfordshire

    Read the article

  • Book Review: Programming Windows Identity Foundation

    - by DigiMortal
Programming Windows Identity Foundation by Vittorio Bertocci is currently the only serious book about Windows Identity Foundation available. I started using Windows Identity Foundation when I made my first experiments with Windows Azure AppFabric Access Control Service. I wanted to generalize the way people authenticate themselves to my systems, and AppFabric ACS seemed like a good place to start. My first attempts to get things working opened the door to a whole new authentication world for me. As I went through different blog postings and articles to learn more, I discovered that the technology I was trying to use was exactly what I was looking for. Having found what seemed to be the best security API for .NET, I wanted to know more about it, and this is how I found Programming Windows Identity Foundation. What’s inside? Programming WIF focuses on the architecture, design and implementation of WIF. I think Vittorio is very good at teaching, because the book never drifts into unnecessarily complex topics. You learn more and more as you read, and, helpfully, you can try out your new WIF knowledge immediately. After giving a good overview of WIF, the author moves on to how to use WIF in ASP.NET applications. You get a complete picture of how WIF integrates into the ASP.NET request processing pipeline and how you can control the process yourself. There are two chapters about ASP.NET: the first is more of an introduction, and the second goes deeper and deeper until you have a very good idea of how to use ASP.NET and WIF together, what issues you may face, and how you can configure and extend WIF. Two other chapters cover using WIF with Windows Communication Foundation (WCF) and Windows Azure. The WCF chapter expects that you know WCF very well; it is not an introductory chapter for beginners and is heavy reading if you are unfamiliar with WCF. The chapter about Windows Azure describes how to use WIF in cloud applications. The last chapter talks about some future developments of WIF and describes some problems and their solutions. The most interesting part of this chapter is the section about Silverlight. Who should read this book? Programming WIF is targeted at developers. It does not matter if you are a beginner or an old bullet-proof professional – every developer should be able to read this book without difficulty. I don’t recommend this book to administrators and project managers, because they will find almost nothing related to their work. I strongly recommend this book to all developers who are interested in modern authentication methods on the Microsoft platform. The book is written so well that I almost forgot everything around me while reading it. All the additional tools you need are free. There is also a test version of Azure AppFabric ACS available, and you can try it out for free. Table of contents: Foreword Acknowledgments Introduction Part I Windows Identity Foundation for Everybody 1 Claims-Based Identity 2 Core ASP.NET Programming Part II Windows Identity Foundation for Identity Developers 3 WIF Processing Pipeline in ASP.NET 4 Advanced ASP.NET Programming 5 WIF and WCF 6 WIF and Windows Azure 7 The Road Ahead Index
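    For readers new to the claims-based model the book builds on, the following is a minimal sketch (not taken from the book) of what WIF 1.0 exposes once it has authenticated a request in the ASP.NET pipeline: the thread principal carries an IClaimsIdentity whose claims can be enumerated. The page name and the decision to write claims to the response are purely illustrative assumptions.

    // Minimal sketch, assuming WIF 1.0 (Microsoft.IdentityModel assembly) is configured
    // for the ASP.NET application via the WS-Federation authentication module.
    using System;
    using System.Threading;
    using System.Web.UI;
    using Microsoft.IdentityModel.Claims;

    public partial class Default : Page   // hypothetical page for illustration only
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // After the WIF modules run, the current principal's identity is claims-aware.
            var identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
            if (identity == null || !identity.IsAuthenticated)
                return;

            // Dump the claims issued by the identity provider, e.g.
            // http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress
            foreach (Claim claim in identity.Claims)
            {
                Response.Write(Server.HtmlEncode(claim.ClaimType + " = " + claim.Value) + "<br/>");
            }
        }
    }

    In a real application you would typically inspect specific claim types to drive authorization decisions rather than echoing them to the page; the loop above is only meant to make the claims-based identity concept concrete.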

    Read the article

  • Adobe Photoshop CS5 vs Photoshop CS5 extended

    - by Edward
Adobe Photoshop has been an industry standard for most web designers and photographers worldwide. Photoshop CS5 has made photography editing much more refined, and the composition process has become easier than ever before. To weigh the advantages of Photoshop CS5 Extended over Photoshop CS5, we have written this comparison article from both a designer’s and a photographer’s perspective. Hopefully it will help you in your buying or upgrade decision. Photoshop CS5 Photoshop CS5 offers refined, powerful photography tools. It makes the editing process easier: fewer steps are involved to remove noise, add grain, create vignettes, correct lens distortions, sharpen, and create HDR images. It offers quick image correction and color and tone control for professional use. Intelligent image editing and enhancement and extraordinarily advanced compositing make it a better tool for photographers than earlier versions. It lets users accelerate their workflow with fast performance on 64-bit Windows® and Mac hardware systems and smoother interactions thanks to more GPU-accelerated features. It also boasts state-of-the-art processing with Adobe Photoshop Camera Raw 6 and helps maximize creative impact. It provides tremendous precision and freedom: users can easily select intricate image elements, such as hair, and create realistic painting effects. It also allows removing any image element and seeing the space fill in almost magically. It offers easy access to core editing, a streamlined workflow and a flexible working environment, along with creative tools and content. Photoshop CS5 Extended Photoshop CS5 Extended is quite innovative: it incorporates 3D elements into 2D artwork directly within the digital imaging application, giving users an easy on-ramp to 3D image creation. It also provides 3D editing. It has intelligent image editing and enhancement, offers advanced compositing and has an extraordinary painting and drawing toolset. It provides for video and animation design and helps in working with specialized images for architecture, manufacturing, engineering, science, and medicine. Where CS5 Extended scores over CS5 CS5 Extended has many features that are not included in CS5: technology for creating 3D extrusions; a 3D material library and picker; depth of field for 3D; 3D merging and scene composition improvements; 3D workflow improvements; customization of 3D features; image-based light sources; a shadow catcher for shadow creation; an enhanced ray tracer; context-sensitive widgets, which allow easy control of objects, lights and cameras; and overlays for materials and mesh boundaries. Photoshop CS5 Extended incorporates all the features of CS5 and adds these more advanced capabilities, such as 3D creation and editing. Redefining the Image-Editing Experience: A Photographer’s Point of View Photoshop CS5 delivers amazing features and creative options, so even new users can perform advanced image manipulations and compositions. The breathtaking image intelligence behind Content-Aware Fill magically removes any image detail or object, examines the surroundings and seamlessly fills in the space left behind, matching the lighting, tone and noise of the surrounding area. The new Refine Edge makes nearly impossible image selections possible. Masking was never easier: the toughest types of edges, such as hair and foliage, are simpler to fix.
To sum up, a few advantages of CS5 Extended over previous versions: 64-bit processing; Content-Aware Fill; Refine Edge, which makes nearly impossible image selections possible; HDR Pro, including ghost artifact removal and HDR toning, which gives the look of HDR from a single exposure; new brush options; improved image management with enhanced Adobe Bridge; lens corrections; improved black-and-white conversions; Puppet Warp, to precisely reposition or warp any image element; and Adobe Camera Raw 6. Upgrade Buy Online Pricing and Availability Adobe Photoshop CS5 and CS5 Extended are available through Adobe Authorized Resellers and the Adobe Store. The estimated street price is US$699 for Adobe Photoshop CS5 and US$999 for Photoshop CS5 Extended. Upgrade pricing and volume licensing are also available. Related posts: 10 Free Alternatives for Adobe Photoshop Software Web based Alternatives to Photoshop 15 Useful Adobe Illustrator Tutorials For Designers

    Read the article

  • New whitepaper, “Why Oracle Sun ZFS Storage Appliance for Oracle Databases?” now available.

    - by Cinzia Mascanzoni
Databases are the backbone of today’s modern business, providing transaction integrity for key business systems such as payment engines, or providing the core of analytical data for decision-making. These diverse use cases require a flexible, high-performance and highly available storage platform. The ZFS Storage Appliance is ideally suited to this role, with an architecture flexible enough to meet the ever-changing availability, capacity and performance requirements of the business. In this just-published white paper the authors provide both business and technical evidence of the suitability of the Oracle ZFS Storage Appliance as primary storage for Oracle Database 11gR2 environments. Click here to download the whitepaper.

    Read the article
