Search Results

Search found 25946 results on 1038 pages for 'cost based optimizer'.


  • A design pattern for data binding an object (with subclasses) to asp.net user control

    - by Rohith Nair
    I have an abstract class called Address and I am deriving three classes from it: HomeAddress, WorkAddress, and NextOfKinAddress. My idea is to bind an Address to a user control: the control doesn't know which address it is going to present, and based on the runtime type it should bind accordingly. How can I design such a setup, given that the user control can take any type of address and must bind accordingly? One method I know of: declare class objects for all three types (Home, Work, NextOfKin), declare an enum to hold these types, and based on the enum value passed to the user control, instantiate the appropriate object via setter injection. As a part of my generic design, I just created a class structure like this :- I know I am missing a lot of pieces in the design. Can anybody give me an idea of how to approach this in a proper way?
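    A minimal sketch of one way to set this up: let the abstract base expose whatever the control needs to render, so the control binds polymorphically and never has to switch on an enum at all. Shown in TypeScript for brevity; every name below is illustrative (not from the post), and the C# shape would be analogous.

        // Illustrative sketch only: all names are hypothetical.
        abstract class Address {
          constructor(public line1: string, public city: string) {}
          // Each subclass decides how it should be presented by the control.
          abstract displayLabel(): string;
        }

        class HomeAddress extends Address {
          displayLabel() { return `Home: ${this.line1}, ${this.city}`; }
        }
        class WorkAddress extends Address {
          displayLabel() { return `Work: ${this.line1}, ${this.city}`; }
        }
        class NextOfKinAddress extends Address {
          displayLabel() { return `Next of kin: ${this.line1}, ${this.city}`; }
        }

        // The "user control": it only knows the abstract type, so no enum
        // dispatch is needed when a new Address subclass is added.
        class AddressControl {
          private address?: Address;                 // setter injection
          set source(a: Address) { this.address = a; }
          render(): string { return this.address ? this.address.displayLabel() : ""; }
        }

        const control = new AddressControl();
        control.source = new WorkAddress("1 Main St", "Springfield");
        console.log(control.render()); // "Work: 1 Main St, Springfield"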

    Read the article

  • Announcing: Great Improvements to Windows Azure Web Sites

    - by ScottGu
    I’m excited to announce some great improvements to the Windows Azure Web Sites capability we first introduced earlier this summer.  Today’s improvements include: a new low-cost shared mode scaling option, support for custom domains with shared and reserved mode web-sites using both CNAME and A-Records (the latter enabling naked domains), continuous deployment support using both CodePlex and GitHub, and FastCGI extensibility.  All of these improvements are now live in production and available to start using immediately. New “Shared” Scaling Tier Windows Azure allows you to deploy and host up to 10 web-sites in a free, shared/multi-tenant hosting environment. You can start out developing and testing web sites at no cost using this free shared mode, and it supports the ability to run web sites that serve up to 165MB/day of content (5GB/month).  All of the capabilities we introduced in June with this free tier remain the same with today’s update. Starting with today’s release, you can now elastically scale up your web-site beyond this capability using a new low-cost “shared” option (which we are introducing today) as well as using a “reserved instance” option (which we’ve supported since June).  Scaling to either of these modes is easy.  Simply click on the “scale” tab of your web-site within the Windows Azure Portal, choose the scaling option you want to use with it, and then click the “save” button.  Changes take only seconds to apply and do not require any code to be changed, nor the app to be redeployed: Below are some more details on the new “shared” option, as well as the existing “reserved” option: Shared Mode With today’s release we are introducing a new low-cost “shared” scaling mode for Windows Azure Web Sites.  A web-site running in shared mode is deployed in a shared/multi-tenant hosting environment.  Unlike the free tier, though, a web-site in shared mode has no quotas/upper-limit around the amount of bandwidth it can serve.  The first 5 GB/month of bandwidth you serve with a shared web-site is free, and then you pay the standard “pay as you go” Windows Azure outbound bandwidth rate for outbound bandwidth above 5 GB. A web-site running in shared mode also now supports the ability to map multiple custom DNS domain names, using both CNAMEs and A-records, to it.  The new A-record support we are introducing with today’s release provides the ability for you to support “naked domains” with your web-sites (e.g. http://microsoft.com in addition to http://www.microsoft.com).  We will also in the future enable SNI based SSL as a built-in feature with shared mode web-sites (this functionality isn’t supported with today’s release – but will be coming later this year to both the shared and reserved tiers). You pay for a shared mode web-site using the standard “pay as you go” model that we support with other features of Windows Azure (meaning no up-front costs, and you pay only for the hours that the feature is enabled).  A web-site running in shared mode costs only 1.3 cents/hr during the preview (so on average $9.36/month). Reserved Instance Mode In addition to running sites in shared mode, we also support scaling them to run within a reserved instance mode.  When running in reserved instance mode your sites are guaranteed to run isolated within your own Small, Medium or Large VM (meaning no other customers run within it).  You can run any number of web-sites within a VM, and there are no quotas on CPU or memory limits. 
You can run your sites using either a single reserved instance VM, or scale up to have multiple instances of them (e.g. 2 medium sized VMs, etc).  Scaling up or down is easy – just select the “reserved” instance VM within the “scale” tab of the Windows Azure Portal, choose the VM size you want, the number of instances of it you want to run, and then click save.  Changes take effect in seconds: Unlike shared mode, there is no per-site cost when running in reserved mode.  Instead you pay only for the reserved instance VMs you use – and you can run any number of web-sites you want within them at no extra cost (e.g. you could run a single site within a reserved instance VM or 100 web-sites within it for the same cost).  Reserved instance VMs start at 8 cents/hr for a small reserved VM.  Elastic Scale-up/down Windows Azure Web Sites allows you to scale-up or down your capacity within seconds.  This allows you to deploy a site using the shared mode option to begin with, and then dynamically scale up to the reserved mode option only when you need to – without you having to change any code or redeploy your application. If your site traffic starts to drop off, you can scale back down the number of reserved instances you are using, or scale down to the shared mode tier – all within seconds and without having to change code, redeploy, or adjust DNS mappings.  You can also use the “Dashboard” view within the Windows Azure Portal to easily monitor your site’s load in real-time (it shows not only requests/sec and bandwidth but also stats like CPU and memory usage). Because of Windows Azure’s “pay as you go” pricing model, you only pay for the compute capacity you use in a given hour.  So if your site is running most of the month in shared mode (at 1.3 cents/hr), but there is a weekend when it gets really popular and you decide to scale it up into reserved mode to have it run in your own dedicated VM (at 8 cents/hr), you only have to pay the additional pennies/hr for the hours it is running in the reserved mode.  There is no upfront cost you need to pay to enable this, and once you scale back down to shared mode you return to the 1.3 cents/hr rate.  This makes it super flexible and cost effective. Improved Custom Domain Support Web sites running in either “shared” or “reserved” mode support the ability to associate custom host names to them (e.g. www.mysitename.com).  You can associate multiple custom domains to each Windows Azure Web Site.  With today’s release we are introducing support for A-Records (a big ask by many users). With the A-Record support, you can now associate ‘naked’ domains to your Windows Azure Web Sites – meaning instead of having to use www.mysitename.com you can instead just have mysitename.com (with no sub-name prefix).  Because you can map multiple domains to a single site, you can optionally enable both a www and naked domain for a site (and then use a URL rewrite rule/redirect to avoid SEO problems). We’ve also enhanced the UI for managing custom domains within the Windows Azure Portal as part of today’s release.  Clicking the “Manage Domains” button in the tray at the bottom of the portal now brings up custom UI that makes it easy to manage/configure them: As part of this update we’ve also made it significantly smoother/easier to validate ownership of custom domains, and made it easier to switch existing sites/domains to Windows Azure Web Sites with no downtime. 
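    A quick sanity check of the pay-as-you-go arithmetic quoted in the scaling sections above; this is a sketch only, and the 720-hour month is an assumed average for illustration:

        // Preview rates from the post; 720 hrs approximates a 30-day month.
        const HOURS_PER_MONTH = 720;
        const sharedRate = 0.013;        // $/hr, shared mode
        const smallReservedRate = 0.08;  // $/hr, small reserved VM

        console.log(sharedRate * HOURS_PER_MONTH);  // 9.36 -> "$9.36/month"

        // A busy weekend: 48 hours scaled up to reserved, the rest shared.
        const mixed = (HOURS_PER_MONTH - 48) * sharedRate + 48 * smallReservedRate;
        console.log(mixed.toFixed(2));   // 12.58 -> only a few dollars more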
Continuous Deployment Support with Git and CodePlex or GitHub One of the more popular features we released earlier this summer was support for publishing web sites directly to Windows Azure using source control systems like TFS and Git.  This provides a really powerful way to manage your application deployments using source control.  It is really easy to enable this from a website’s dashboard page: The TFS option we shipped earlier this summer provides a very rich continuous deployment solution that enables you to automate builds and run unit tests every time you check in your web-site, and then if they are successful automatically publish to Azure. With today’s release we are expanding our Git support to also enable continuous deployment scenarios and integrate with projects hosted on CodePlex and GitHub.  This support is enabled with all web-sites (including those using the “free” scaling mode). Starting today, when you choose the “Set up Git publishing” link on a website’s “Dashboard” page you’ll see two additional options show up when Git based publishing is enabled for the web-site: You can click on either the “Deploy from my CodePlex project” link or “Deploy from my GitHub project” link to walk through a simple workflow to configure a connection between your website and a source repository you host on CodePlex or GitHub.  Once this connection is established, CodePlex or GitHub will automatically notify Windows Azure every time a checkin occurs.  This will then cause Windows Azure to pull the source and compile/deploy the new version of your app automatically.  The two videos below walk through how easy it is to enable this workflow and deploy an initial app and then make a change to it: Enabling Continuous Deployment with Windows Azure Websites and CodePlex (2 minutes) Enabling Continuous Deployment with Windows Azure Websites and GitHub (2 minutes) This approach enables a really clean continuous deployment workflow, and makes it much easier to support a team development environment using Git: Note: today’s release supports establishing connections with public GitHub/CodePlex repositories.  Support for private repositories will be enabled in a few weeks. Support for multiple branches Previously, we only supported deploying from the git ‘master’ branch.  Often, though, developers want to deploy from alternate branches (e.g. a staging or future branch). This is now a supported scenario – both with standalone git based projects, as well as ones linked to CodePlex or GitHub.  This enables a variety of useful scenarios.  For example, you can now have two web-sites - a “live” and “staging” version – both linked to the same repository on CodePlex or GitHub.  You can configure one of the web-sites to always pull whatever is in the master branch, and the other to pull what is in the staging branch.  This enables a really clean way to enable final testing of your site before it goes live. This 1 minute video demonstrates how to configure which branch to use with a web-site. Summary The above features are all now live in production and available to use immediately.  If you don’t already have a Windows Azure account, you can sign up for a free trial and start using them today.  Visit the Windows Azure Developer Center to learn more about how to build apps with it. 
We’ll have even more new features and enhancements coming in the weeks ahead – including support for the recent Windows Server 2012 and .NET 4.5 releases (we will enable new web and worker role images with Windows Server 2012 and .NET 4.5 next month).  Keep an eye out on my blog for details as these new features become available. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • When row estimation goes wrong

    - by Dave Ballantyne
    Whilst working at a client site, I hit upon one of those issues where you are not sure if it is something entirely new, a bug, or a gap in your knowledge. The client had a large query that needed optimizing.  The query itself looked pretty good: no UDFs, UNION ALL was used rather than UNION, and most of the predicates were sargable, other than one or two minor ones.  There were a few extra joins that could be eradicated, and having fixed up the query I then started to dive into the plan. I could see all manner of spills in the hash joins and the sort operations; these are caused when SQL Server has not reserved enough memory and has to write to tempdb.  A VERY expensive operation that is generally avoidable.  These, however, are a symptom of a bad row estimation somewhere else, and when that bad estimation is combined with other estimation errors, chaos can ensue. Working my way back down the plan, I found the cause, and the more I thought about it the more I became convinced that the optimizer could be making a much more intelligent choice. The first step is to reproduce, and I was able to simplify the query down to a single join between two tables, Product and ProductStatus. From a business point of view this is quite fundamental: find the status of particular products to show if they are ‘active’, ‘inactive’ or whatever. The query itself couldn’t be any simpler. The estimated plan looked like this: Ignore the “!” warning, which is a missing index, but notice that Products has 27,984 rows and the join outputs 14,000. The actual plan shows how bad that estimation of 14,000 is: So every row in Products has a corresponding row in ProductStatus.  This is unsurprising; in fact, it is guaranteed: there is a trusted FK relationship between the two columns.  There is no way that the actual output of the join can be different from the input. The optimizer is already partly aware of the foreign key metadata, and that can be seen in the simplification stage. If we drop the Description column from the query, the join to ProductStatus is optimized out. It serves no purpose to the query: there is no data required from the table, and the optimizer knows that the FK will guarantee that a matching row will exist, so it has been removed. Surely the same should be applied to the row estimations in the initial example, right?  If you think so, please upvote this Connect item. So what are our options in fixing this error? Simply changing the join to a left join will cause the optimizer to think that we could allow the rows not to exist, or a subselect would also work. However, this is a client site; I’m not able to change each and every query where this join takes place, but there is a more global switch that will fix this error: trace flag 2301. This is described as, perhaps loosely, “Enable advanced decision support optimizations”. We can test this on the original query in isolation by using the “QueryTraceOn” option, and lo and behold our estimated plan now has the ‘correct’ estimation. Many thanks go to Paul White (b|t) for his help and for keeping me sane through this.

    Read the article

  • Can Hudson branch promotion get based on project stability?

    - by Wayne
    Hudson CI server displays stability "weather", which is cool. And it allows one project build to kick off based on the successful build of another. However, how can you make that secondary project dependent additionally on the stability of multiple builds of the first project? Specifically, project "stable_deploy" needs to only kick off to promote a version to "stable" if project "integrate" with version 8.3.4.1233 has built and tested successfully at least 8 times--in a row. Until then, it's still in integration mode. IMPORTANT: A significant caveat to this is that a single set of Hudson projects gets used as a "pipeline" to process each new version through to release. So a project may have built successfully 8 times in a row, but the latest version 8.3.4.1233 may account for only the 2 most recent builds. The builds prior to that may be of an earlier version. We're open to completely reorganizing this, but the pipeline idea seemed to greatly reduce the amount of manual project creation and deletion. Is there a better way to track the version release "pipeline"? In particular, we will have multiple versions in this pipeline simultaneously in the future due to fixes or patches to older versions. We don't see how to do that yet, except to create new pipeline projects for each version, which is a real hassle. Here are some background details: The TickZoom application has some very complete unit tests, some of which simulate real-time trading environments. Add to that, TickZoom makes elaborate use of parallelization for leveraging multi-core computers. Needless to say, during development of a new version, there can be stability issues during integration testing which get uncovered by running the build and auto tests repeatedly. A version which builds and tests cleanly 8 times in a row without change, plus has undergone some real-world testing by users, can be deemed "stable" and promoted to the stable branch. Our Hudson projects look like this:
    test - Only for testing a build, zero user visibility.
    integrate_deploy - Promotes a test project build to the integrate branch and makes it available to the public for UA testing.
    integrate - Repeatedly builds the integrate branch to determine if it's stable enough to promote to the stable branch. This runs the builds and tests hourly throughout every night.
    stable_deploy - Promotes an integrate project build to the stable branch and makes it public for users who want the latest and greatest.
    stable - Builds the stable branch once every night. After 2 weeks of successful builds (14 builds) it can go to "release candidate". And so on... it continues with "release candidate" and then "release".

    Read the article

  • is it possible to extract certain strings based off a predefined white-space count?

    - by s2xi
    So after several Advils, I think I need help. I am trying to make a script that lets the user upload a .txt file; as an example, the file will look like this: EXT. DUNKIN' DONUTS - DAY Police vehicles remain in the parking lot. The determined female reporter from the courthouse steps, MELINDA FUENTES (32), interviews Comandante Chitt, who holds a napkin to his jaw, like he cut himself shaving. MELINDA < Comandante Chitt, how does it feel to get shot in the face? > COMANDANTE CHITT < Not too different than getting shot in the arm or leg. > MELINDA < Tell us what happened. > COMANDANTE CHITT < I parked my car. (indicates assault vehicle in donut shop) He aimed his weapon at my head. I fired seven shots. He stopped aiming his weapon at my head. > Melinda waits for more, but Chitt turns and walks away into the roped-off crime scene. Melinda is confused for a second, then resumes smiling. MELINDA < And there you have it... A man of few words. > OK, so based off of this, what I want to do is: the PHP script looks at the file and counts 35 whitespace characters. Since all files will have the same layout and never differ in whitespace, I chose this as the best way to go. For every line with 35 leading whitespace characters, extract character 36 until the end of the line, then tally up $character++ so in the end the output would look like ----------------------------------- It looks like you have 2 characters in your script Melinda Commandante Chitt ----------------------------------- I'm using PHP to select distinct names, strtolower() to lowercase the strings, and ucfirst() to make the first letter uppercase. That's my project; I'm at the stage where I'm going crazy trying to figure out how to count the whitespace and grab everything after it until the end of the line, since the word after that whitespace IS a character name.
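    As a sketch of the counting logic (written in TypeScript rather than PHP; the fixed 35-space indent is taken from the question, and the sample text is abbreviated):

        // Sketch only: assumes character names are the lines indented by
        // exactly 35 spaces, per the question.
        const sample =
          " ".repeat(35) + "MELINDA\n" +
          "Police vehicles remain in the parking lot.\n" +
          " ".repeat(35) + "COMANDANTE CHITT\n" +
          " ".repeat(35) + "MELINDA\n";

        function extractCharacters(script: string): string[] {
          const names = new Set<string>();   // Set gives the distinct names
          for (const line of script.split("\n")) {
            const indent = line.length - line.trimStart().length;
            if (indent === 35 && line.trim().length > 0) {
              // Normalize: lowercase, then capitalize each word's first letter.
              const name = line.trim().toLowerCase()
                .replace(/\b\w/g, (c) => c.toUpperCase());
              names.add(name);
            }
          }
          return [...names];
        }

        const cast = extractCharacters(sample);
        console.log(`It looks like you have ${cast.length} characters in your script`);
        cast.forEach((n) => console.log(n)); // Melinda, Comandante Chitt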

    Read the article

  • How can I structure my MustacheJS template to add dynamic classes based on the values from a JSON file?

    - by JGallardo
    OBJECTIVE To build an app that allows the user to search for locations. CURRENT STATE At the moment the locations listed are few, so they are just all presented when landing on the "dealers" page. BACKGROUND Previously there were only about 50 showrooms carrying a product we sell, so a static HTML page was fine (it displays as in the screenshot, omitted here). But the page size grew to about 1500 lines of code after doing this. We have gotten more and need a scalable solution so that we can add many more dealers fast. In other projects, I have previously used MustacheJS to load in values from a JSON file. I know that ideally this will be an AJAX application. Perhaps I might be better off with a database here? Below is what I have in mind so far, and it "works" up to a certain point, but seems not to be anywhere near the most sustainable solution that can be efficiently scaled. HTML <a id="{{state}}"></a> <div> <h4>{{dealer}} : {{city}}, {{state}} {{l_type}}</h4> <div class="{{icon_class}}"> <ul> <li><i class="icon-map-marker"></i></li> <li><i class="icon-phone"></i></li> <li><i class="icon-print"></i></li> </ul> </div> <div class="listingInfo"> <p>{{street}} <br>{{suite}}<br> {{city}}, {{state}} {{zip}}<br> Phone: {{phone}}<br> {{toll_free}}<br> {{fax}} </p> </div> </div> <hr> JSON { "dealers" : [ { "dealer":"Benco Dental", "City":"Denver", "state":"CO", "zip":"80112", "l_type":"Showroom", "icon_class":"listingIcons_3la", "phone":"(303) 790-1421", "toll_free":null, "fax":"(303) 790-1421" }, { "dealer":"Burkhardt Dental Supply", "City":"Portland", "state":"OR", "zip":"97220", "l_type":"Showroom", "icon_class":"listingIcons", "phone":" (503) 252-9777", "toll_free":"(800) 367-3030", "fax":"(866) 408-3488" } ]} CHALLENGES The CSS class wrapping the ul will vary based on how many fields there are. In this case there are 3, so the class is "listingIcons_3la". The "toll free" number section should only show up if, in fact, there is a toll-free number. The fax number should only show up if there is a value for a fax number.
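    The optional toll-free/fax fields are what Mustache "sections" are for: a section renders only when its key is truthy, so no extra markup variants are needed. A sketch using mustache.js (sample data below is illustrative):

        // Sketch: {{#key}}...{{/key}} renders only when key is truthy.
        import Mustache from "mustache";

        const template = `
        <p>{{street}}<br>{{city}}, {{state}} {{zip}}<br>
        Phone: {{phone}}<br>
        {{#toll_free}}Toll Free: {{toll_free}}<br>{{/toll_free}}
        {{#fax}}Fax: {{fax}}{{/fax}}</p>`;

        const dealer = {
          street: "123 Example St",      // hypothetical sample record
          city: "Denver", state: "CO", zip: "80112",
          phone: "(303) 790-1421",
          toll_free: null,               // falsy: the toll-free line is skipped
          fax: "(303) 790-1421",
        };

        console.log(Mustache.render(template, dealer));

    The varying icon class could likewise be computed in the view-model (for example, from the number of populated fields) before rendering. Also note that the JSON above uses "City" while the template binds {{city}}; Mustache lookups are case-sensitive, so those names need to match.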

    Read the article

  • How do you select form elements in JQuery based upon an html table?

    - by Swoop
    I am working on some ASP.NET web forms which involve some dynamic generation, and I need to add some onClick helpers on the client side. I have a basic outline of something working, except for one huge problem. There are multiple HTML tables, each generated by a different ASP.NET web control. Each table can contain overlapping field names, which is causing a problem with my jQuery click event handlers. The click event handler is linking to unintended form fields in addition to the intended form field. I have provided a simplified sample version of the code below. This code is trying to set the value of textbox box1 when a particular radiobutton is selected in the table with id=thing1. Obviously, the jQuery code will be triggered for the form fields in both tables. The tables are dynamically added to the webpage based upon different conditions. It is possible that no tables will be loaded, only 1 table, or both tables might load. In the future, other tables could be added. Each table comes from a different .net web control. Other than renaming the form fields to make sure they are unique across all user controls, is there a way to have jQuery act only on the intended form fields? In other words, could the table ID be incorporated into the jQuery code in a manner that does not become a nightmare to maintain later? <script> $(document).ready(function() { $("[id$=radio1_0]").click(function() { $("[id$=box1]").attr("value", ""); }); $("[id$=radio1_1]").click(function() { $("[id$=box1]").attr("value", "N/A"); }); }); </script> <table id="thing1"> <tr><td> <radiobuttonlist id="radio1"/> <listitem>yes</listitem> <listitem>no</listitem> </td></tr> <tr><td> <textbox id="box1"/> </td></tr> </table> <table id="thing2"> <tr><td> <radiobuttonlist id="radio1"/> <listitem>yes</listitem> <listitem>no</listitem> </td></tr> <tr><td> <textbox id="box1"/> </td></tr> </table>
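    One way to avoid the collisions without renaming anything is to scope each selector to its own table. A sketch (TypeScript with jQuery; the table ids come from the markup above):

        // Sketch: scope each handler to its own table so same-named fields
        // in other tables are never touched.
        import $ from "jquery";

        function wireTable(tableId: string): void {
          const scope = $("#" + tableId);
          scope.find("[id$='radio1_0']").click(function () {
            scope.find("[id$='box1']").val("");
          });
          scope.find("[id$='radio1_1']").click(function () {
            scope.find("[id$='box1']").val("N/A");
          });
        }

        $(function () {
          // Wire whichever tables actually got rendered.
          ["thing1", "thing2"].forEach(wireTable);
        });

    Since ASP.NET mangles the generated ids, scoping by the container while keeping the attribute-suffix match preserves the original approach but removes the cross-table collisions.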

    Read the article

  • PHP Based session variable not retaining value. Works on localhost, but not on server.

    - by Foo
    I've been trying to debug this problem for many hours, but to no avail. I've been using PHP for many years, and got back into it after a long hiatus, so I'm still a bit rusty. Anyways, my $_SESSION vars are not retaining their value for some reason that I can't figure out. The site worked on localhost perfectly, but uploading it to the server seemed to break it. The first thing I checked was the php.ini server settings. Everything seems fine. In fact, my login system is session based and it works perfectly. So now that I know $_SESSIONs are working properly and retaining their value for my login, I'm presuming the server is set up correctly and the problem is in my script. Here's a stripped version of the code that's causing the problem. $type, $order and $style are not being retained after they are set via a GET variable. The user clicks a link, which sets a variable via GET, and this variable should be retained for the remainder of their session. Is there some problem with my logic that I'm not seeing?
    <?php
    require_once('includes/top.php'); //first line includes a call to session_start();
    require_once('includes/db.php');

    $type = filter_input(INPUT_GET, 't', FILTER_VALIDATE_INT);
    $order = filter_input(INPUT_GET, 'o', FILTER_VALIDATE_INT);
    $style = filter_input(INPUT_GET, 's', FILTER_VALIDATE_INT);

    /* According to documentation, filter_input returns a NULL when variables are undefined.
       So, if t, o, or s are not set via URL, $type, $order and $style will be NULL. */

    print_r($_SESSION);

    /* All other sessions, such as the login session, etc. are displayed here.
       After the sessions are set below, they are displayed up here too... simply with no value.
       This leads me to believe the problem is with the code below, perhaps? */

    // If $type is not null (meaning it WAS set via the get method above)
    // or it's false because the validation failed for some reason,
    // then set the session to the $type. I removed the false check for simplicity.
    // This code is being successfully executed, and the data is being stored...
    if(!is_null($type)) { $_SESSION['type'] = $type; }
    if(!is_null($order)) { $_SESSION['order'] = $order; }
    if(!is_null($style)) { $_SESSION['style'] = $style; }

    $smarty->display($template);
    ?>
    If anyone can point me in the right direction, I'd greatly appreciate it. Thanks.

    Read the article

  • How do you hide an image tag based on an ajax response?

    - by Chris
    What is the correct jQuery statement to replace the "Needed magic" comments below so that the image tags are hidden or unhidden based on the AJAX responses? (The two marked lines show one way to do it: keep a reference to the div, since "this" inside the $.get callback is the AJAX settings object, not the div.) <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" /> <title>JQuery</title> <style type="text/css"> .isSolvedImage{ width: 68px; height: 47px; border: 1px solid red; cursor: pointer; } </style> <script src="_js/jquery-1.4.2.min.js" type="text/javascript"> </script> </head> <body> <div id='true1.txt' class='isSolvedImage'> <img src="_images/solved.png"> </div> <div id='false1.txt' class='isSolvedImage'> <img src="_images/solved.png"> </div> <div id='true2.txt' class='isSolvedImage'> <img src="_images/solved.png"> </div> <div id='false2.txt' class='isSolvedImage'> <img src="_images/solved.png"> </div>
    <script type="text/javascript">
    $(function(){
        // iterate divs with class isSolvedImage
        $("div.isSolvedImage").each(function() {
            alert('div id--'+this.id);
            var div = $(this); // keep a reference to the div for the AJAX callback
            // send ajax request
            $.get(this.id, function(data) {
                // check if ajax response is 1
                alert('div id--'+this.url+'--ajax response--'+data);
                if(data == 1){
                    alert('div id--'+this.url+'--Unhiding image--');
                    div.find('img').show(); // show the image when the response is 1
                }
                else{
                    alert('div id--'+this.url+'--Hiding image--');
                    div.find('img').hide(); // hide the image otherwise
                }
            });
        });
    });
    </script>
    </body> </html>

    Read the article

  • How do I select all parenting items based on a given node's attribute in php's xPath?

    - by bakkelun
    I have an XML feed that looks something like this (excerpt): <channel> <title>Channel Name</title> <link>Link to the channel</link> <item> <title>Heading 1</title> <link>http://www.somelink.com?id=100</link> <description><![CDATA[ Text here ]]></description> <publishDate>Fri, 03 Apr 2009 10:00:00</publishDate> <guid>http://www.somelink.com/read-story-100</guid> <category domain="http://www.somelink.com/?category=4">Category 1</category> </item> <item> <title>Heading 2</title> <link>http://www.somelink.com?id=110</link> <description><![CDATA[ Text here ]]></description> <publishDate>Fri, 03 Apr 2009 11:00:00</publishDate> <guid>http://www.somelink.com/read-story-110</guid> <category domain="http://www.somelink.com/?category=4">Category 1</category> </item> </channel> That's the rough shape of it. I'm using this piece of PHP (excerpt): $xml = simplexml_load_file($xmlFile); $xml->xpath($pattern); Now I want to get all ITEM nodes (with their children) based on that pesky "domain" attribute in the category node, but no matter what I try it does-not-work. The closest I got was "//category[@domain= 'http://www.somelink.com/?category=4']" The expression I tried gave me this result: [0] => SimpleXMLElement Object ( [@attributes] => Array ( [domain] => http://www.somelink.com/?category=4 ) [0] => Category 1 [1] => SimpleXMLElement Object ( [@attributes] => Array ( [domain] => http://www.somelink.com/?category=4 ) [0] => Category 1 The result should contain all children of the two items in the example, but as you can see only the info in the category nodes is present; I want all the item nodes. Any help would be highly appreciated.
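    Moving the predicate up a level selects the item itself rather than its category child, which is what the question is after; in the PHP above that should be $xml->xpath("//item[category/@domain='http://www.somelink.com/?category=4']"). A browser-side sketch of the same expression (TypeScript, with an abbreviated stand-in feed):

        // Sketch: put the predicate on <item> instead of selecting <category>,
        // so the whole item node (with all its children) comes back.
        const xml = `<channel><item><title>Heading 1</title>
          <category domain="http://www.somelink.com/?category=4">Category 1</category>
        </item></channel>`; // abbreviated stand-in for the real feed

        const doc = new DOMParser().parseFromString(xml, "text/xml");
        const result = doc.evaluate(
          "//item[category/@domain='http://www.somelink.com/?category=4']",
          doc, null, XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);

        const serializer = new XMLSerializer();
        for (let i = 0; i < result.snapshotLength; i++) {
          console.log(serializer.serializeToString(result.snapshotItem(i)!));
        }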

    Read the article

  • Ant build from Android-generated build file fails - how to fix?

    - by Eno
    Building our Android app from Ant fails with this error:
    [apply] [apply] UNEXPECTED TOP-LEVEL ERROR:
    [apply] java.lang.OutOfMemoryError: Java heap space
    [apply] at java.util.HashMap.<init>(HashMap.java:209)
    [apply] at java.util.HashSet.<init>(HashSet.java:86)
    [apply] at com.android.dx.ssa.Dominators.compress(Dominators.java:96)
    [apply] at com.android.dx.ssa.Dominators.eval(Dominators.java:132)
    [apply] at com.android.dx.ssa.Dominators.run(Dominators.java:213)
    [apply] at com.android.dx.ssa.DomFront.run(DomFront.java:84)
    [apply] at com.android.dx.ssa.SsaConverter.placePhiFunctions(SsaConverter.java:265)
    [apply] at com.android.dx.ssa.SsaConverter.convertToSsaMethod(SsaConverter.java:51)
    [apply] at com.android.dx.ssa.Optimizer.optimize(Optimizer.java:100)
    [apply] at com.android.dx.ssa.Optimizer.optimize(Optimizer.java:74)
    [apply] at com.android.dx.dex.cf.CfTranslator.processMethods(CfTranslator.java:269)
    [apply] at com.android.dx.dex.cf.CfTranslator.translate0(CfTranslator.java:131)
    [apply] at com.android.dx.dex.cf.CfTranslator.translate(CfTranslator.java:85)
    [apply] at com.android.dx.command.dexer.Main.processClass(Main.java:297)
    [apply] at com.android.dx.command.dexer.Main.processFileBytes(Main.java:276)
    [apply] at com.android.dx.command.dexer.Main.access$100(Main.java:56)
    [apply] at com.android.dx.command.dexer.Main$1.processFileBytes(Main.java:228)
    [apply] at com.android.dx.cf.direct.ClassPathOpener.processArchive(ClassPathOpener.java:245)
    [apply] at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:130)
    [apply] at com.android.dx.cf.direct.ClassPathOpener.process(ClassPathOpener.java:108)
    [apply] at com.android.dx.command.dexer.Main.processOne(Main.java:245)
    [apply] at com.android.dx.command.dexer.Main.processAllFiles(Main.java:183)
    [apply] at com.android.dx.command.dexer.Main.run(Main.java:139)
    [apply] at com.android.dx.command.dexer.Main.main(Main.java:120)
    [apply] at com.android.dx.command.Main.main(Main.java:87)
    BUILD FAILED
    I've tried giving Ant more memory by setting ANT_OPTS="-Xms256m -Xmx512m". (This build machine has 1GB RAM.) Do I just need more memory or is there anything else I can try?

    Read the article

  • SQL SERVER – CXPACKET – Parallelism – Usual Solution – Wait Type – Day 6 of 28

    - by pinaldave
    CXPACKET has to be the most popular of all wait stats. I have commonly seen this wait stat as one of the top 5 wait stats in most of the systems with more than one CPU. Books On-Line: Occurs when trying to synchronize the query processor exchange iterator. You may consider lowering the degree of parallelism if contention on this wait type becomes a problem. CXPACKET Explanation: When a parallel operation is created for a SQL query, there are multiple threads for a single query. Each thread deals with a different set of the data (or rows). For various reasons, one or more of the threads lag behind, creating the CXPACKET wait stat. There is an organizer/coordinator thread (thread 0), which waits for all the threads to complete and gathers the results together to present on the client’s side. The organizer thread has to wait for all the threads to finish before it can move ahead. The wait by this organizer thread for slow threads to complete is called the CXPACKET wait. Note that not all the CXPACKET wait types are bad. You might experience a case when it totally makes sense. There might also be cases when this is unavoidable. If you remove this particular wait type for any query, then that query may run slower because the parallel operations are disabled for the query. Reducing CXPACKET wait: We cannot discuss reducing the CXPACKET wait without talking about the server workload type.
    OLTP: On a pure OLTP system, where the transactions are smaller and the queries are usually short and quick, set the “Maximum Degree of Parallelism” to 1 (one). This way it makes sure that the query never goes for parallelism and does not incur more engine overhead.
    EXEC sys.sp_configure N'max degree of parallelism', N'1'
    GO
    RECONFIGURE WITH OVERRIDE
    GO
    Data-warehousing / Reporting server: As queries will be running for a long time, it is advised to set the “Maximum Degree of Parallelism” to 0 (zero). This way most of the queries will utilize the parallel processors, and long running queries get a boost in their performance due to multiple processors.
    EXEC sys.sp_configure N'max degree of parallelism', N'0'
    GO
    RECONFIGURE WITH OVERRIDE
    GO
    Mixed System (OLTP & OLAP): Here is the challenge. The right balance has to be found. I have taken a very simple approach. I set the “Maximum Degree of Parallelism” to 2, which means the query still uses parallelism but only on 2 CPUs. However, I keep the “Cost Threshold for Parallelism” very high. This way, not all the queries will qualify for parallelism; only the queries with a higher cost will go for parallelism. I have found this to work best for a system that has OLTP queries and also where the reporting server is set up. Here, I am setting ‘Cost Threshold for Parallelism’ to a value of 25 (which is just for illustration); you can choose any value, which you can find by experimenting with the system. In the following script, I am setting the ‘Max Degree of Parallelism’ to 2, which indicates that a query with a higher cost (here, more than 25) will qualify as a parallel query and run on 2 CPUs. This implies that regardless of the number of CPUs, the query will select any two CPUs to execute itself.
    EXEC sys.sp_configure N'cost threshold for parallelism', N'25'
    GO
    EXEC sys.sp_configure N'max degree of parallelism', N'2'
    GO
    RECONFIGURE WITH OVERRIDE
    GO
    Read all the posts in the Wait Types and Queues series. Additionally, the comment by Jonathan Kehayias is a must-read.
    Note: The information presented here is from my experience and I in no way claim it to be accurate. I suggest you all read Books Online for further clarification. All the discussion of Wait Stats over here is generic and it varies from system to system. It is recommended that you test this on the development server before implementing on the production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: DMV, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • Financial Management: Why Move to the Cloud?

    - by Kathryn Perry
    A guest post by Terrance Wampler, Vice President, Financials Product Strategy, Oracle I’ve spent my career designing and developing financial management systems, most of it at Oracle. Every single day I either meet with our customers or talk to them on the phone. The time is usually spent discussing various business challenges facing CFOs and Controllers, who are running Oracle’s Financials. Lately, we’ve been talking a lot about cloud computing and whether it makes sense for finance to go to the cloud. Here are some pros and cons that might help you make that decision. Let’s start with the benefits of cloud solutions. The first is savings. With cloud services, you pay only for those commodities that you use. That makes you feel like you're getting better value for your money. Plus, you can preserve your cash for your core business and you can get a better matching of expenses and revenues. So, at the top of the list is lower total cost of ownership. The second point has to do with optimization. With cloud services, you’ll need less IT infrastructure so you can optimize your IT resources for better-value, higher-end projects. This also leads to greater financial visibility, where there's a clear cost for the set of services or features replaced by cloud services. And, the last benefit is what I call acceleration. You can save money by speeding up the initialization and deployment of the project. You don't have to deal with IT infrastructure and you can start implementing right away. We did a quick survey of about 70 CFOs at the CFO Summit last month in New York City. We asked them why they were looking at cloud services, and not necessarily just for financials. The No. 1 response was perceived lower cost of ownership. But of course there are risks to consider. The first thing most people think about in the cloud is security and ownership of data. So, will your data really be safe? Can you meet your own privacy policy requirements? Do you really want your private financial data exposed? Do you trust the provider? Is what you see really your data? Do you own it or is it managed by someone else? Security is a big concern that comes with an emotional component. The next thing in the risk category is reliability. Is the provider proven? You’re taking what you have control over – for example, standards and policies and internal service level agreements – away from your IT department and giving it to someone else. Will you still be able to adapt to shifts in your business? Will the provider be able to grow with your business effectively? Reliability means having a provider that can give you the service infrastructure that you need. And then there’s performance, which has two components in terms of risk. Going forward, will the provider be able to scale the infrastructure or service level if you have new employees or new businesses? And second, will the price you negotiate and the rate you lock in cover additional costs and rising service fees? Another piece is cost. What happens if you don't get the service level you want? What if you end the service? What happens, if after a few years, you send the service out for bid and change service? Can you move your data? Can you move the applications? Do the integrations work? These are cost components people don’t always take into account. And, the final piece is the business case. The perception is that you can get started really quickly with cloud. It has a perceived lower cost of total ownership and it feels cool because it's cloud. 
But do you have a good business case for moving to the cloud? Your total cost of ownership is calculated over three years; then you'll renew it, so your TCO horizon is really six years. Have you compared that to other internal services that you're offering? You might already have a product that you can run this new business or division on. In that same survey at the CFO Summit, the execs thought the biggest perceived risks were security of data, the ability to move data back, and the ability to create a business case to actually justify the risks. So that's the list of pros and cons. Not to leave you hanging, I will do another post on how to balance these pros and cons and make the right decision for your business.

    Read the article

  • Neo4j Performing shortest path calculations on stored data

    - by paddydub
    I would like to store the following graph data in the database:
    graph.makeEdge( "s", "c", "cost", (double) 7 );
    graph.makeEdge( "c", "e", "cost", (double) 7 );
    graph.makeEdge( "s", "a", "cost", (double) 2 );
    graph.makeEdge( "a", "b", "cost", (double) 7 );
    graph.makeEdge( "b", "e", "cost", (double) 2 );
    Then I want to run the Dijkstra algorithm from a web servlet, to perform shortest-path calculations using the stored graph data. Then I will print the path to an HTML file from the servlet.
    Dijkstra<Double> dijkstra = getDijkstra( graph, 0.0, "s", "e" );
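    This is not the Neo4j graph-algo API; just the same edge list in a self-contained TypeScript sketch, to show what the servlet's call should compute:

        // Sketch: the edge list above as an adjacency map, plus a tiny Dijkstra.
        type Edge = { to: string; cost: number };
        const graph: Record<string, Edge[]> = {
          s: [{ to: "c", cost: 7 }, { to: "a", cost: 2 }],
          c: [{ to: "e", cost: 7 }],
          a: [{ to: "b", cost: 7 }],
          b: [{ to: "e", cost: 2 }],
          e: [],
        };

        function dijkstra(start: string, goal: string): number {
          const dist: Record<string, number> = { [start]: 0 };
          const visited = new Set<string>();
          while (true) {
            // Pick the unvisited node with the smallest known distance.
            const next = Object.keys(dist)
              .filter((n) => !visited.has(n))
              .sort((x, y) => dist[x] - dist[y])[0];
            if (next === undefined) return Infinity; // goal unreachable
            if (next === goal) return dist[next];
            visited.add(next);
            for (const { to, cost } of graph[next] ?? []) {
              const d = dist[next] + cost;
              if (dist[to] === undefined || d < dist[to]) dist[to] = d;
            }
          }
        }

        console.log(dijkstra("s", "e")); // 11, via s -> a -> b -> e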

    Read the article

  • How to fix "OutOfMemoryError: java heap space" while compiling MonoDroid App in MonoDevelop

    - by Rodja
    Recently, when I try to compile one of my projects, I get the following error:
    Tool /usr/bin/java execution started with arguments: -jar /Applications/android-sdk-mac_x86/platform-tools/lib/dx.jar --no-strict --dex --output=obj/Debug/android/bin/classes.dex obj/Debug/android/bin/classes /Developer/MonoAndroid/usr/lib/mandroid/platforms/android-8/mono.android.jar FlurryAnalytics/Jars/FlurryAgent.jar Jars/android-support-v4.jar
    UNEXPECTED TOP-LEVEL ERROR:
    java.lang.OutOfMemoryError: Java heap space
    at com.android.dx.rop.code.RegisterSpecSet.<init>(RegisterSpecSet.java:49)
    at com.android.dx.rop.code.RegisterSpecSet.mutableCopy(RegisterSpecSet.java:383)
    at com.android.dx.ssa.LocalVariableInfo.mutableCopyOfStarts(LocalVariableInfo.java:169)
    at com.android.dx.ssa.LocalVariableExtractor.processBlock(LocalVariableExtractor.java:104)
    at com.android.dx.ssa.LocalVariableExtractor.doit(LocalVariableExtractor.java:90)
    at com.android.dx.ssa.LocalVariableExtractor.extract(LocalVariableExtractor.java:56)
    at com.android.dx.ssa.SsaConverter.convertToSsaMethod(SsaConverter.java:50)
    at com.android.dx.ssa.Optimizer.optimize(Optimizer.java:99)
    at com.android.dx.ssa.Optimizer.optimize(Optimizer.java:73)
    at com.android.dx.dex.cf.CfTranslator.processMethods(CfTranslator.java:273)
    at com.android.dx.dex.cf.CfTranslator.translate0(CfTranslator.java:134)
    at com.android.dx.dex.cf.CfTranslator.translate(CfTranslator.java:87)
    at com.android.dx.command.dexer.Main.processClass(Main.java:487)
    at com.android.dx.command.dexer.Main.processFileBytes(Main.java:459)
    at com.android.dx.command.dexer.Main.access$400(Main.java:67)
    at com.android.dx.command.dexer.Main$1.processFileBytes(Main.java:398)
    at com.android.dx.cf.direct.ClassPathOpener.processArchive(ClassPathOpener.java:245)
    at com.android.dx.cf.direct.ClassPathOpener.processOne(ClassPathOpener.java:131)
    at com.android.dx.cf.direct.ClassPathOpener.process(ClassPathOpener.java:109)
    at com.android.dx.command.dexer.Main.processOne(Main.java:422)
    at com.android.dx.command.dexer.Main.processAllFiles(Main.java:333)
    at com.android.dx.command.dexer.Main.run(Main.java:209)
    at com.android.dx.command.dexer.Main.main(Main.java:174)
    at com.android.dx.command.Main.main(Main.java:91)
    Other projects build as expected. I think I need to increase the heap size for this Java build step, but how?

    Read the article

  • Vector of Object Pointers, general help and confusion

    - by Staypuft
    Have a homework assignment in which I'm supposed to create a vector of pointers to objects. Later on down the road, I'll be using inheritance/polymorphism to extend the class to include fees for two-day delivery, next-day air, etc. However, that is not my concern right now. The final goal of the current program is to just print out every object's content in the vector (name & address) and find its shipping cost (weight*cost). My trouble is not with the logic; I'm just confused on a few points related to objects/pointers/vectors in general. But first my code. I basically cut out everything that does not matter right now; int main will have user input, but right now I hard-coded two examples.
    #include <iostream>
    #include <string>
    #include <vector>
    using namespace std;

    class Package {
    public:
        Package(); //default constructor
        Package(string d_name, string d_add, string d_zip, string d_city, string d_state, double c, double w);
        double calculateCost(double, double);
        void Print();
        ~Package();
    private:
        string dest_name;
        string dest_address;
        string dest_zip;
        string dest_city;
        string dest_state;
        double weight;
        double cost;
    };

    Package::Package() {
        cout<<"Constucting Package Object with default values: "<<endl;
        string dest_name="";
        string dest_address="";
        string dest_zip="";
        string dest_city="";
        string dest_state="";
        double weight=0;
        double cost=0;
    }

    Package::Package(string d_name, string d_add, string d_zip, string d_city, string d_state, string r_name, string r_add, string r_zip, string r_city, string r_state, double w, double c){
        cout<<"Constucting Package Object with user defined values: "<<endl;
        string dest_name=d_name;
        string dest_address=d_add;
        string dest_zip=d_zip;
        string dest_city=d_city;
        string dest_state=d_state;
        double weight=w;
        double cost=c;
    }

    Package::~Package() {
        cout<<"Deconstructing Package Object!"<<endl;
        delete Package;
    }

    double Package::calculateCost(double x, double y){
        return x+y;
    }

    int main(){
        double cost=0;
        vector<Package*> shipment;
        cout<<"Enter Shipping Cost: "<<endl;
        cin>>cost;
        shipment.push_back(new Package("tom r","123 thunder road", "90210", "Red Bank", "NJ", cost, 10.5));
        shipment.push_back(new Package ("Harry Potter","10 Madison Avenue", "55555", "New York", "NY", cost, 32.3));
        return 0;
    }
    So my questions are: I'm told I have to use a vector of Object Pointers, not Objects. Why? My assignment calls for it specifically, but I'm also told it won't work otherwise. Where should I be creating this vector? Should it be part of my Package class? How do I go about adding objects into it then? Do I need a copy constructor? Why? What's the proper way to deconstruct my vector of object pointers? Any help would be appreciated. I've searched for a lot of related articles on here and I realize that my program will have memory leaks. Using one of the specialized ptrs from boost:: is not an option for me. Right now, I'm more concerned with getting the foundation of my program built. That way I can actually get down to the functionality I need to create. Thanks.

    Read the article

  • Play a sound based on the button pressed on the IPhone/xcode.

    - by slickplaid
    I'm trying to play a sound based on which button is pressed, using AVAudioPlayer. (This is not a soundboard or fart app.) I have linked all buttons using this code in the header file: @interface appViewController : UIViewController <AVAudioPlayerDelegate> { AVAudioPlayer *player; UIButton *C4; UIButton *Bb4; UIButton *B4; UIButton *A4; UIButton *Ab4; UIButton *As4; UIButton *G3; UIButton *Gb3; UIButton *Gs3; UIButton *F3; UIButton *Fs3; UIButton *E3; UIButton *Eb3; UIButton *D3; UIButton *Db3; UIButton *Ds3; UIButton *C3; UIButton *Cs3; } @property (nonatomic, retain) AVAudioPlayer *player; @property (nonatomic, retain) IBOutlet UIButton *C4; @property (nonatomic, retain) IBOutlet UIButton *B4; @property (nonatomic, retain) IBOutlet UIButton *Bb4; @property (nonatomic, retain) IBOutlet UIButton *A4; @property (nonatomic, retain) IBOutlet UIButton *Ab4; @property (nonatomic, retain) IBOutlet UIButton *As4; @property (nonatomic, retain) IBOutlet UIButton *G3; @property (nonatomic, retain) IBOutlet UIButton *Gb3; @property (nonatomic, retain) IBOutlet UIButton *Gs3; @property (nonatomic, retain) IBOutlet UIButton *F3; @property (nonatomic, retain) IBOutlet UIButton *Fs3; @property (nonatomic, retain) IBOutlet UIButton *E3; @property (nonatomic, retain) IBOutlet UIButton *Eb3; @property (nonatomic, retain) IBOutlet UIButton *D3; @property (nonatomic, retain) IBOutlet UIButton *Db3; @property (nonatomic, retain) IBOutlet UIButton *Ds3; @property (nonatomic, retain) IBOutlet UIButton *C3; @property (nonatomic, retain) IBOutlet UIButton *Cs3; - (IBAction) playNote; @end Buttons are all linked to the event "playNote" in Interface Builder and each note is linked to the proper referencing outlet according to note name. All *.mp3 sound files are named after the UIButton name (i.e., C3 == C3.mp3). In my implementation file, I have this to play only one note when the C3 button is pressed:
    #import "sonicfitViewController.h"
    @implementation appViewController
    @synthesize C3, Cs3, D3, Ds3, Db3, E3, Eb3, F3, Fs3, G3, Gs3, A4, Ab4, As4, B4, Bb4, C4;

    // Implement viewDidLoad to do additional setup after loading the view, typically from a nib.
    - (void)viewDidLoad {
        NSString *path = [[NSBundle mainBundle] pathForResource:@"3C" ofType:@"mp3"];
        NSLog(@"path: %@", path);
        NSURL *file = [[NSURL alloc] initFileURLWithPath:path];
        AVAudioPlayer *p = [[AVAudioPlayer alloc] initWithContentsOfURL:file error:nil];
        [file release];
        self.player = p;
        [p release];
        [player prepareToPlay];
        [player setDelegate:self];
        [super viewDidLoad];
    }

    - (IBAction) playNote {
        [self.player play];
    }
    Now, with the above I have two issues: First, the NSLog reports NULL and the app crashes when trying to play the file. I have added the MP3s to the resources folder and they have been copied, not just linked. They are not in a subfolder under the resources folder. Secondly, how can I set it up so that when, say, button C3 is pressed, it plays C3.mp3 and F3 plays F3.mp3, without writing duplicate lines of code for each different button? playNote should be like NSString *path = [[NSBundle mainBundle] pathForResource:nameOfButton ofType:@"mp3"]; instead of defining it specifically (@"C3"). Is there a better way of doing this, and why does the *path report NULL and crash when I load the app? I'm pretty sure it's something as simple as adding additional variable inputs to - (IBAction) playNote:buttonName and putting all the code to call AVAudioPlayer in the playNote function, but I'm unsure of the code to do this.

    Read the article

  • Policy-based template design: How to access certain policies of the class?

    - by dehmann
    I have a class that uses several policies that are templated. It is called Dish in the following example. I store many of these Dishes in a vector (using a pointer to a simple base class), but then I'd like to extract and use them. But I don't know their exact types. Here is the code; it's a bit long, but really simple:
    #include <iostream>
    #include <vector>

    struct DishBase {
        int id;
        DishBase(int i) : id(i) {}
    };

    std::ostream& operator<<(std::ostream& out, const DishBase& d) {
        out << d.id;
        return out;
    }

    // Policy-based class:
    template<class Appetizer, class Main, class Dessert>
    class Dish : public DishBase {
        Appetizer appetizer_;
        Main main_;
        Dessert dessert_;
    public:
        Dish(int id) : DishBase(id) {}
        const Appetizer& get_appetizer() { return appetizer_; }
        const Main& get_main() { return main_; }
        const Dessert& get_dessert() { return dessert_; }
    };

    struct Storage {
        typedef DishBase* value_type;
        typedef std::vector<value_type> Container;
        typedef Container::const_iterator const_iterator;
        Container container;
        Storage() {
            container.push_back(new Dish<int,double,float>(0));
            container.push_back(new Dish<double,int,double>(1));
            container.push_back(new Dish<int,int,int>(2));
        }
        ~Storage() {
            // delete objects
        }
        const_iterator begin() { return container.begin(); }
        const_iterator end() { return container.end(); }
    };

    int main() {
        Storage s;
        for(Storage::const_iterator it = s.begin(); it != s.end(); ++it){
            std::cout << **it << std::endl;
            std::cout << "Dessert: " << *it->get_dessert() << std::endl; // ??
        }
        return 0;
    }
    The tricky part is here, in the main() function: std::cout << "Dessert: " << *it->get_dessert() << std::endl; // ?? How can I access the dessert? I don't even know the Dessert type (it is templated), let alone the complete type of the object that I'm getting from the storage. This is just a toy example, but I think my code reduces to this. I'd just like to pass those Dish classes around, and different parts of the code will access different parts of it (in the example: its appetizer, main dish, or dessert).

    Read the article

  • How to get updated automatically WPF TreeViewItems with values based on .Net class properties?

    - by ProgrammerManiac
    Good morning. I have a class with data, derived from INotifyPropertyChanged. The data come from a background thread which searches for files with a certain extension in certain locations. A public property of the class reacts to the PropertyChanged event by updating the data on a separate thread. In addition, a TreeView based on HierarchicalDataTemplates is described in XAML. Each HierarchicalDataTemplate is supplied ItemsSource="{Binding FoundFilePaths, Mode=OneWay, NotifyOnTargetUpdated=True}". <HierarchicalDataTemplate DataType = "{x:Type lightvedo:FilesInfoStore}" ItemsSource="{Binding FoundFilePaths, Mode=OneWay, NotifyOnTargetUpdated=True}"> <StackPanel x:Name ="TreeNodeStackPanel" Orientation="Horizontal"> <TextBlock Margin="5,5,5,5" TargetUpdated="TextBlockExtensions_TargetUpdated"> <TextBlock.Text> <MultiBinding StringFormat="Files with Extension {0}"> <Binding Path="FileExtension"/> </MultiBinding> </TextBlock.Text> </TextBlock> <Button x:Name="OpenFolderForThisFiles" Click="OnOpenFolderForThisFiles_Click" Margin="5, 3, 5, 3" Width="22" Background="Transparent" BorderBrush="Transparent" BorderThickness="0.5"> <Image Source="images\Folder.png" Height="16" Width="20" > </Image> </Button> </StackPanel> </HierarchicalDataTemplate> <HierarchicalDataTemplate DataType="{x:Type lightvedo:FilePathsStore}"> <TextBlock Text="{Binding FilePaths, Mode=OneWay, NotifyOnTargetUpdated=True}" TargetUpdated="OnTreeViewNodeChildren_Update" /> </HierarchicalDataTemplate> </TreeView.Resources> <TreeView.RenderTransform> <TransformGroup> <ScaleTransform/> <SkewTransform AngleX="-0.083"/> <RotateTransform/> <TranslateTransform X="-0.249"/> </TransformGroup> </TreeView.RenderTransform> <TreeView.BorderBrush> <LinearGradientBrush EndPoint="1,0.5" StartPoint="0,0.5"> <GradientStop Color="#FF74591F" Offset="0" /> <GradientStop Color="#FF9F7721" Offset="1" /> <GradientStop Color="#FFD9B972" Offset="0.49" /> </LinearGradientBrush> </TreeView.BorderBrush> </TreeView> Q: Why does the data from the class derived from INotifyPropertyChanged not affect the display of the tree items? Do I understand it correctly that INotifyPropertyChanged will make the TreeViewItems redraw automatically, or do I need to carry out this operation manually? Currently the TreeViewItems are not updated and PropertyChanged is always null. It feels like there are no subscribers to the OnPropertyChanged event.

    Read the article

  • Combination of Operating Mode and Commit Strategy

    - by Kevin Yang
    If you want to load one source into multiple targets, you may also want to ensure that every row from the source affects all targets uniformly (or separately). Let’s consider the Example Mapping below. Suppose a row from SOURCE causes different changes in multiple targets (TARGET_1, TARGET_2 and TARGET_3); for example, it can be successfully inserted into TARGET_1 and TARGET_3, but fails to be inserted into TARGET_2, and the current Mapping Property TLO (target load order) is “TARGET_1 -> TARGET_2 -> TARGET_3”. What should Oracle Warehouse Builder do in order to commit the appropriate data to all affected targets at the same time? If it doesn’t behave as you intended, the data could become inaccurate and possibly unusable. (Figure: Example Mapping) In OWB, we can use Mapping Configuration Commit Strategies and Operating Modes together to achieve this kind of requirement. Below we will explore the combination of these two features and how they affect the results in the target tables. Before going to the example, let’s review some of the terms we will be using (details can be found in the white paper Oracle® Warehouse Builder Data Modeling, ETL, and Data Quality Guide 11g Release 2):
    Operating Modes:
    Set-Based Mode: Warehouse Builder generates a single SQL statement that processes all data and performs all operations.
    Row-Based Mode: Warehouse Builder generates statements that process data row by row. The select statement is in a SQL cursor. All subsequent statements are PL/SQL.
    Row-Based (Target Only) Mode: Warehouse Builder generates a cursor select statement and attempts to include as many operations as possible in the cursor. For each target, Warehouse Builder inserts each row into the target separately.
    Commit Strategies:
    Automatic: Warehouse Builder loads and then automatically commits data based on the mapping design. If the mapping has multiple targets, Warehouse Builder commits and rolls back each target separately and independently of other targets. Use the automatic commit when the consequences of multiple targets being loaded unequally are not great or are irrelevant.
    Automatic correlated: A specialized type of automatic commit that applies to PL/SQL mappings with multiple targets only. Warehouse Builder considers all targets collectively and commits or rolls back data uniformly across all targets. Use the correlated commit when it is important to ensure that every row in the source affects all affected targets uniformly.
    Manual: Select manual commit control for PL/SQL mappings when you want to interject complex business logic, perform validations, or run other mappings before committing data.
    Combination of the commit strategy and operating mode: To understand the effects of each combination of operating mode and commit strategy, I’ll illustrate using the following example Mapping. First we insert 100 rows into the SOURCE table and make sure that the 99th row and 100th row have the same ID value. Then we create a unique key constraint on the ID column of the TARGET_2 table. So while running the example mapping, OWB tries to load all 100 rows to each of the targets, but the mapping should fail to load the 100th row to TARGET_2, because it would violate the unique key constraint of table TARGET_2. 
    With different combinations of commit strategy and operating mode, here are the results:

    Set-Based / Correlated Commit
    (figures: mapping configuration and result)
    What's happening: A single error anywhere in the mapping triggers the rollback of all data. When OWB encounters the error inserting into TARGET_2, it reports an error for the table and does not load the row. OWB rolls back all the rows inserted into TARGET_1 and does not attempt to load rows into TARGET_3. No rows are added to any of the target tables.

    Row-Based / Correlated Commit
    (figures: mapping configuration and result)
    What's happening: OWB evaluates each row separately and loads it into all three targets. Loading continues in this way until OWB encounters an error loading row 100 into TARGET_2. OWB reports the error and does not load the row. It rolls back row 100 previously inserted into TARGET_1 and does not attempt to load row 100 into TARGET_3. Then, if there are remaining rows, OWB continues loading them, resuming with TARGET_1. The mapping completes with 99 rows inserted into each target.

    Set-Based / Automatic Commit
    (figures: mapping configuration and result)
    What's happening: When OWB encounters the error inserting into TARGET_2, it does not load any rows into that table and reports an error for it. It does, however, continue to insert rows into TARGET_3, and it does not roll back the rows previously inserted into TARGET_1. The mapping completes with one error message for TARGET_2, no rows inserted into TARGET_2, and 100 rows inserted into each of TARGET_1 and TARGET_3.

    Row-Based / Automatic Commit
    (figures: mapping configuration and result)
    What's happening: OWB evaluates each row separately for loading into the targets. Loading continues in this way until OWB encounters an error loading row 100 into TARGET_2 and reports the error. OWB does not roll back row 100 from TARGET_1 and still inserts it into TARGET_3. If there are remaining rows, it continues to load them. The mapping completes with 99 rows inserted into TARGET_2 and 100 rows inserted into each of the other targets.

    Notes:
    - Automatic correlated commit is not applicable to row-based (target only) mode. If you design a mapping with the row-based (target only) and correlated commit combination, OWB runs the mapping but does not perform the correlated commit.
    - In set-based mode, correlated commit may impact the size of your rollback segments. Space for rollback segments may be a concern when you merge data (insert/update or update/insert).
    - Correlated commit operates transparently with PL/SQL bulk processing code.
    - The correlated commit strategy is not available for mappings run in any mode that are configured for Partition Exchange Loading or that include a Queue, Match Merge, or Table Function operator.

    If you want to practice in your own environment, you can follow these steps:
    1. Import the MDL file: commit_operating_mode.mdl
    2. Fix the location for the Oracle module ORCL and deploy all tables under it (a sketch of equivalent DDL appears after these steps).
    3. Insert sample records into the SOURCE table, using the PL/SQL code below:

        begin
            for i in 1..99 loop
                insert into source values(i, 'col_'||i);
            end loop;
            insert into source values(99, 'col_99');
        end;
        /

    4. Configure MAPPING_1 with whichever combination of operating mode and commit strategy you want to test, and make sure the mapping's TLO (target load order) feature is enabled.
    5. Deploy the mapping MAPPING_1.
    6. Run the mapping and check the result.
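    If you prefer to create the tables by hand instead of deploying them from the MDL, the table shapes implied by the sample inserts are roughly as follows. The column names and types here are assumptions inferred from the insert statements; the MDL file remains the authoritative definition:

        -- Assumed table shapes, inferred from the sample inserts; the MDL is authoritative.
        create table source   (id number, col varchar2(30));
        create table target_1 (id number, col varchar2(30));
        create table target_2 (id number, col varchar2(30));
        create table target_3 (id number, col varchar2(30));

        -- The unique key on TARGET_2.ID is what makes the 100th row fail.
        alter table target_2 add constraint target_2_uk unique (id);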

    Read the article

  • How do I make a simple image-based button with visual states in Silverlight 3?

    - by Jacob
    At my previous company, we created our RIAs using Flex with graphical assets created in Flash. In Flash, you could simply lay out your graphics for different states, i.e. rollover, disabled. Now, I'm working on a Silverlight 3 project. I've been given a bunch of images that need to serve as the graphics for buttons that have a rollover, pressed, and normal state. I cannot figure out how to simply create buttons with different images for different visual states in Visual Studio 2008 or Expression Blend 3. Here's where I am currently. My button is defined like this in the XAML:

        <Button Style="{StaticResource MyButton}"/>

    The MyButton style appears as follows:

        <Style x:Key="MyButton" TargetType="Button">
            <Setter Property="Template">
                <Setter.Value>
                    <ControlTemplate TargetType="Button">
                        <Image Source="/Assets/Graphics/mybtn_up.png" Width="54" Height="24">
                            <VisualStateManager.VisualStateGroups>
                                <VisualStateGroup x:Name="FocusStates">
                                    <VisualState x:Name="Focused"/>
                                    <VisualState x:Name="Unfocused"/>
                                </VisualStateGroup>
                                <VisualStateGroup x:Name="CommonStates">
                                    <VisualState x:Name="Normal"/>
                                    <VisualState x:Name="MouseOver"/>
                                    <VisualState x:Name="Pressed"/>
                                    <VisualState x:Name="Disabled"/>
                                </VisualStateGroup>
                            </VisualStateManager.VisualStateGroups>
                        </Image>
                    </ControlTemplate>
                </Setter.Value>
            </Setter>
        </Style>

    I cannot figure out how to assign a different template to different states, nor how to change the image's source based on which state I'm in. How do I do this? Also, if you know of any good documentation that describes how styles work in Silverlight, that would be great. All of the search results I can come up with are frustratingly unhelpful.

    Edit: I found a way to change the image via storyboards, like this:

        <Style x:Key="MyButton" TargetType="Button">
            <Setter Property="Template">
                <Setter.Value>
                    <ControlTemplate TargetType="Button">
                        <Image Source="/Assets/Graphics/mybtn_up.png" Width="54" Height="24" x:Name="Image">
                            <VisualStateManager.VisualStateGroups>
                                <VisualStateGroup x:Name="FocusStates">
                                    <VisualState x:Name="Focused"/>
                                    <VisualState x:Name="Unfocused"/>
                                </VisualStateGroup>
                                <VisualStateGroup x:Name="CommonStates">
                                    <VisualState x:Name="Normal"/>
                                    <VisualState x:Name="MouseOver">
                                        <Storyboard Storyboard.TargetName="Image" Storyboard.TargetProperty="Source">
                                            <ObjectAnimationUsingKeyFrames>
                                                <DiscreteObjectKeyFrame KeyTime="0" Value="/Assets/Graphics/mybtn_over.png"/>
                                            </ObjectAnimationUsingKeyFrames>
                                        </Storyboard>
                                    </VisualState>
                                    <VisualState x:Name="Pressed">
                                        <Storyboard Storyboard.TargetName="Image" Storyboard.TargetProperty="Source">
                                            <ObjectAnimationUsingKeyFrames>
                                                <DiscreteObjectKeyFrame KeyTime="0" Value="/Assets/Graphics/mybtn_active.png"/>
                                            </ObjectAnimationUsingKeyFrames>
                                        </Storyboard>
                                    </VisualState>
                                    <VisualState x:Name="Disabled"/>
                                </VisualStateGroup>
                            </VisualStateManager.VisualStateGroups>
                        </Image>
                    </ControlTemplate>
                </Setter.Value>
            </Setter>
        </Style>

    However, this seems like a strange way of doing things to me. Is there a more standard way of accomplishing this?

    Read the article

  • Oracle Announces Oracle Exadata X3 Database In-Memory Machine

    - by jgelhaus
    Fourth Generation Exadata X3 Systems are Ideal for High-End OLTP, Large Data Warehouses, and Database Clouds; Eighth-Rack Configuration Offers New Low-Cost Entry Point

    ORACLE OPENWORLD, SAN FRANCISCO – October 1, 2012

    News Facts
    - During his opening keynote address at Oracle OpenWorld, Oracle CEO Larry Ellison announced the Oracle Exadata X3 Database In-Memory Machine, the latest generation of its Oracle Exadata Database Machines.
    - The Oracle Exadata X3 Database In-Memory Machine is a key component of the Oracle Cloud.
    - Oracle Exadata X3-2 Database In-Memory Machine and Oracle Exadata X3-8 Database In-Memory Machine can store up to hundreds of Terabytes of compressed user data in Flash and RAM memory, virtually eliminating the performance overhead of reads and writes to slow disk drives, making Exadata X3 systems the ideal database platforms for the varied and unpredictable workloads of cloud computing.
    - In order to realize the highest performance at the lowest cost, the Oracle Exadata X3 Database In-Memory Machine implements a mass memory hierarchy that automatically moves all active data into Flash and RAM memory, while keeping less active data on low-cost disks.
    - With a new Eighth-Rack configuration, the Oracle Exadata X3-2 Database In-Memory Machine delivers a cost-effective entry point for smaller workloads, testing, development and disaster recovery systems, and is a fully redundant system that can be used with mission-critical applications.

    Next-Generation Technologies Deliver Dramatic Performance Improvements
    Oracle Exadata X3 Database In-Memory Machines use a combination of scale-out servers and storage, InfiniBand networking, smart storage, PCI Flash, smart memory caching, and Hybrid Columnar Compression to deliver extreme performance and availability for all Oracle Database workloads. Oracle Exadata X3 Database In-Memory Machine systems leverage next-generation technologies to deliver significant performance enhancements, including:
    - Four times the Flash memory capacity of the previous generation, with up to 40 percent faster response times and 100 GB/second data scan rates. Combined with Exadata's unique Hybrid Columnar Compression capabilities, hundreds of Terabytes of user data can now be managed entirely within Flash;
    - 20 times more capacity for database writes through updated Exadata Smart Flash Cache software. The new Exadata Smart Flash Cache software also runs on previous-generation Exadata systems, increasing their capacity for writes tenfold;
    - 33 percent more database CPU cores in the Oracle Exadata X3-2 Database In-Memory Machine, using the latest 8-core Intel® Xeon E5-2600 series of processors;
    - Expanded 10Gb Ethernet connectivity to the data center in the Oracle Exadata X3-2, providing 40 10Gb network ports per rack for connecting users and moving data;
    - Up to 30 percent reduction in power and cooling.

    Configured for Your Business, Available Today
    Oracle Exadata X3-2 Database In-Memory Machine systems are available in a Full-Rack, Half-Rack, Quarter-Rack, and the new low-cost Eighth-Rack configuration to satisfy the widest range of applications. Oracle Exadata X3-8 Database In-Memory Machine systems are available in a Full-Rack configuration, and both X3 systems enable multi-rack configurations for virtually unlimited scalability. Oracle Exadata X3-2 and X3-8 Database In-Memory Machines are fully compatible with prior Exadata generations, and existing systems can also be upgraded with Oracle Exadata X3-2 servers. Oracle Exadata X3 Database In-Memory Machine systems can be used immediately with any application certified with Oracle Database 11g R2 and Oracle Real Application Clusters, including SAP, Oracle Fusion Applications, Oracle's PeopleSoft, Oracle's Siebel CRM, the Oracle E-Business Suite, and thousands of other applications.

    Supporting Quotes
    "Forward-looking enterprises are moving towards Cloud Computing architectures," said Andrew Mendelsohn, senior vice president, Oracle Database Server Technologies. "Oracle Exadata's unique ability to run any database application on a fully scale-out architecture using a combination of massive memory for extreme performance and low-cost disk for high capacity delivers the ideal solution for Cloud-based database deployments today."

    Supporting Resources
    - Oracle Press Release
    - Oracle Exadata Database Machine
    - Oracle Exadata X3-2 Database In-Memory Machine
    - Oracle Exadata X3-8 Database In-Memory Machine
    - Oracle Database 11g
    - Follow Oracle Database via Blog, Facebook and Twitter
    - Oracle OpenWorld 2012
    - Oracle OpenWorld 2012 Keynotes
    - Like Oracle OpenWorld on Facebook
    - Follow Oracle OpenWorld on Twitter
    - Oracle OpenWorld Blog
    - Oracle OpenWorld on LinkedIn
    - Mark Hurd's keynote with Andy Mendelsohn and Juan Loaiza (watch for the replay to be available soon at http://www.youtube.com/user/Oracle or http://www.oracle.com/openworld/live/on-demand/index.html)

    Read the article

  • Common business drivers that lead to creating and sustaining a project

    Common business drivers that lead to creating and sustaining a project include, but are not limited to: cost reduction, increased return on investment (ROI), reduced time to market, increased speed and efficiency, increased security, and increased interoperability. These drivers primarily focus on streamlining operations and reducing cost to make a company more profitable with less overhead. According to Answers.com, cost reduction is defined as reducing costs to improve profitability, and it may be implemented when a company is having financial problems or to prevent them. ROI is defined as the amount of value received relative to the amount of money invested, according to PayPerClickList.com. With the ever-increasing demands on businesses to compete in today's market, companies are constantly striving to reduce the time it takes for a concept to become a product and be sold within the global marketplace. In business, some people say time is money, so if a project can reduce the time a business process takes, it in fact saves the company money, which is always good for the bottom line. The Social Security Administration states that data security is the protection of data from accidental or intentional but unauthorized modification or destruction. Interoperability is the capability of a system or subsystem to interact with other systems or subsystems.

    In my personal opinion, these drivers would not really differ for a profit-based organization compared to a non-profit organization. Both corporate entities strive to reduce cost and to keep operating budgets low. However, the reasoning behind why they want to achieve this does contrast. Typically, profit-based organizations strive to increase revenue and market share so that the business can grow. Alternatively, not-for-profit businesses are more interested in increasing their reach within communities, whether it is to increase annual donations or to invest in the lives of others.

    Success or failure of a project can be determined by one or more of these drivers, based on the scope of the project and the company's priorities associated with each driver. In addition, if a project attempts to incorporate multiple drivers and is only partially successful, the project might still be considered a success based on how close it came to meeting each of the priorities. Continuous evaluation of the project could lead to a decision to abort it, because it is expected to fail before completion. Evaluations should be executed after the completion of every software development process stage. Pfleeger notes that software development process stages include:
    - Requirements Analysis and Definition
    - System Design
    - Program Design
    - Program Implementation
    - Unit Testing
    - Integration Testing
    - System Delivery
    - Maintenance

    Each evaluation at every stage should consider all the business drivers included in the scope of the project and assess how close each is expected to come to meeting expectations. In addition, minimum requirements of acceptance should be included in the scope of the project and should be reevaluated as the project progresses, to ensure that the project still makes good economic sense to continue. If the project falls below these benchmarks, then it should be put on hold until it makes sense again, or it should be aborted because it does not meet the business driver requirements.

    References

    Cost Reduction Program. (n.d.). Dictionary of Accounting Terms. Retrieved July 19, 2009, from Answers.com Web site: http://www.answers.com/topic/cost-reduction-program
    Government Information Exchange. (n.d.). Government Information Exchange Glossary. Retrieved July 19, 2009, from SSA.gov Web site: http://www.ssa.gov/gix/definitions.html
    PayPerClickList.com. (n.d.). Glossary Term R - Pay Per Click List. Retrieved July 19, 2009, from PayPerClickList.com Web site: http://www.payperclicklist.com/glossary/termr.html
    Pfleeger, S., & Atlee, J. (2009). Software Engineering: Theory and Practice. Boston: Prentice Hall.
    Veluchamy, Thiyagarajan. (n.d.). Glossary « Thiyagarajan Veluchamy's Blog. Retrieved July 19, 2009, from Thiyagarajan.WordPress.com Web site: http://thiyagarajan.wordpress.com/glossary/

    Read the article

  • Oracle(R) Buys Pre-Paid Software Assets From eServGlobal

    - by Paulo Folgado
    Oracle to Deliver Scalable Carrier-Grade Pre-Paid Solution Based on Open, Flexible IT-Based Platform

    News Facts
    - Oracle has agreed to acquire certain pre-paid assets of eServGlobal, a provider of advanced IT-based, pre-paid charging solutions for the communications industry.
    - eServGlobal's Universal Service Platform (USP) includes a pre-paid charging application, a network-services platform and a messaging gateway. The ChargingMax, NumberMax, uVOMS, MessageMax, PromoMax Express and Social Relationship Management software currently supports more than 25 tier-one customers, including the world's largest IT-based installation of pre-paid services.
    - The combination of Oracle Communications Billing and Revenue Management and the USP applications is expected to accelerate the shift from network- to IT-based pre-paid systems by providing the first convergent, open IT-based platform from a leading business software and hardware systems company.
    - Customers are expected to benefit from traditional carrier-grade, pre-paid service authorization with IT-grade flexibility that supports any service or network, is easier to deploy and maintain, and delivers an overall lower total cost of ownership.
    - The transaction is expected to close in the second half of this year.

    Supporting Quote
    - "The majority of mobile phone users worldwide use pre-paid plans, and that number is growing exponentially. Oracle Communications applications combined with the pre-paid software assets from eServGlobal will provide our customers with highly available and scalable carrier-grade, pre-paid software on an open, convergent platform. This will enable our customers to deliver traditional pre-paid voice services and easily introduce hybrid pre-paid and post-paid plans with targeted pricing, promotions and service bundles that include voice, data and network services," said Liam Maxwell, vice president of products, Oracle Communications.

    Supporting Resources
    - About Oracle and eServGlobal
    - USP General Presentation
    - FAQ

    Read the article

  • OEG11gR2 integration with OES11gR2 Authorization with condition

    - by pgoutin
    Introduction
    This OES use-case was originally defined by Subbu Devulapalli (http://accessmanagement.wordpress.com/). Based on this OES museum use-case, I have developed an OEG 11gR2 policy able to deal with OES authorization with conditions. From an OEG point of view, the way to deal with an OES condition is to send some environmental/context attributes along with the OES request.

    Museum Use-Case
    All paintings in the museum have security sensors; an alarm goes off when a person comes too close to a painting. The employee designated for maintenance needs to use their ID and disable the alarm before maintenance. You are the Security Administrator for the museum and you have been tasked with creating authorization policies to manage authorization for different paintings. Your first task is to understand how paintings are organized. Asking around, you are surprised to see that there is no formal process in place, so you need to start from scratch. The museum tracks the following attributes for each painting:
    1. Name of the work
    2. Painter
    3. Condition (good/poor)
    4. Cost

    You compile the list of paintings:

        Name of Painting    Painter              Paint Condition    Cost
        Mona Lisa           Leonardo da Vinci    Good               100
        Magi                Leonardo da Vinci    Poor               40
        Starry Night        Vincent Van Gogh     Poor               75
        Still Life          Vincent Van Gogh     Good               25

    Being a software geek who doesn't (yet) understand art, you feel that the price (or insurance price) of a painting is the most important criterion. So you decide that, based on years of experience, employees can be tasked with maintaining different paintings: paintings costing over 50 should only be handled by employees with over 20 years of experience, and employees with less than 10 years of experience should not handle any painting.

    Let us start with policy modeling. All paintings have a common set of attributes and actions, so it will be good to have them under a single resource type. Based on this resource type we will create the actual resources. So our high-level model is:
    1. Resource type: Painting, which has the action manage and the following four attributes: a) Name of the work, b) Painter, c) Condition (good/poor), d) Cost.
    2. To keep things simple, let's use the painting name for the resource name (in the real world you would use a unique identifier, because in the future we may end up with more than one painting with the same name).
    3. Create resources based on the previous table.
    4. Create an identity attribute Experience (integer).
    5. Create the following authorization policies:
       a) Allow employees with over 20 years of experience to access all paintings.
       b) Allow employees with 10-20 years of experience to access paintings which cost less than 50.
       c) Deny access to all paintings for employees with less than 10 years of experience.

    OES Authorization Configuration
    We need to create two authorization policies with specific conditions:
    a) Allow employees with over 20 years of experience to access all paintings.
    b) Allow employees with 10-20 years of experience to access paintings which cost less than 50.

    We don't need an explicit policy to deny access to all paintings for employees with less than 10 years of experience, because Oracle Entitlements Server automatically denies access if there is no matching policy. An illustrative sketch of this decision logic follows.
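    To see what the two policies (plus the implicit deny) actually decide, here is a plain-Java rendering of the logic. This is illustrative only, not OES API code, and all names are hypothetical:

        // Illustrative only: plain-Java rendering of the museum policies,
        // not OES API code. Class and method names are hypothetical.
        public class MuseumPolicySketch {

            /** Returns true if access is allowed; false otherwise (default deny). */
            public static boolean isManageAllowed(int yearsOfExperience, int paintingCost) {
                // Policy a: over 20 years of experience -> all paintings
                if (yearsOfExperience > 20) {
                    return true;
                }
                // Policy b: 10-20 years of experience -> only paintings costing less than 50
                if (yearsOfExperience >= 10 && paintingCost < 50) {
                    return true;
                }
                // No matching policy: OES denies by default
                return false;
            }

            public static void main(String[] args) {
                System.out.println(isManageAllowed(25, 100)); // true  (policy a)
                System.out.println(isManageAllowed(15, 40));  // true  (policy b)
                System.out.println(isManageAllowed(15, 75));  // false (default deny)
                System.out.println(isManageAllowed(5, 25));   // false (default deny)
            }
        }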
    OEG Policy
    The OEG policy looks like the following: (figure: OEG policy flow)
    The 11g Authorization filter configuration is similar to: (figure: OES 11g Authorization filter settings)
    The ${PAINTING_NAME} and ${USER_EXPERIENCE} variables are initialized by the "Retrieve from the HTTP header" filters for testing purposes. That is to say, under Service Explorer we need to provide the two attributes "Experience" and "Painting" expected by the OES 11g Authorization filter described above; a hypothetical test request is sketched below.
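    For example, a test request carrying the two attributes as HTTP headers could look roughly like this. The URL, host, and header names here are assumptions; they must match whatever the "Retrieve from the HTTP header" filters are configured to read:

        GET /museum/manage HTTP/1.1
        Host: oeg.example.com:8080
        Experience: 25
        Painting: Mona Lisa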

    Read the article

< Previous Page | 137 138 139 140 141 142 143 144 145 146 147 148  | Next Page >