Search Results

Search found 25005 results on 1001 pages for 'sequential number'.


  • Project Euler #119 Make Faster

    - by gangqinlaohu
    Trying to solve Project Euler problem 119: The number 512 is interesting because it is equal to the sum of its digits raised to some power: 5 + 1 + 2 = 8, and 8^3 = 512. Another example of a number with this property is 614656 = 28^4. We shall define a_n to be the nth term of this sequence and insist that a number must contain at least two digits to have a sum. You are given that a_2 = 512 and a_10 = 614656. Find a_30. Question: Is there a more efficient way to find the answer than just checking every number until a_30 is found? My code:

        int currentNum = 0;
        long value = 0;
        for (long a = 11; currentNum != 30; a++) { // maybe a++ is inefficient
            int test = Util.sumDigits(a);
            if (isPower(a, test)) {
                currentNum++;
                value = a;
                System.out.println(value + ":" + currentNum);
            }
        }
        System.out.println(value);

    isPower checks if a is a power of test. Util.sumDigits:

        public static int sumDigits(long n) {
            int sum = 0;
            String s = "" + n;
            while (!s.equals("")) {
                sum += Integer.parseInt("" + s.charAt(0));
                s = s.substring(1);
            }
            return sum;
        }

    The program has been running for about 30 minutes (there might be overflow on the long). Output (so far): 81:1 512:2 2401:3 4913:4 5832:5 17576:6 19683:7 234256:8 390625:9 614656:10 1679616:11 17210368:12 34012224:13 52521875:14 60466176:15 205962976:16 612220032:17
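
    One common way to make this dramatically faster is to invert the search: instead of testing every integer, enumerate the powers base^exp for small bases and keep only those whose digit sum equals the base. A minimal sketch of that idea (not the poster's code; the class name and helper are made up for illustration):

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;

        public class DigitPowerSums {
            // Sum of the decimal digits of n, using arithmetic instead of strings.
            static int digitSum(long n) {
                int sum = 0;
                while (n > 0) { sum += (int) (n % 10); n /= 10; }
                return sum;
            }

            public static void main(String[] args) {
                List<Long> hits = new ArrayList<>();
                // Digit sums of 64-bit values stay well under 200, so 200 bases is plenty.
                for (int base = 2; base <= 200; base++) {
                    long value = (long) base * base;          // start at base^2
                    while (true) {
                        if (value >= 10 && digitSum(value) == base) {
                            hits.add(value);
                        }
                        if (value > Long.MAX_VALUE / base) {  // next power would overflow
                            break;
                        }
                        value *= base;
                    }
                }
                Collections.sort(hits);                       // powers are not generated in order
                System.out.println(hits.get(29));             // a_30
            }
        }

    Because only a few thousand powers are ever examined, this finishes in milliseconds rather than hours.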

    Read the article

  • C++ - my loop keeps on adding up to 0

    - by user1756913
    So far here's my code:

        #include <iostream>
        using namespace std;
        int main () {
            int num1 = 0;
            int num2 = 0;
            int sum = 0;
            for(num2 = num1; num1 <= num2; num1 +=2)
                sum += num1;
            num1 = num1 / 2 == 0? num1 : num1 + 1;
            num2 = num2 / 2 == 0? num2 : num2 - 1;
            cout << "Enter the First Number:" << endl;
            cin >> num1;
            cout << "Enter the Second Number:" << endl;
            cin >> num2;
            cout << "Total Sum: " << sum << endl;
        } //end for

    But the sum keeps on adding up to 0 :/ Here's the problem: Create a program that displays the sum of the even numbers between and including two numbers entered by the user. In other words, if the user enters an even number, that number should be included in the sum. For example, if the user enters the integers 2 and 7, the sum is 12 (2 + 4 + 6). If the user enters the integers 2 and 8, the sum is 20 (2 + 4 + 6 + 8). Display an error message if the first integer entered by the user is greater than the second integer.
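
    The loop above runs before num1 and num2 are ever read, so it only sums the initial zeros. A minimal corrected sketch of the described program, reading the input first and snapping the bounds to even numbers:

        #include <iostream>
        using namespace std;

        int main() {
            int num1 = 0;
            int num2 = 0;
            int sum = 0;

            cout << "Enter the First Number:" << endl;
            cin >> num1;
            cout << "Enter the Second Number:" << endl;
            cin >> num2;

            if (num1 > num2) {
                cout << "Error: the first number must not be greater than the second." << endl;
                return 1;
            }

            // Start at the first even number in the range, then step by 2.
            if (num1 % 2 != 0) {
                num1++;
            }
            for (int n = num1; n <= num2; n += 2) {
                sum += n;
            }

            cout << "Total Sum: " << sum << endl;
            return 0;
        }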

    Read the article

  • Practice of checking 'trueness' or 'equality' of conditional statements - does it really make sense?

    - by senthilkumar1033
    I remember many years back, when I was in school, one of my computer science teachers taught us that it was better to check for 'trueness' or 'equality' of a condition and not the negative stuff like 'inequality'. Let me elaborate - If a piece of conditional code can be written by checking whether an expression is true or false, we should check the 'trueness'. Example: Finding out whether a number is odd - it can be done in two ways: if ( num % 2 != 0 ) { // Number is odd } or if ( num % 2 == 1 ) { // Number is odd } When I was beginning to code, I knew that num % 2 == 0 implies the number is even, so I just put a ! there to check if it is odd. But he was like 'Don't check NOT conditions. Have the practice of checking the 'trueness' or 'equality' of conditions whenever possible.' And he recommended that I use the second piece of code. I am not for or against either but I just wanted to know - what difference does it make? Please don't reply 'Technically the output will be the same' - we ALL know that. Is it a general programming practice or is it his own programming practice that he is preaching to others?

    Read the article

  • Weird flex date issue

    - by CodeMonkey
    Flex is driving me CRAZY and I think it's some weird gotcha with how it handles leap years and non-leap years. So here's my example. I have the below dateDiff method that finds the number of days or milliseconds between two dates. If I run the following statements I get some weird issues.

        dateDiff("date", new Date(2010, 0, 1), new Date(2010, 0, 31));
        dateDiff("date", new Date(2010, 1, 1), new Date(2010, 1, 28));
        dateDiff("date", new Date(2010, 2, 1), new Date(2010, 2, 31));
        dateDiff("date", new Date(2010, 3, 1), new Date(2010, 3, 30));

    If you were to look at the date comparisons above you would expect to get 30, 27, 30, 29 as the number of days between the dates. The weird part is that I get 29 when comparing March 1 to March 31. Why is that? Is it something to do with February only having 28 days? If anyone has ANY input on this that would be greatly appreciated.

        public static function dateDiff( datePart:String, startDate:Date, endDate:Date ):Number {
            var _returnValue:Number = 0;
            switch (datePart) {
                case "milliseconds":
                    _returnValue = endDate.time - startDate.time;
                    break;
                case "date":
                    // TODO: Need to figure out DST problem i.e. 23 hours at DST start, 25 at end.
                    // Math.floor causes rounding down error with DST start at dayOfYear
                    _returnValue = Math.floor(dateDiff("milliseconds", startDate, endDate)/(1000 * 60 * 60 * 24));
                    break;
            }
            return _returnValue;
        }
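
    The 29 is almost certainly daylight saving time, not leap years: March 2010 contains the spring-forward date, so March 1 to March 31 is 30 days minus one hour, and Math.floor truncates 29.96 down to 29 (the code's own TODO comment hints at this). One low-impact fix, assuming whole local calendar days are what is wanted, is to round instead of floor; a sketch, not tested against the poster's full class:

        public static function daysBetween(startDate:Date, endDate:Date):Number {
            var msPerDay:Number = 1000 * 60 * 60 * 24;
            // Math.round absorbs the 23- or 25-hour days created by DST transitions,
            // where Math.floor drops a one-hour-short span to the previous whole day.
            return Math.round((endDate.time - startDate.time) / msPerDay);
        }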

    Read the article

  • Finding the Largest and Smallest Integers In A Set- Basic

    - by Ka112324
    I'm kind of on the right track, however my output is not quite right. The program asks for the number of integers you have and then it asks for those numbers. For example, it says "please enter the number of integers" and you can put 3, and then you enter 3 numbers. I can't use arrays because I am a beginner student and we have not learned those yet. Using count is the only way that allows me to input integers. What do I need to add to my program? Again, I am a general computer science student so I can't use anything advanced. I used include iostream, the namespace, int main and all that; you just can't see it here.

        int data;
        int num;
        int count=0;
        int max=0;
        do {
            cout<<"Enter the number of intergers"<<endl;
            cin>>num;
            while (count<num) {
                cout<<"Please enter a number"<<endl;
                cin>>data;
                count++;
                if (data<min) {
                    min=data;
                }
                if (data>max) {
                    max=data;
                }
            }
            cout<<"Smallest integer:"<<min<<endl;
            cout<<"Largest integer:"<<max<<endl;
            cout<<"Would you like to continue?"<<endl;
            cin>>ans;
        } while ((ans=='y')||(ans=='Y'));
        return 0;
        }
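
    Two things stand out in the code above: min (and ans) are never declared or initialized, and count is never reset, so a second pass through the do-while skips the inner loop entirely. A beginner-level sketch without arrays, seeding min and max from the first value entered, for comparison:

        #include <iostream>
        using namespace std;

        int main() {
            char ans = 'n';
            do {
                int num = 0;
                cout << "Enter the number of integers" << endl;
                cin >> num;

                int data = 0;
                int min = 0;
                int max = 0;
                for (int count = 0; count < num; count++) {   // count restarts every round
                    cout << "Please enter a number" << endl;
                    cin >> data;
                    if (count == 0) {
                        min = data;   // the first value seeds both min and max,
                        max = data;   // so all-negative input still works
                    } else {
                        if (data < min) min = data;
                        if (data > max) max = data;
                    }
                }

                cout << "Smallest integer: " << min << endl;
                cout << "Largest integer: " << max << endl;
                cout << "Would you like to continue?" << endl;
                cin >> ans;
            } while (ans == 'y' || ans == 'Y');
            return 0;
        }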

    Read the article

  • Adding captions above images in WordPress

    - by jacob
    So I added some code in my CSS and there are boxes that appear over every image that is attached to a post. I want to number the images and show the image number in the box (1...n). I have this in my functions.php:

        function count_images(){
            global $post;
            $thePostID = $post->ID;
            $parameters = array(
                'post_type' => 'attachment',
                'post_parent' => $thePostID,
                'post_mime_type' => 'image');
            $attachments = get_children($parameters);
            $content = count($attachments);
            return $content;
        }
        add_filter('the_content','count_images');

        function caption_image_callback($matches) {
            $c = count_images();
            for ($i=1; $i <= $c; $i++) {
                if (is_single()) {
                    return ''.$i.' ';
                } else {
                    return '';
                }
            }
        }

        function caption_image($post_body_content) {
            $post_body_content = preg_replace_callback("||","caption_image_callback",$post_body_content);
            return $post_body_content;
        }

        if ( current_user_can('edit_plugins') ) {
            add_filter('the_content', 'caption_image');
        }

    If I run only count_images it will show the correct number of attached images to a post (let's say 15). But for some reason the number that is shown in the boxes over the images is always 1. I've seen this done on several blogs with just PHP so there has to be a way (even if I have to change my whole code). PS: I had to leave spaces in some places
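
    The reason every box shows 1 is that caption_image_callback returns on the first pass through its for loop, so every match gets the same value. preg_replace_callback already calls the callback once per matched image, so a static counter is enough. A sketch under the assumption that the stripped pattern matched <img> tags; the span markup and class name below are placeholders:

        function caption_image_callback($matches) {
            static $i = 0;                 // persists across matches within the same request
            $i++;
            if (is_single()) {
                // $matches[0] is the matched <img> tag; append the running number after it.
                return $matches[0] . '<span class="image-number">' . $i . '</span>';
            }
            return $matches[0];
        }

        function caption_image($post_body_content) {
            // Assumed pattern - the original regex was stripped by the forum software.
            return preg_replace_callback('/<img[^>]+>/i', 'caption_image_callback', $post_body_content);
        }
        add_filter('the_content', 'caption_image');

    Note that the static counter is not reset between posts, so on archive pages you would want to reset it (for example from a loop_start or the_post hook).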

    Read the article

  • Performing a MYSQL query based off of $_GET results

    - by Michael N
    When a user clicks an item on my items page, it takes them to a blank page template, using $_GET to pass the item brand and model through. I'd like to perform another MySQL query when that user clicks through to populate the blank page with the product details from my database. I'd like to retrieve the single row using the model number (unique ID) to populate the page with the information. I've tried a couple of things but am having a little difficulty. On my blank item page, I have:

        $brand = $_GET['Brand'];
        $modelnumber = $_GET['ModelNumber'];
        $query = mysql_query("SELECT * FROM items WHERE `Model Number` = '$modelnumber'");
        $results = mysql_fetch_row($query);
        echo $results;

    I think having the backticks around Model Number is causing troubles, but without them I get a "Warning: mysql_fetch_row() expects parameter 1 to be resource, boolean given" error. My database columns look like:

        Brand | Model Number | Price | Description | Image

    A few other things I have tried include:

        $query = mysql_query("SELECT * FROM item WHERE Model Number = $_GET['ModelNumber']");

    which gave me a syntax error. I've also tried concatenating the $_GET, which gives me a "mysql_fetch_row() expects parameter 1 to be resource, boolean given" error, which leads me to believe that I'm also going about displaying the results incorrectly. I'm not sure if I need to put it in a while loop like I have with my previous page which displays all items in the database, because this is just displaying one.
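
    The backticks are actually required, because the column name contains a space; the "boolean given" warning means the query itself failed. Since the value comes straight from $_GET it also needs to be escaped, or better, bound. A sketch using mysqli with a prepared statement (connection details are placeholders, and no loop is needed for a single row):

        $modelnumber = $_GET['ModelNumber'];

        $db = new mysqli('localhost', 'db_user', 'db_password', 'db_name');   // placeholders
        $stmt = $db->prepare(
            'SELECT Brand, `Model Number`, Price, Description, Image
             FROM items WHERE `Model Number` = ?');
        $stmt->bind_param('s', $modelnumber);
        $stmt->execute();
        $stmt->bind_result($brand, $model, $price, $description, $image);

        if ($stmt->fetch()) {
            echo $brand . ' ' . $model . ' - ' . $price;
        } else {
            echo 'No item found for that model number.';
        }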

    Read the article

  • Need Multiple Sudoku Solutions

    - by user1567909
    I'm trying to output multiple sudoku solutions in my program. For example, when you enter this as input:

        8..6..9.5.............2.31...7318.6.24.....73...........279.1..5...8..36..3......

    the .'s denote blank spaces and numbers represent already-filled spaces. The output should be a sudoku solution like so:

        814637925325149687796825314957318462241956873638274591462793158579481236183562749

    However, I want to output multiple solutions. These are all the solutions that should be printed:

        814637925325149687796825314957318462241956873638274591462793158579481236183562749
        814637925325941687796825314957318462241569873638472591462793158579184236183256749
        834671925125839647796425318957318462241956873368247591682793154579184236413562789
        834671925125839647796524318957318462241956873368247591682793154519482736473165289
        834671925125839647796524318957318462241965873368247591682793154519482736473156289

    But my program only prints out one solution. Below is my recursive solution for solving a sudoku puzzle:

        bool sodoku::testTheNumber(sodoku *arr[9][9], int row, int column)
        {
            if(column == 9)
            {
                column = 0;
                row++;
                if(row == 9)
                    return true;
            }
            if(arr[row][column]->number != 0)
            {
                return testTheNumber(arr, row, column+1);
            }
            for(int k = 1; k < 10; k++)
            {
                if(k == 10)
                {
                    arr[row][column]->number = 0;
                    return false;
                }
                if(rowIsValid(arr, k, row) && columnIsValid(arr, k, column) && boxIsValid(arr, k, row, column))
                {
                    arr[row][column]->number = k;
                    if(testTheNumber(arr, row, column+1)==true)
                    {
                        return true;
                    }
                    arr[row][column]->number = 0;
                }
            }
            return false;
        }

    Could anyone help me come up with a way to print out multiple solutions? Thanks.
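
    The solver stops because it returns true as soon as the first complete board is reached, and every caller propagates that true up the stack. The usual change is to treat a full board as "print it, then pretend it failed" so the backtracking keeps exploring. A sketch of that variant of the routine above (printBoard is a hypothetical helper that prints the 81 digits):

        bool sodoku::findAllSolutions(sodoku *arr[9][9], int row, int column)
        {
            if (column == 9) { column = 0; row++; }
            if (row == 9) {
                printBoard(arr);      // a complete solution: print it...
                return false;         // ...then keep searching instead of unwinding
            }
            if (arr[row][column]->number != 0) {
                return findAllSolutions(arr, row, column + 1);
            }
            for (int k = 1; k <= 9; k++) {
                if (rowIsValid(arr, k, row) && columnIsValid(arr, k, column) && boxIsValid(arr, k, row, column)) {
                    arr[row][column]->number = k;
                    findAllSolutions(arr, row, column + 1);   // ignore the result on purpose
                    arr[row][column]->number = 0;             // undo and try the next digit
                }
            }
            return false;
        }

    The return value no longer matters; a solution counter could be added if the number of solutions is needed.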

    Read the article

  • definition of wait-free (referring to parallel programming)

    - by tecuhtli
    In Maurice Herlihy's paper "Wait-free synchronization" he defines wait-free: "A wait-free implementation of a concurrent data object is one that guarantees that any process can complete any operation in a finite number of steps, regardless of the execution speeds of the other processes." www.cs.brown.edu/~mph/Herlihy91/p124-herlihy.pdf Let's take one operation op from the universe. (1) Does the definition mean: "Every process completes a certain operation op in the same finite number n of steps."? (2) Or does it mean: "Every process completes a certain operation op in any finite number of steps, so that one process can complete op in k steps and another process in j steps, where k != j."? Just by reading the definition I would understand meaning (2). However this makes no sense to me, since a process executing op in k steps and another time in k + m steps meets the definition, but the m steps could be a waiting loop. If meaning (2) is right, can anybody explain to me why this describes wait-free? In contrast to (2), meaning (1) would guarantee that op is executed in the same number of steps k, so there can't be any additional steps m that are necessary e.g. in a waiting loop. Which meaning is right and why? Thanks a lot, sebastian

    Read the article

  • Using jQuery and SPServices to Display List Items

    - by Bil Simser
    I had an interesting challenge recently that I turned to Marc Anderson’s wonderful SPServices project for. If you haven’t already seen or used SPServices, please do. It’s a jQuery library that does primarily two things. First, it wraps up all of the SharePoint web services in a nice little AJAX wrapper for use in JavaScript. Second, it enhances the form editing of items in SharePoint so you’re not hacking up your List Form pages. My challenge was simple but interesting. The user wanted to display a SharePoint item page (DispForm.aspx, which already had some customization on it to display related items via this blog post from Codeless Solutions for SharePoint) but launch from an external application using the value of one of the fields in the SharePoint list. For simplicity let’s say my list is a list of customers and the related list is a list of orders for that customer. It would look something like this (click on the item to see the full image): Your first thought might be, that’s easy! Display the customer information using a DataView Web Part and filter the item using a query string to match the customer number. However there are a few problems with this idea: You’ll need to build a custom page and then attach that related orders view to it. This is a bit of a problem because the solution from Codeless Solutions relies on the Title field on the page to be displayed. On a custom page you would have to recreate all of the elements found on the DispForm.aspx page so the related view would work. The DataView Web Part doesn’t look *exactly* like what the out of the box display form page does. Not a huge problem and can be overcome with some CSS style overrides but still, more work. A DVWP showing a single record doesn’t have the same toolbar that you would using the DispForm.aspx. Not a show-stopper and you can rebuild the toolbar but it’s going to potentially require code and then there’s the security trimming, etc. that you have to get right. DVWPs are not automatically updated if you add a column to the list like DispForm.aspx is. Work, work, work. For these reasons I thought it would be easier to take the already existing (modified) DispForm.aspx page and just add some jQuery magic to the page to find the item. Why do we need to find it? DispForm.aspx relies on a querystring parameter called “ID” which then displays whatever that item ID number is in the list. Trouble is, when you’re coming in from an external app via a link, you don’t know what that internal ID is (and frankly shouldn’t). I don’t like exposing internal SharePoint IDs to the outside world for the same reason I don’t do it with database IDs. They’re internal and while it’s find to use on the site itself you don’t want external links using it. It’s volatile and can change (delete one item then re-add it back with the same data and watch any ID references break). The next thought might be to call a SharePoint web service with a CAML query to get the item ID number using some criteria (in this case, the customer number). That’s great if you have that ability but again we had an existing application we were just adding a link to. The last thing I wanted to do was to crack open the code on that sucker and start calling web services (primarily because it’s Java, but really I’m a lazy geek). However if you’re doing this and have access to call a web service that would be an option. 
Back to this problem, how do I a) find a SharePoint List Item based on some field value other than ID and b) make it low impact so I can just construct a URL to it? That’s where jQuery and SPServices came to the rescue. After spending a few hours of emails back and forth with Marc and a couple of phone calls (and updating jQuery to the latest version, duh!) it was a simple answer. First we need a reference to a) jQuery b) SPServices and c) our script. I just dropped a Content Editor Web Part, the Swiss Army Knives of Web Parts, onto the DispForm.aspx page and added these lines: <script type="text/javascript" src="http://intranet/JavaScript/jquery-1.4.2.min.js"></script> <script type="text/javascript" src="http://intranet/JavaScript/jquery.SPServices-0.5.3.min.js"></script> <script type="text/javascript" src="http://intranet/JavaScript/RedirectToID.js"> </script> Update it to point to where you keep your scripts located. I prefer to keep them all in Document Libraries as I can make changes to them without having to remote into the server (and on a multiple web front end, that’s just a PITA), it provides me with version control of sorts, and it’s quick to add new plugins and scripts. Now we can look at our RedirectToID.js script. This invokes the SPServices Library to call the GetListItems method of the Lists web service and then rewrites the URL to DispForm.aspx to use the correct SharePoint ID (the internal one). $(document).ready(function(){ var queryStringValues = $().SPServices.SPGetQueryString(); var id = queryStringValues["ID"]; if(id == "0") { var customer = queryStringValues["CustomerNumber"]; var query = "<Query><Where><Eq><FieldRef Name='CustomerNumber'/><Value Type='Text'>" + customer + "</Value></Eq></Where></Query>"; var url = window.location; $().SPServices({ operation: "GetListItems", listName: "Customers", async: false, CAMLQuery: query, completefunc: function (xData, Status) { $(xData.responseXML).find("[nodeName=z:row]").each(function(){ id = $(this).attr("ows_ID"); url = $().SPServices.SPGetCurrentSite() + "/Lists/Customers/DispForm.aspx?ID=" + id; window.location = url; }); } }); } }); What’s happening here? Line 3: We call SPServices.SPGetQueryString to get an array of query string values (a handy function in the library as I had 15 lines of code to do this which is now gone). Line 4: Extract the ID value from the query string Line 6: If we pass in “0” it means we’re looking up a field value. This allows DispForm.aspx to work like normal with SharePoint lists but lookup our values when invoked. Why ID at all? DispForm.aspx doesn’t work unless you pass in something and “0” is a *magic* number that will invoke the page but not lookup a value in the database. Line 8-15: Extract the CustomerNumber query string value, build a CAML query to find it then call the GetListitems method using SPServices Line 16: Process the results in our completefunc to iterate over all the rows (there should only be one) and extract the real ID of the item Line 17-20: Build a new URL based on the site (using a call to SPGetCurrentSite) and append our real ID to redirect to the DispForm.aspx page As you can see, it dynamically creates a CAML query for the call to the web service using the passed in value. You could even make this generic to take in different query strings, one for the field name to search for and the other for the value to find. That way it could be used for any field you want. 
For example you could bring up the correct item on the DispForm.aspx page based on customer name with something like this: http://myserver/Lists/Customers/DispForm.aspx?ID=0&FilterId=CustomerName&FilterValue=Sony Use your imagination. Some people would opt for building a custom page with a DVWP but if you want to leverage all the functionality of DispForm.aspx this might come in handy if you don’t want to rely on internal SharePoint IDs.

    Read the article

  • JMX Based Monitoring - Part Four - Business App Server Monitoring

    - by Anthony Shorten
    In the last blog entry I talked about the Oracle Utilities Application Framework V4 feature for monitoring and managing aspects of the Web Application Server using JMX. In this blog entry I am going to discuss a similar new feature that allows JMX to be used for management and monitoring the Oracle Utilities business application server component. This feature is primarily focussed on performance tracking of the product. In first release of Oracle Utilities Customer Care And Billing (V1.x I am talking about), we used to use Oracle Tuxedo as part of the architecture. In Oracle Utilities Application Framework V2.0 and above, we removed Tuxedo from the architecture. One of the features that some customers used within Tuxedo was the performance tracking ability. The idea was that you enabled performance logging on the individual Tuxedo servers and then used a utility named txrpt to produce a performance report. This report would list every service called, the number of times it was called and the average response time. When I worked a performance consultant, I used this report to identify badly performing services and also gauge the overall performance characteristics of a site. When Tuxedo was removed from the architecture this information was also lost. While you can get some information from access.log and some Mbeans supplied by the Web Application Server it was not at the same granularity as txrpt or as useful. I am happy to say we have not only reintroduced this facility in Oracle Utilities Application Framework but it is now accessible via JMX and also we have added more detail into the performance tracking. Most of this new design was working with customers around the world to make sure we introduced a new feature that not only satisfied their performance tracking needs but allowed for finer grained performance analysis. As with the Web Application Server, the Business Application Server JMX monitoring is enabled by specifying a JMX port number in RMI Port number for JMX Business and initial credentials in the JMX Enablement System User ID and JMX Enablement System Password configuration options. These options are available using the configureEnv[.sh] -a utility. These credentials are shared across the Web Application Server and Business Application Server for authorization purposes. Once this is information is supplied a number of configuration files are built (by the initialSetup[.sh] utility) to configure the facility: spl.properties - contains the JMX URL, the security configuration and the mbeans that are enabled. For example, on my demonstration machine: spl.runtime.management.rmi.port=6750 spl.runtime.management.connector.url.default=service:jmx:rmi:///jndi/rmi://localhost:6750/oracle/ouaf/ejbAppConnector jmx.remote.x.password.file=scripts/ouaf.jmx.password.file jmx.remote.x.access.file=scripts/ouaf.jmx.access.file ouaf.jmx.com.splwg.ejb.service.management.PerformanceStatistics=enabled ouaf.jmx.* files - contain the userid and password. The default configuration uses the JMX default configuration. You can use additional security features by altering the spl.properties file manually or using a custom template. For more security options see JMX Security for more details. Once it has been configured and the changes reflected in the product using the initialSetup[.sh] utility the JMX facility can be used. For illustrative purposes I will use jconsole but any JSR160 complaint browser or client can be used (with the appropriate configuration). 
Once you start jconsole (ensure that splenviron[.sh] is executed prior to execution to set the environment variables or for remote connection, ensure java is in your path and jconsole.jar in your classpath) you specify the URL in the spl.runtime.management.connnector.url.default entry. For example: You are then able to track performance of the product using the PerformanceStatistics Mbean. The attributes of the PerformanceStatistics Mbean are counts of each object type. This is where this facility differs from txrpt. The information that is collected includes the following: The Service Type is captured so you can filter the results in terms of the type of service. For maintenance type services you can even see the transaction type (ADD, CHANGE etc) so you can see the performance of updates against read transactions. The Minimum and Maximum are also collected to give you an idea of the spread of performance. The last call is recorded. The date, time and user of the last call are recorded to give you an idea of the timeliness of the data. The Mbean maintains a set of counters per Service Type to give you a summary of the types of transactions being executed. This gives you an overall picture of the types of transactions and volumes at your site. There are a number of interesting operations that can also be performed: reset - This resets the statistics back to zero. This is an important operation. For example, txrpt is restricted to collecting statistics per hour, which is ok for most people. But what if you wanted to be more granular? This operation allows to set the collection period to anything you wish. The statistics collected will represent values since the last restart or last reset. completeExecutionDump - This is the operation that produces a CSV in memory to allow extraction of the data. All the statistics are extracted (see the Server Administration Guide for a full list). This can be then loaded into a database, a tool or simply into your favourite spreadsheet for analysis. Here is an extract of an execution dump from my demonstration environment to give you an idea of the format: ServiceName, ServiceType, MinTime, MaxTime, Avg Time, # of Calls, Latest Time, Latest Date, Latest User ... CFLZLOUL, EXECUTE_LIST, 15.0, 64.0, 22.2, 10, 16.0, 2009-12-16::11-25-36-932, ASHORTEN CILBBLLP, READ, 106.0, 1184.0, 466.3333333333333, 6, 106.0, 2009-12-16::11-39-01-645, BOBAMA CILBBLLP, DELETE, 70.0, 146.0, 108.0, 2, 70.0, 2009-12-15::12-53-58-280, BPAYS CILBBLLP, ADD, 860.0, 4903.0, 2243.5, 8, 860.0, 2009-12-16::17-54-23-862, LELLISON CILBBLLP, CHANGE, 112.0, 3410.0, 815.1666666666666, 12, 112.0, 2009-12-16::11-40-01-103, ASHORTEN CILBCBAL, EXECUTE_LIST, 8.0, 84.0, 26.0, 22, 23.0, 2009-12-16::17-54-01-643, LJACKMAN InitializeUserInfoService, READ_SYSTEM, 49.0, 962.0, 70.83777777777777, 450, 63.0, 2010-02-25::11-21-21-667, ASHORTEN InitializeUserService, READ_SYSTEM, 130.0, 2835.0, 234.85777777777778, 450, 216.0, 2010-02-25::11-21-21-446, ASHORTEN MenuLoginService, READ_SYSTEM, 530.0, 1186.0, 703.3333333333334, 9, 530.0, 2009-12-16::16-39-31-172, ASHORTEN NavigationOptionDescriptionService, READ_SYSTEM, 2.0, 7.0, 4.0, 8, 2.0, 2009-12-21::09-46-46-892, ASHORTEN ... There are other operations and attributes available. Refer to the Server Administration Guide provided with your product to understand the full et of operations and attributes. This is one of the many features I am proud that we implemented as it allows flexible monitoring of the performance of the product.

    Read the article

  • Fixing a SkyDrive Sync Disaster

    - by Rick Strahl
    For a few months I've been using SkyDrive to handle some basic synching tasks for a number of folders of mine. Specifically I've been dumping a few of my development folders into sky drive so I have a live running backup. It had been working just fine until about a week ago when something went awry. Badly! The idea is that the SkyDrive should sync files, but somewhere in its sync relationship it appears that SkyDrive got confused and assumed it needed to sync back older files to my local machine from the SkyDrive server. So rather than syncing my newer files to the server SkyDrive was pushing older files back to me. Because SkyDrive is so slow actually updating data it's not unusual for SkyDrive to be far behind in syncing and apparently some files were out of date by several months. Of course this is insidious because I didn't notice it for quite some time. I'd been happily working away on my files when a few days ago I noted a bunch of files with -RasXps (my machine name) popping up in various folders. At first I thought my Git repository was giving me a fit, but eventually realized that SkyDrive was actually pushing old files into my monitored folders. To be fair SkyDrive did make backups of the existing files, but by the time I caught it there were literally a few thousand files scattered on my machine that were now updated with old files from online. Here's what some of this looks like: If you look at the directory list you see a bunch of files with a -RasXps postfix appended to them. Those are the files that SkyDrive replaced and backed up on my machine. As you can see the backed up files are actually newer than the ones it pulled from the online SkyDrive. Unless I modified the files after they were updated they all were older than the existing local files. Not exactly how I imagined my synching would work. At first I started cleaning up this mess manually. In most cases the obvious solution was to simply delete the original file and replace with the -RasXps file, but not in all files. Some scrutiny was required and besides being a pain in the ass to rename files, quite frequently I had to dig out Beyond Compare to compare a few files where it wasn't quite clear what's wrong. I quickly realized that doing this by hand would be too hard for the large number of files that got hosed. Hacking together a small .NET Utility So, I figured the easiest way to tackle this is to write a small utility app that shows me all the mangled files that have backups, allows me to compare them and then quickly select and update them, removing the -RasXps file after choosing one of the two files. What I ended up with was a quick and dirty WinForms app that allows me to pick a root folder, and then shows all the -MachineName files: I start by picking a base folder and a template to search for - typically the -MachineName. Clicking Go brings up a list of all files in that folder and its subdirectories.  The list also displays the dates for the saved (-MachineName) file and the current file on disk, along with highlighting for the newer of the two. I can right click on any file and get a context menu pop up to open the folder in Explorer, or open Beyond Compare and view the two files to compare differences which I found very helpful for a number of files where I had modified the files after SkyDrive had updated to an old one. Typically these would be the green files (of which there were thankfully few). 
To 'fix' files I can select any number of files in the list, then use one of the three buttons on the right to apply an operation. I can use the Saved files - that is the backup file that SkyDrive created with the -MachineName extension (-RasXps above). Or I can use the current file, which is the file with the right name on disk right now, and delete the -MachineName file. Or on some occasions I can just opt to delete both of them. For some files like binaries it's often easier to just delete them and rebuild than to choose. For the most part the process involves accepting the pink files, and checking the few green files to see if any modifications were made since the file was updated incorrectly by SkyDrive. For me luckily those are few in number. Anyways, I thought I'd share this utility in case anybody else runs into this issue. I've included the VS2012 solution and all the source code so you can see how it works and you can tweak it as needed. The .NET 4.5 binaries are also included if you can't compile. Be warned though!  This rough code is provided as is and makes no guarantees or claims about file safety. All three of the action buttons on the form will delete data. It's a very rough utility and there are no safeguards that ask nicely before deleting files. I highly recommend you make a backup before you have at it. This tool is very narrow in focus, but it might also work with other sync issues from other vendors. I seem to remember that I had similar issues with SugarSync at some point and it too created the -MachineName style files on sync conflicts. Hope this helps somebody out so you can avoid wasting the better part of a full work day on this… Resources: Download the Source Code and Binaries for SkyDrive Rescue. © Rick Strahl, West Wind Technologies, 2005-2013. Posted in Windows, .NET

    Read the article

  • SPARC T3-1 Record Results Running JD Edwards EnterpriseOne Day in the Life Benchmark with Added Batch Component

    - by Brian
    Using Oracle's SPARC T3-1 server for the application tier and Oracle's SPARC Enterprise M3000 server for the database tier, a world record result was produced running the Oracle's JD Edwards EnterpriseOne applications Day in the Life benchmark run concurrently with a batch workload. The SPARC T3-1 server based result has 25% better performance than the IBM Power 750 POWER7 server even though the IBM result did not include running a batch component. The SPARC T3-1 server based result has 25% better space/performance than the IBM Power 750 POWER7 server as measured by the online component. The SPARC T3-1 server based result is 5x faster than the x86-based IBM x3650 M2 server system when executing the online component of the JD Edwards EnterpriseOne 9.0.1 Day in the Life benchmark. The IBM result did not include a batch component. The SPARC T3-1 server based result has 2.5x better space/performance than the x86-based IBM x3650 M2 server as measured by the online component. The combination of SPARC T3-1 and SPARC Enterprise M3000 servers delivered a Day in the Life benchmark result of 5000 online users with 0.875 seconds of average transaction response time running concurrently with 19 Universal Batch Engine (UBE) processes at 10 UBEs/minute. The solution exercises various JD Edwards EnterpriseOne applications while running Oracle WebLogic Server 11g Release 1 and Oracle Web Tier Utilities 11g HTTP server in Oracle Solaris Containers, together with the Oracle Database 11g Release 2. The SPARC T3-1 server showed that it could handle the additional workload of batch processing while maintaining the same number of online users for the JD Edwards EnterpriseOne Day in the Life benchmark. This was accomplished with minimal loss in response time. JD Edwards EnterpriseOne 9.0.1 takes advantage of the large number of compute threads available in the SPARC T3-1 server at the application tier and achieves excellent response times. The SPARC T3-1 server consolidates the application/web tier of the JD Edwards EnterpriseOne 9.0.1 application using Oracle Solaris Containers. Containers provide flexibility, easier maintenance and better CPU utilization of the server leaving processing capacity for additional growth. A number of Oracle advanced technology and features were used to obtain this result: Oracle Solaris 10, Oracle Solaris Containers, Oracle Java Hotspot Server VM, Oracle WebLogic Server 11g Release 1, Oracle Web Tier Utilities 11g, Oracle Database 11g Release 2, the SPARC T3 and SPARC64 VII+ based servers. This is the first published result running both online and batch workload concurrently on the JD Enterprise Application server. No published results are available from IBM running the online component together with a batch workload. The 9.0.1 version of the benchmark saw some minor performance improvements relative to 9.0. When comparing between 9.0.1 and 9.0 results, the reader should take this into account when the difference between results is small. Performance Landscape JD Edwards EnterpriseOne Day in the Life Benchmark Online with Batch Workload This is the first publication on the Day in the Life benchmark run concurrently with batch jobs. The batch workload was provided by Oracle's Universal Batch Engine. 
System RackUnits Online Users Resp Time (sec) BatchConcur(# of UBEs) BatchRate(UBEs/m) Version SPARC T3-1, 1xSPARC T3 (1.65 GHz), Solaris 10 M3000, 1xSPARC64 VII+ (2.86 GHz), Solaris 10 4 5000 0.88 19 10 9.0.1 Resp Time (sec) — Response time of online jobs reported in seconds Batch Concur (# of UBEs) — Batch concurrency presented in the number of UBEs Batch Rate (UBEs/m) — Batch transaction rate in UBEs/minute. JD Edwards EnterpriseOne Day in the Life Benchmark Online Workload Only These results are for the Day in the Life benchmark. They are run without any batch workload. System RackUnits Online Users ResponseTime (sec) Version SPARC T3-1, 1xSPARC T3 (1.65 GHz), Solaris 10 M3000, 1xSPARC64 VII (2.75 GHz), Solaris 10 4 5000 0.52 9.0.1 IBM Power 750, 1xPOWER7 (3.55 GHz), IBM i7.1 4 4000 0.61 9.0 IBM x3650M2, 2xIntel X5570 (2.93 GHz), OVM 2 1000 0.29 9.0 IBM result from http://www-03.ibm.com/systems/i/advantages/oracle/, IBM used WebSphere Configuration Summary Hardware Configuration: 1 x SPARC T3-1 server 1 x 1.65 GHz SPARC T3 128 GB memory 16 x 300 GB 10000 RPM SAS 1 x Sun Flash Accelerator F20 PCIe Card, 92 GB 1 x 10 GbE NIC 1 x SPARC Enterprise M3000 server 1 x 2.86 SPARC64 VII+ 64 GB memory 1 x 10 GbE NIC 2 x StorageTek 2540 + 2501 Software Configuration: JD Edwards EnterpriseOne 9.0.1 with Tools 8.98.3.3 Oracle Database 11g Release 2 Oracle 11g WebLogic server 11g Release 1 version 10.3.2 Oracle Web Tier Utilities 11g Oracle Solaris 10 9/10 Mercury LoadRunner 9.10 with Oracle Day in the Life kit for JD Edwards EnterpriseOne 9.0.1 Oracle’s Universal Batch Engine - Short UBEs and Long UBEs Benchmark Description JD Edwards EnterpriseOne is an integrated applications suite of Enterprise Resource Planning (ERP) software. Oracle offers 70 JD Edwards EnterpriseOne application modules to support a diverse set of business operations. Oracle's Day in the Life (DIL) kit is a suite of scripts that exercises most common transactions of JD Edwards EnterpriseOne applications, including business processes such as payroll, sales order, purchase order, work order, and other manufacturing processes, such as ship confirmation. These are labeled by industry acronyms such as SCM, CRM, HCM, SRM and FMS. The kit's scripts execute transactions typical of a mid-sized manufacturing company. The workload consists of online transactions and the UBE workload of 15 short and 4 long UBEs. LoadRunner runs the DIL workload, collects the user’s transactions response times and reports the key metric of Combined Weighted Average Transaction Response time. The UBE processes workload runs from the JD Enterprise Application server. Oracle's UBE processes come as three flavors: Short UBEs < 1 minute engage in Business Report and Summary Analysis, Mid UBEs > 1 minute create a large report of Account, Balance, and Full Address, Long UBEs > 2 minutes simulate Payroll, Sales Order, night only jobs. The UBE workload generates large numbers of PDF files reports and log files. The UBE Queues are categorized as the QBATCHD, a single threaded queue for large UBEs, and the QPROCESS queue for short UBEs run concurrently. One of the Oracle Solaris Containers ran 4 Long UBEs, while another Container ran 15 short UBEs concurrently. The mixed size UBEs ran concurrently from the SPARC T3-1 server with the 5000 online users driven by the LoadRunner. Oracle’s UBE process performance metric is Number of Maximum Concurrent UBE processes at transaction rate, UBEs/minute. 
Key Points and Best Practices Two JD Edwards EnterpriseOne Application Servers and two Oracle Fusion Middleware WebLogic Servers 11g R1 coupled with two Oracle Fusion Middleware 11g Web Tier HTTP Server instances on the SPARC T3-1 server were hosted in four separate Oracle Solaris Containers to demonstrate consolidation of multiple application and web servers. See Also SPARC T3-1 oracle.com SPARC Enterprise M3000 oracle.com Oracle Solaris oracle.com JD Edwards EnterpriseOne oracle.com Oracle Database 11g Release 2 Enterprise Edition oracle.com Disclosure Statement Copyright 2011, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 6/27/2011.

    Read the article

  • Building KPIs to monitor your business - It's not really about the Technology

    When I have discussions with people about Business Intelligence, one of the questions the inevitably come up is about building KPIs and how to accomplish that. From a technical level the concept of a KPI is very simple, almost too simple in that it is like the tip of an iceberg floating above the water. The key to that iceberg is not really the tip, but the mass of the iceberg that is hidden beneath the surface upon which the tip sits. The analogy of the iceberg is not meant to indicate that the foundation of the KPI is overly difficult or complex. The disparity in size in meant to indicate that the larger thing that needs to be defined is not the technical tip, but the underlying business definition of what the KPI means. From a technical perspective the KPI consists of primarily the following items: Actual Value This is the actual value data point that is being measured. An example would be something like the amount of sales. Target Value This is the target goal for the KPI. This is a number that can be measured against Actual Value. An example would be $10,000 in monthly sales. Target Indicator Range This is the definition of ranges that define what type of indicator the user will see comparing the Actual Value to the Target Value. Most often this is defined by stoplight, but can be any indicator that is going to show a status in a quick fashion to the user. Typically this would be something like: Red Light = Actual Value more than 5% below target; Yellow Light = Within 5% of target either direction; Green Light = More than 5% higher than Target Value Status\Trend Indicator This is an optional attribute of a KPI that is typically used to show some kind of trend. The vast majority of these indicators are used to show some type of progress against a previous period. As an example, the status indicator might be used to show how the monthly sales compare to last month. With this type of indicator there needs to be not only a definition of what the ranges are for your status indictor, but then also what value the number needs to be compared against. So now we have an idea of what data points a KPI consists of from a technical perspective lets talk a bit about tools. As you can see technically there is not a whole lot to them and the choice of technology is not as important as the definition of the KPIs, which we will get to in a minute. There are many different types of tools in the Microsoft BI stack that you can use to expose your KPI to the business. These include Performance Point, SharePoint, Excel, and SQL Reporting Services. There are pluses and minuses to each technology and the right technology is based a lot on your goals and how you want to deliver the information to the users. Additionally, there are other non-Microsoft tools that can be used to expose KPI indicators to your business users. Regardless of the technology used as your front end, the heavy lifting of KPI is in the business definition of the values and benchmarks for that KPI. The discussion about KPIs is very dependent on the history of an organization and how much they are exposed to the attributes of a KPI. Often times when discussing KPIs with a business contact who has not been exposed to KPIs the discussion tends to also be a session educating the business user about what a KPI is and what goes into the definition of a KPI. The majority of times the business user has an idea of what their actual values are and they have been tracking those numbers for some time, generally in Excel and all manually. 
So they will know the amount of sales last month along with sales two years ago in the same month. Where the conversation tends to get stuck is when you start discussing what the target value should be. The actual value is answering the What and How much questions. When you are talking about the Target values you are asking the question Is this number good or bad. Typically, the user will know whether or not the value is good or bad, but most of the time they are not able to quantify what is good or bad. Their response is usually something like I just know. Because they have been watching the sales quantity for years now, they can tell you that a 5% decrease in sales this month might actually be a good thing, maybe because the salespeople are all waiting until next month when the new versions come out. It can sometimes be very hard to break the business people of this habit. One of the fears generally is that the status indicator is not subjective. Thus, in the scenario above, the business user is going to be fearful that their boss, just looking at a negative red indicator, is going to haul them out to the woodshed for a bad month. But, on the flip side, if all you are displaying is the amount of sales, only a person with knowledge of last month sales and the target amount for this month would have any idea if $10,000 in sales is good or not. Here is where a key point about KPIs needs to be communicated to both the business user and any user who might be viewing the results of that KPI. The KPI is just one tool that is used to report on business performance. The KPI is meant as a quick indicator of one business statistic. It is not meant to tell the entire story. It does not answer the question Why. Its primary purpose is to objectively and quickly expose an area of the business that might warrant more review. There is always going to be the need to do further analysis on any potential negative or neutral KPI. So, hopefully, once you have convinced your business user to come up with some target numbers and ranges for status indicators, you then need to take the next step and help them answer the Why question. The main question here to ask is, Okay, you see the indicator and you need to discover why the number is what is, where do you go?. The answer is usually a combination of sources. A sales manager might have some of the following items at their disposal (Marketing report showing a decrease in the promotional discounts for the month, Pricing Report showing the reduction of prices of older models, an Inventory Report showing the discontinuation of a particular product line, or a memo showing the ending of a large affiliate partnership. The answers to the question Why are never as simple as a single indicator value. Bring able to quickly get to this information is all about designing how a user accesses the KPIs and then also how easily they can get to the additional information they need. This is where a Dashboard mentality can come in handy. For example, the business user can have a dashboard that shows their KPIs, but also has links to some of the common reports that they run regarding Sales Data. The users boss may have the same KPIs on their dashboard, but instead of links to individual reports they are going to have a link to a status report that was created by the user that pulls together all the data about the KPI in a summary format the users boss can review. 
So some of the key things to think about when building or evaluating KPIs for your organization: Technology should not be the driving factor KPIs are of little value without some indicator for whether a value is good, bad or neutral. KPIs only give an answer to the Is this number good\bad? question Make sure the ability to drill into the Why of a KPI is close at hand and relevant to the user who is viewing the KPI. The KPI is a key business tool when defined properly to help monitor business performance across the enterprise in an objective and consistent manner. At times it might feel like the process of defining the business aspects of a KPI can sometimes be arduous, the payoff in the end can far outweigh the costs. Some of the benefits of going through this process are a better understanding of the key metrics for an organization and the measure of those metrics and a consistent snapshot of business performance that can be utilized across the organization. And I think that these are benefits to any organization regardless of the technology or the implementation.Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.

    Read the article

  • Windows Azure Virtual Machine Readiness and Capacity Assessment for SQL Server

    - by SQLOS Team
    Windows Azure Virtual Machine Readiness and Capacity Assessment for Windows Server Machine Running SQL Server With the release of MAP Toolkit 8.0 Beta, we have added a new scenario to assess your Windows Azure Virtual Machine Readiness. The MAP 8.0 Beta performs a comprehensive assessment of Windows Servers running SQL Server to determine you level of readiness to migrate an on-premise physical or virtual machine to Windows Azure Virtual Machines. The MAP Toolkit then offers suggested changes to prepare the machines for migration, such as upgrading the operating system or SQL Server. MAP Toolkit 8.0 Beta is available for download here Your participation and feedback is very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys. Now, let’s walk through the MAP Toolkit task for completing the Windows Azure Virtual Machine assessment and capacity planning. The tasks include the following: Perform an inventory View the Windows Azure VM Readiness results and report Collect performance data for determine VM sizing View the Windows Azure Capacity results and report Perform an inventory: 1. To perform an inventory against a single machine or across a complete environment, choose Perform an Inventory to launch the Inventory and Assessment Wizard as shown below: 2. After the Inventory and Assessment Wizard launches, select either the Windows computers or SQL Server scenario to inventory Windows machines. HINT: If you don’t care about completely inventorying a machine, just select the SQL Server scenario. Click Next to Continue. 3. On the Discovery Methods page, select how you want to discover computers and then click Next to continue. Description of Discovery Methods: Use Active Directory Domain Services -- This method allows you to query a domain controller via the Lightweight Directory Access Protocol (LDAP) and select computers in all or specific domains, containers, or OUs. Use this method if all computers and devices are in AD DS. Windows networking protocols --  This method uses the WIN32 LAN Manager application programming interfaces to query the Computer Browser service for computers in workgroups and Windows NT 4.0–based domains. If the computers on the network are not joined to an Active Directory domain, use only the Windows networking protocols option to find computers. System Center Configuration Manager (SCCM) -- This method enables you to inventory computers managed by System Center Configuration Manager (SCCM). You need to provide credentials to the System Center Configuration Manager server in order to inventory the managed computers. When you select this option, the MAP Toolkit will query SCCM for a list of computers and then MAP will connect to these computers. Scan an IP address range -- This method allows you to specify the starting address and ending address of an IP address range. The wizard will then scan all IP addresses in the range and inventory only those computers. Note: This option can perform poorly, if many IP addresses aren’t being used within the range. Manually enter computer names and credentials -- Use this method if you want to inventory a small number of specific computers. Import computer names from a files -- Using this method, you can create a text file with a list of computer names that will be inventoried. 4. 
On the All Computers Credentials page, enter the accounts that have administrator rights to connect to the discovered machines. This does not need to a domain account, but needs to be a local administrator. I have entered my domain account that is an administrator on my local machine. Click Next after one or more accounts have been added. NOTE: The MAP Toolkit primarily uses Windows Management Instrumentation (WMI) to collect hardware, device, and software information from the remote computers. In order for the MAP Toolkit to successfully connect and inventory computers in your environment, you have to configure your machines to inventory through WMI and also allow your firewall to enable remote access through WMI. The MAP Toolkit also requires remote registry access for certain assessments. In addition to enabling WMI, you need accounts with administrative privileges to access desktops and servers in your environment. 5. On the Credentials Order page, select the order in which want the MAP Toolkit to connect to the machine and SQL Server. Generally just accept the defaults and click Next. 6. On the Enter Computers Manually page, click Create to pull up at dialog to enter one or more computer names. 7. On the Summary page confirm your settings and then click Finish. After clicking Finish the inventory process will start, as shown below: Windows Azure Readiness results and report After the inventory progress has completed, you can review the results under the Database scenario. On the tile, you will see the number of Windows Server machine with SQL Server that were analyzed, the number of machines that are ready to move without changes and the number of machines that require further changes. If you click this Azure VM Readiness tile, you will see additional details and can generate the Windows Azure VM Readiness Report. After the report is generated, select View | Saved Reports and Proposals to view the location of the report. Open up WindowsAzureVMReadiness* report in Excel. On the Windows tab, you can see the results of the assessment. This report has a column for the Operating System and SQL Server assessment and provides a recommendation on how to resolve, if there a component is not supported. Collect Performance Data Launch the Performance Wizard to collect performance information for the Windows Server machines that you would like the MAP Toolkit to suggest a Windows Azure VM size for. Windows Azure Capacity results and report After the performance metrics are collected, the Azure VM Capacity title will display the number of Virtual Machine sizes that are suggested for the Windows Server and Linux machines that were analyzed. You can then click on the Azure VM Capacity tile to see the capacity details and generate the Windows Azure VM Capacity Report. Within this report, you can view the performance data that was collected and the Virtual Machine sizes.   MAP Toolkit 8.0 Beta is available for download here Your participation and feedback is very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys. Useful References: Windows Azure Homepage How to guides for Windows Azure Virtual Machines Provisioning a SQL Server Virtual Machine on Windows Azure Windows Azure Pricing     Peter Saddow Senior Program Manager – MAP Toolkit Team

    Read the article

  • XBRL US Conference Highlights

    - by john.orourke(at)oracle.com
    Back in early November I had an opportunity to attend the XBRL US National Conference in Philadelphia. At the event, XBRL US announced that Oracle had joined the initiative, so I had a chance to participate in a press conference and attend a number of sessions. Oracle joined XBRL US so we can stay ahead of the standard and leverage it in our products, and to help drive awareness with customers and improve adoption of XBRL.

There were roughly 250 attendees at the event, about half of whom were vendors and consultants and the rest financial reporting staff from corporate filers. Event sponsors included Ernst & Young, SWIFT and Fujitsu. There were also a number of XBRL technology and service providers exhibiting at the conference.

On Monday Nov. 8th, the XBRL US Steering Committee meetings and the Annual Members meeting and reception were held. At the Annual Members meeting the big news was that the current XBRL US President, Mark Bolgiano, is moving to a new position at Howard Hughes Medical Center. Campbell Pryde, who had led taxonomy development for XBRL US, is taking over as XBRL US President. Other items that were highlighted at the members meeting included:

- The US GAAP XBRL taxonomy is being used by over 1500 SEC filers and has now been handed over to the FASB to maintain and enhance
- 16 filer training events were held in 2010
- XBRL Global Magazine was launched
- A Corporate Actions proposal was submitted to the SEC with SWIFT in May
- XBRL Labs for iPhone and the XBRL US Consistency Suite were launched
- ISO 20022 Corporate Actions alignment with XBRL was achieved
- The XBRL Credit Rating taxonomy was accepted

Tuesday Nov. 9th included keynotes, general sessions, an Innovation Workshop for Governments and Securities Professionals, and an opening reception. General sessions included:

- Lessons Learned from the SEC's Rollout of XBRL. More than 18,000 errors were identified in reviews of filings between June 2009 and September 2010. Most of these related to negative values being used where they shouldn't have been. The SEC also feels there are too many taxonomy extensions being created - mostly in the Cash Flow Statements. They emphasize using existing elements in the US GAAP taxonomy and advise filers not to create extensions to improve the visual formatting of XBRL filings.
- Investors and XBRL - Setting the Standard for Data Quality. In this panel discussion, the key learning was that CFAs, academics and the financial community are not using XBRL as expected. The issues raised included the accuracy and completeness of filings, the number of taxonomy extensions, and the limited number of tools available to help analyze XBRL data. Another big issue raised was the lack of historic results in XBRL - most analysts need 10 quarters of historic data. On the positive side, XBRL has the potential to eliminate re-keying of data and the errors that come with it, and it can improve analytic capabilities for financial analysts once more historic data is available and more companies are providing detailed tagging of their filings.
- A US Roadmap for XBRL Financial Reporting. This was a panel discussion featuring Jeff Neumann (SEC), Campbell Pryde (XBRL US), and Louis Matherne (FASB). Key points included the fact that XBRL is currently used by 1500 companies, with 8000 more companies coming in 2011. XBRL for mutual fund reporting will start in 2011 for 8000 funds, and a Credit Rating Taxonomy has now been submitted for review. The XBRL tagging/filing process is improving each quarter - more education is helping here. The FASB is looking at extensions to date and potential additions to the US GAAP taxonomy, while the SEC is evaluating filings for accuracy, consistency in tagging, and tools for analyzing data. The big news is that the FASB 2011 US GAAP Taxonomy has been completed and reviewed by the SEC. The 2011 US GAAP Taxonomy supports new FASB accounting standards issued since 2009, has new taxonomy elements for certain industries (e.g. airlines) and eliminates 500 concepts (meaning they can't be used going forward but are still supported for historical comparison). The 2011 US GAAP Taxonomy will be available for use with Q2 2011 SEC filings. More information about this can be found on the FASB web site: http://www.fasb.org/home
- Accounting Firms and XBRL. This session covered the role of audit firms, which includes awareness and education, validation of XBRL filings, and in-house transition planning. The main advice provided was that organizations should document the XBRL mapping process and perform peer comparisons and risk assessments on a regular basis.

Wednesday Nov. 10th included more keynotes, general sessions on Corporate Actions, and XBRL Essentials Workshop Training for corporate filers. The XBRL Essentials Training included:

- Getting Started
- Once You Have the Basics
- Detailed Footnote Tagging and Handling Tables
- Quality Control and Trust in the XBRL Process
- Bringing XBRL In-House: What are the Options, What should you consider?
- The US GAAP Financial Reporting Taxonomy - Overview of the 2011 Release

The XBRL Essentials Training was well attended, with about 80 people. It included a good overview of the SEC's XBRL mandate, the limited liability issue, tagging levels, the recommended planning process, the internal vs. outsourced approach, and how to manage service providers. I learned a lot from the session on detailed tagging. This is the requirement that kicks in during a company's second year of XBRL filing with the SEC and applies to financial statements, footnotes and disclosures (it does not apply to MD&A, executive communications and other information). The review of the Linkbase model, or dimensional table structure, was very interesting and can be complex to understand. The key takeaway here is that using dimensional tables in XBRL filings can help limit the number of taxonomy extensions that are required. The slides from this session are posted on the XBRL US web site (http://xbrl.us/events/Pages/archive.aspx).

For me, the main summary points and takeaways from the XBRL US conference are:

- XBRL for financial reporting has turned the corner and gone mainstream - with 1500 companies currently using it and 8000 more coming in 2011
- The expected value is not being achieved by filers or consumers of XBRL data - this will improve when more companies are filing in XBRL, more history is available, and more software tools are available for analysis (hmm, sounds like an opportunity for Oracle)
- XBRL is becoming the global standard for all business communications beyond just the financials - i.e. adoption for mutual funds, corporate actions and others planned for the future

If you would like to learn more about XBRL and the various training programs, services and software tools that are available, check out the XBRL US web site and, even better, become a member. Here's a link: http://xbrl.us/Pages/default.aspx

    Read the article

  • DRY and SRP

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/06/11/dry-and-srp.aspx

Kent Beck's XP Simplicity Rules (aka Four Rules of Simple Design) are a prioritized list of rules that, when applied to your code, generally yield a great design. As you'll see from the above link, the list has slightly evolved over time. I find today they are usually listed as:

1. All Tests Pass
2. Don't Repeat Yourself (DRY)
3. Express Intent
4. Minimalistic

These are prioritized. If your code doesn't work (rule 1) then everything else is forfeit. Go back to rule one and get the code working before worrying about anything else.

Over the years the community has debated whether the priority of rules 2 and 3 should be reversed. Some say a little duplication in the code is OK as long as it helps express intent. I've debated it myself. This recent post got me thinking about this again, hence this post.

I don't think it is fair to compare "Expressing Intent" against "DRY". This is a comparison of apples to oranges. "Expressing Intent" is a principle of code quality. "Repeating Yourself" is a code smell. A code smell is merely an indicator that there might be something wrong with the code. It takes further investigation to determine if a violation of an underlying principle of code quality has actually occurred.

For example, "using nouns for method names", "using verbs for property names", or "using Booleans for parameters" are all code smells that indicate that code probably isn't doing a good job at expressing intent. They are usually very good indicators. But what principle is the code smell of Duplication pointing to, and how good of an indicator is it?

Duplication in the code base is bad for a couple of reasons. If you need to make a change and that change needs to be made in a number of locations, it is difficult to know if you have caught all of them. This can lead to bugs if/when one of those locations is overlooked. By refactoring the code to remove all duplication you will be left with only one place to change, thereby eliminating this problem.

With most projects the code becomes the single source of truth for a project. If a production code base is inconsistent with a five-year-old requirements or design document, the production code that people are currently living with is usually declared as the current reality (or truth). Requirements or design documents at this age in a project life cycle are usually of little value.

Although comparing production code to external documentation is usually straightforward, duplication within the code base muddles this declaration of truth. When code is duplicated, small discrepancies will creep in between the two copies over time. The question then becomes: which copy is correct? As different factions debate how the software should work, trust in the software and the team behind it erodes.

The code smell of Duplication points to a violation of the "Single Source of Truth" principle. Let me define that as:

A stakeholder's requirement for a software change should never cause more than one class to change.

Violation of the Single Source of Truth principle will always result in duplication in the code. However, the inverse is not always true. Duplication in the code does not necessarily indicate that there is a violation of the Single Source of Truth principle.

To illustrate this, let's look at a retail system where the system will (1) send a transaction to a bank and (2) print a receipt for the customer.
Although these are two separate features of the system, they are closely related. The reason for printing the receipt is usually to provide an audit trail back to the bank transaction. Both features use the same data: amount charged, account number, transaction date, customer name, retail store name, and so on. Because both features use much of the same data, there is likely to be a lot of duplication between them. This duplication can be removed by making both features use the same data access layer.

Then the divergent requirements start coming. The receipt stakeholder wants a change so that the account number has the last few digits masked out to protect the customer's privacy. That can be solved with a small IF statement while still eliminating all duplication in the system. Then the bank wants to take a picture of the customer as well as capture their signature and/or PIN for enhanced security. Then the receipt owner wants to pull data from a completely different system to report the customer's loyalty program point total.

After a while you realize that the two stakeholders have somewhat similar, but ultimately different, responsibilities. They have their own reasons for pulling the data access layer in different directions. Then it dawns on you, the Single Responsibility Principle:

There should never be more than one reason for a class to change.

In this example we have two stakeholders giving two separate reasons for the data access class to change. It is a clear violation of the Single Responsibility Principle. That's a problem because it can often lead to the project owner pitting the two stakeholders against each other in a vain attempt to get them to work out a mutual single source of truth. But that doesn't exist. There are two completely valid truths that the developers need to support. How can this be supported while honouring the Single Responsibility Principle? The solution is to duplicate the data access layer and let each stakeholder control their own copy.

The Single Source of Truth and Single Responsibility Principles are very closely related. SST tells you when to remove duplication; SRP tells you when to introduce it. They may seem to be fighting each other, but really they are not. The key is to clearly identify the different responsibilities (or sources of truth) over a system. Sometimes there is a single person with that responsibility, other times there are many. This can be especially difficult if the same person has dual responsibilities. They might not even realize they are wearing multiple hats.

In my opinion Single Source of Truth should be listed as the second rule of simple design, with Express Intent at number three. Investigation of the DRY code smell should yield the proper application of SST, without violating SRP. When necessary, leave duplication in the system and let the class names express the different people that are responsible for controlling them. Knowing all the people with responsibilities over a system is the higher priority because you'll need to know this before you can express it. Although it may be a code smell when there is duplication in the code, it does not necessarily mean that the coder has chosen to be expressive over DRY or that the code is bad.
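To make the split concrete, here is a minimal C# sketch of the retail example described above. All class and property names are invented for this illustration and are not taken from the original post: the first class is the shared data access shape with two reasons to change, and the second pair duplicates it deliberately so that each stakeholder controls their own copy.

using System;

// Before: one shared class serving both the bank and the receipt owner,
// which gives it two reasons to change (an SRP violation).
public class TransactionData
{
    public decimal Amount { get; set; }
    public string AccountNumber { get; set; }
    public DateTime TransactionDate { get; set; }
    public string CustomerName { get; set; }
    public string StoreName { get; set; }
}

// After: deliberate duplication, one copy per stakeholder (per source of truth).
public class BankTransactionRecord          // changes only for bank requirements
{
    public decimal Amount { get; set; }
    public string AccountNumber { get; set; }     // full number; room for signature/PIN capture later
    public DateTime TransactionDate { get; set; }
    public string CustomerName { get; set; }
}

public class CustomerReceipt                // changes only for receipt requirements
{
    public decimal Amount { get; set; }
    public string MaskedAccountNumber { get; set; }  // e.g. "****1234"
    public DateTime TransactionDate { get; set; }
    public string StoreName { get; set; }
    public int LoyaltyPoints { get; set; }           // pulled from a different system
}

The duplication between BankTransactionRecord and CustomerReceipt is intentional: each class now has exactly one reason to change, which is the trade-off the post argues for.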

    Read the article

  • Emtel Knowledge Series - Q2/2014

    From Cyber Island to Smart Mauritius Cyber Island? Smart Mauritius? - What is Emtel talking about? "With the majority of the population living in urban environments today, the concept of "Smart Cities" has become an urgent necessity. "Smart Cities" refer to an urban transformation which, by using latest ICT technologies makes cities more efficient. Many Governments are setting out ambitious plans to build the cities of the future based on massive connectivity, high bandwidth communications, intelligent sensors and analysis of huge volumes of data. Various researches have shown four key enablers for smart city success - Government leadership, suitable technology infrastructure, solid public-private partnerships and engaged citizens. It is around these enabling factors that telecoms companies can play a vital role in assisting governments to deliver on the smart city vision." The Emtel Knowledge Series goes in compliance with Emtel's 25th anniversary celebrations throughout the year and the master of ceremony, Kim Andersen, mentioned that there will be more upcoming events on a quarterly base. As a representative of the Mauritius Software Craftsmanship Community (MSCC) there was absolutely no hesitation to join in again. Following my visit to the first Emtel Knowledge Series workshop back in February this year, it was great to have another opportunity to meet and exchange with technology experts. But quite frankly what is it with those buzz words... As far as I remember and how it was mentioned "Cyber Island" is an old initiative from around 2005/2006 which has been refreshed in 2010. It implies the empowerment of Information & Communication Technologies (ICT) as an essential factor of growth by the government here in Mauritius. Actually, the first promotional period of Cyber Island brought me here but that's another story. The venue and its own problems Like last time the event was organised and held at the Conference Hall at Cyber Tower I in Ebene. As I've been working there for some years, I know about the frustrating situation of finding a proper parking. So, does Smart Island include better solutions for the search of parking spaces? Maybe, let's see whether I will be able to answer that question at the end of the article. Anyway, after circling around the tower almost two times, I finally got a decent space to put the car, without risking to get a ticket or damage actually. International speakers and their experience Once again, Emtel did a great job to get international expertise onto the stage to share their experience and vision on this kind of embarkment. Personally, I really appreciated the fact they were speakers of global reach and could provide own-experience knowledge. Johan Gott spoke about the fundamental change that the Swedish government ignited in order to move their society and workers' environment away from heavy industry towards a knowledge-based approach. Additionally, we spoke about the effort and transformation of New York City into a greener and more efficient Smart City. Given modern technology he also advised that any kind of available Big Data should be opened to the general public - this openness would provide a playground for anyone to garner new ideas and most probably solid solutions of which no one else thought about before. 
Emtel Knowledge Series on moving from Cyber Island to Smart Mauritius

Later during the afternoon that exact statement regarding openness to and transparency of government-owned Big Data was emphasised again by the Danish speaker Kim Andersen and his former colleague Mika Jantunen from Finland. Mika continued to underline the important role of the government in providing a solid foundation for a knowledge-based society and mentioned that Finnish citizens have a constitutional right to broadband connectivity. Next to free higher (tertiary) education, Finland has already produced a good number of innovations, among them:

- First country to grant voting rights to women
- Free higher education
- Constitutional right to broadband connectivity
- Nokia
- Linux
- Angry Birds
- Sauna
- and others...

General access to the internet via broadband and/or mobile connectivity is surely a key factor towards Smart Cities, or better said Smart Mauritius given the dimensions of the island and the size of its population. CTO Paul Valette gave the audience a brief overview of the essential role that Emtel will have in moving Mauritius forward towards a knowledge-based and innovation-driven environment for its citizens. What I have seen looks really promising, and with recently published information that Mauritius has 127% mobile penetration - meaning more than one mobile, smartphone or tablet per person - it will be crucial to have the right infrastructure for these connected devices.

How would it be possible to achieve a knowledge-based society? YouTube to the rescue! Seriously, gaining more knowledge will require fast access to educational course material, as explained by Dr Kaviraj Sukon, General Director of the Open University of Mauritius. According to him a good number of high-profile universities in the world have opened their course libraries to the general public, among them edX, Coursera and Open University. Nowadays, you're actually able and enabled to learn for and earn a BSc or even MSc certification at your own pace - no need to attend classes on campus. It was really impressive to see the number of available hours - more than enough for a life-long learning experience!

Networking in the name of MSCC

As briefly mentioned above, I was about to combine two approaches for this workshop: of course, getting the latest information and updates on available Emtel services, especially for my business here on the west coast of the island, but also to meet and greet new people for the MSCC. And I think it was very positive on both sides. Let me quickly describe some of the key aspects that happened during the day:

- Met with Arnaud Meslier and Kellie, both from Microsoft, to swap the latest information on IT events. Hereby, I got an invite to the Microsoft Windows Phone 8.1 Dev Camp.
- Got in touch with Arvin Lockee, Emtel, to check our options to meet with the data team, and seized the opportunity to arrange a visit to the Emtel Data Centre.
- Had a great chat with Avinash Meetoo, Knowledge 7, Kim Andersen and Mika Jantunen about the situation of teaching and learning in general, and specifically in the private sector here in Mauritius.
- Additionally, a number of various other interesting chats...

Once again, I'm catching up on a couple of business cards in order to provide more background information about the MSCC, and to create a better awareness of MSCC within the local IT businesses. There is more to come soon!

Resume of the day

The number of attendees at this event doubled or even tripled compared to the first workshop.
The whole organisation was improved massively, and the combination of presentations and summarising panel discussions was better than at the previous workshop back in February. Overall, once again a well-organised workshop, and I'm already looking forward to joining the next one in Q3.

Update: At the end of July we finally managed to visit the Emtel Data Centre in Arsenal. It was an interesting opportunity for some of our MSCC members.

    Read the article

  • Windows Azure Virtual Machine Readiness and Capacity Assessment for SQL Server

    - by SQLOS Team
    Windows Azure Virtual Machine Readiness and Capacity Assessment for Windows Server Machines Running SQL Server

With the release of MAP Toolkit 8.0 Beta, we have added a new scenario to assess your Windows Azure Virtual Machine readiness. The MAP 8.0 Beta performs a comprehensive assessment of Windows Servers running SQL Server to determine your level of readiness to migrate an on-premise physical or virtual machine to Windows Azure Virtual Machines. The MAP Toolkit then offers suggested changes to prepare the machines for migration, such as upgrading the operating system or SQL Server.

MAP Toolkit 8.0 Beta is available for download here. Your participation and feedback is very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys.

Now, let's walk through the MAP Toolkit tasks for completing the Windows Azure Virtual Machine assessment and capacity planning. The tasks include the following:

- Perform an inventory
- View the Windows Azure VM Readiness results and report
- Collect performance data to determine VM sizing
- View the Windows Azure Capacity results and report

Perform an inventory:

1. To perform an inventory against a single machine or across a complete environment, choose Perform an Inventory to launch the Inventory and Assessment Wizard as shown below:

2. After the Inventory and Assessment Wizard launches, select either the Windows computers or SQL Server scenario to inventory Windows machines. HINT: If you don't care about completely inventorying a machine, just select the SQL Server scenario. Click Next to continue.

3. On the Discovery Methods page, select how you want to discover computers and then click Next to continue. Description of discovery methods:

- Use Active Directory Domain Services -- This method allows you to query a domain controller via the Lightweight Directory Access Protocol (LDAP) and select computers in all or specific domains, containers, or OUs. Use this method if all computers and devices are in AD DS.
- Windows networking protocols -- This method uses the WIN32 LAN Manager application programming interfaces to query the Computer Browser service for computers in workgroups and Windows NT 4.0-based domains. If the computers on the network are not joined to an Active Directory domain, use only the Windows networking protocols option to find computers.
- System Center Configuration Manager (SCCM) -- This method enables you to inventory computers managed by System Center Configuration Manager (SCCM). You need to provide credentials to the System Center Configuration Manager server in order to inventory the managed computers. When you select this option, the MAP Toolkit will query SCCM for a list of computers and then MAP will connect to these computers.
- Scan an IP address range -- This method allows you to specify the starting address and ending address of an IP address range. The wizard will then scan all IP addresses in the range and inventory only those computers. Note: This option can perform poorly if many IP addresses aren't being used within the range.
- Manually enter computer names and credentials -- Use this method if you want to inventory a small number of specific computers.
- Import computer names from a file -- Using this method, you can create a text file with a list of computer names that will be inventoried.
4. On the All Computers Credentials page, enter the accounts that have administrator rights to connect to the discovered machines. This does not need to be a domain account, but it needs to be a local administrator. I have entered my domain account that is an administrator on my local machine. Click Next after one or more accounts have been added.

NOTE: The MAP Toolkit primarily uses Windows Management Instrumentation (WMI) to collect hardware, device, and software information from the remote computers. In order for the MAP Toolkit to successfully connect and inventory computers in your environment, you have to configure your machines to allow inventory through WMI and also configure your firewall to allow remote access through WMI. The MAP Toolkit also requires remote registry access for certain assessments. In addition to enabling WMI, you need accounts with administrative privileges to access desktops and servers in your environment.

5. On the Credentials Order page, select the order in which you want the MAP Toolkit to connect to the machine and SQL Server. Generally just accept the defaults and click Next.

6. On the Enter Computers Manually page, click Create to pull up a dialog to enter one or more computer names.

7. On the Summary page confirm your settings and then click Finish. After clicking Finish the inventory process will start, as shown below:

Windows Azure Readiness results and report

After the inventory process has completed, you can review the results under the Database scenario. On the tile, you will see the number of Windows Server machines with SQL Server that were analyzed, the number of machines that are ready to move without changes, and the number of machines that require further changes. If you click this Azure VM Readiness tile, you will see additional details and can generate the Windows Azure VM Readiness Report. After the report is generated, select View | Saved Reports and Proposals to view the location of the report. Open up the WindowsAzureVMReadiness* report in Excel. On the Windows tab, you can see the results of the assessment. This report has a column for the Operating System and SQL Server assessment and provides a recommendation on how to resolve it if a component is not supported.

Collect performance data

Launch the Performance Wizard to collect performance information for the Windows Server machines that you would like the MAP Toolkit to suggest a Windows Azure VM size for.

Windows Azure Capacity results and report

After the performance metrics are collected, the Azure VM Capacity tile will display the number of Virtual Machine sizes that are suggested for the Windows Server and Linux machines that were analyzed. You can then click on the Azure VM Capacity tile to see the capacity details and generate the Windows Azure VM Capacity Report. Within this report, you can view the performance data that was collected and the suggested Virtual Machine sizes.

MAP Toolkit 8.0 Beta is available for download here. Your participation and feedback is very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys.

Useful references:

- Windows Azure Homepage
- How-to guides for Windows Azure Virtual Machines
- Provisioning a SQL Server Virtual Machine on Windows Azure
- Windows Azure Pricing

Peter Saddow
Senior Program Manager - MAP Toolkit Team
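Since WMI connectivity is called out above as the main prerequisite for the inventory step, it can be worth verifying it independently of the toolkit. Below is a minimal C# sketch (not part of the MAP Toolkit) that connects to a remote machine over WMI and reads the operating system caption; the machine name is a placeholder, and by default the caller's own credentials are used.

using System;
using System.Management;   // add a reference to System.Management.dll

class WmiConnectivityCheck
{
    static void Main(string[] args)
    {
        // Placeholder name - replace with a machine you intend to inventory.
        string machine = args.Length > 0 ? args[0] : "SQLSERVER01";

        var options = new ConnectionOptions
        {
            // Uses the current user's credentials; set Username/Password here
            // if you want to test an alternate administrator account instead.
            Impersonation = ImpersonationLevel.Impersonate
        };

        var scope = new ManagementScope("\\\\" + machine + "\\root\\cimv2", options);
        scope.Connect();   // throws if DCOM/WMI or the firewall blocks remote access

        var query = new ObjectQuery("SELECT Caption, Version FROM Win32_OperatingSystem");
        using (var searcher = new ManagementObjectSearcher(scope, query))
        {
            foreach (ManagementObject os in searcher.Get())
            {
                Console.WriteLine("{0}: {1} ({2})", machine, os["Caption"], os["Version"]);
            }
        }
    }
}

If scope.Connect() fails, the same failure will typically block the MAP inventory for that machine, which narrows the problem down to WMI/firewall configuration rather than the toolkit itself.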

    Read the article

  • C to C++ Conversion [closed]

    - by Annalyne
    Can someone convert this code to C++, pretty please? :( #include <stdio.h> #include <stdlib.h> #include <time.h> #define WEAPON_ROPE 10 #define WEAPON_REVOLVER 20 #define WEAPON_LEADPIPE 30 #define WEAPON_CANDLESTICK 40 #define WEAPON_KNIFE 50 #define WEAPON_WRENCH 60 #define PEOPLE_MRGREEN 100 #define PEOPLE_MSSCARLET 200 #define PEOPLE_CONLMUSTARD 300 #define PEOPLE_PROFPLUM 400 #define PEOPLE_MISPEACOCK 500 #define PEOPLE_MISWHITE 600 #define PLACE_KITCHEN 1 #define PLACE_HALL 2 #define PLACE_POOLROOM 3 #define PLACE_STUDY 4 #define PLACE_LOUNG 5 #define PLACE_LIBRARY 6 #define PLACE_CONSERVATORY 7 #define PLACE_DINING 8 #define PLACE_BILLIARDS 9 int main() { int die = 0; int players[6][9] = {{0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0}}; int allCards[] = {WEAPON_ROPE, WEAPON_REVOLVER, WEAPON_LEADPIPE, WEAPON_CANDLESTICK, WEAPON_CANDLESTICK, WEAPON_KNIFE, WEAPON_WRENCH, PEOPLE_MRGREEN, PEOPLE_MSSCARLET, PEOPLE_CONLMUSTARD, PEOPLE_CONLMUSTARD, PEOPLE_PROFPLUM, PEOPLE_MISPEACOCK, PEOPLE_MISWHITE, PLACE_KITCHEN, PLACE_HALL, PLACE_POOLROOM, PLACE_STUDY, PLACE_LOUNG, PLACE_LIBRARY, PLACE_CONSERVATORY, PLACE_DINING, PLACE_BILLIARDS}; int deckSize = 23; // number of cards in allCards array int count; for (count = 0; count < deckSize; ++count) { printf(", %d", allCards[count]); } // End for // These three array's are so you can put a card back, if need be... int weaponCards[] = {WEAPON_ROPE, WEAPON_REVOLVER, WEAPON_LEADPIPE, WEAPON_CANDLESTICK, WEAPON_CANDLESTICK, WEAPON_KNIFE, WEAPON_WRENCH}; int weaponDeckSize = 7; int peopleCards[] = {PEOPLE_MRGREEN, PEOPLE_MSSCARLET, PEOPLE_CONLMUSTARD, PEOPLE_CONLMUSTARD, PEOPLE_PROFPLUM, PEOPLE_MISPEACOCK, PEOPLE_MISWHITE}; int peopleDeckSize = 7; int placeCards[] = {PLACE_KITCHEN, PLACE_HALL, PLACE_POOLROOM, PLACE_STUDY, PLACE_LOUNG, PLACE_LIBRARY, PLACE_CONSERVATORY, PLACE_DINING, PLACE_BILLIARDS}; int placeDeckSize = 9; srand(clock()); // seed rand() using clock() which gives // the current tick your processor is at... int killer[3]; // no need to initialize yet. killer[0-2] will initialize int deckShuffle = rand() % weaponDeckSize; // picks one number out of the deck killer[0] = weaponCards[deckShuffle]; allCards[deckShuffle] = 0; // Card drawn. No longer exists in deck deckShuffle = rand() % peopleDeckSize; // picks another random card out of the deck killer[1] = peopleCards[deckShuffle]; allCards[deckShuffle + weaponDeckSize] = 0; // Card drawn. No longer exists in deck deckShuffle = rand() % placeDeckSize; // randomly picks the last card needed killer[2] = placeCards[deckShuffle]; allCards[deckShuffle + weaponDeckSize + peopleDeckSize] = 0; // Card drawn. 
No longer exists in deck int numberOfCards = 0; printf("CLUE\n"); printf("written by John Schintone\n"); printf("Origonal game delvoped by Hasbro\n"); int numberOfPlayers = 0; while ((numberOfPlayers < 3) || (numberOfPlayers > 6)) { printf("How many players are Going to play :\n"); printf("[number] > "); scanf("%d",&numberOfPlayers); // A very fast if statement which only uses integers/char's switch(numberOfPlayers) { case 6: { numberOfCards = 3; } break; case 5: { numberOfCards = 4; } break; case 4: { numberOfCards = 5; } break; case 3: { numberOfCards = 6; } break; default: { printf("You must enter a number between 3 and 6...\n"); } // End default } // End switch } // End while int index1, index2; // Note: ++index1; is faster than index1++; and will almost always // produce better code (index1++ happens after this statement line. // ++index1 increments index1 before this statement line) for (index1 = 0; index1 < numberOfPlayers; ++index1) { printf("Player %d", index1); for (index2 = 0; index2 < numberOfCards; ++index2) { // Remember that allCards[deckShuffle] == 0 because we removed that // card ages ago... works out well, just don't forget you did that : ) while (allCards[deckShuffle] == 0) { deckShuffle = rand() % deckSize; } // End while players[index1][index2] = allCards[deckShuffle]; allCards[deckShuffle] = 0; // Card removed for after loop... printf(", %d", players[index1][index2]); switch(players[index1][index2]) { case WEAPON_ROPE: { } break; // Add more... case PEOPLE_MRGREEN: { } break; // Add more... case PLACE_KITCHEN: { } break; // Add more... default: { printf("Program has caught player %d cheating...", index1); } // End default } // End switch } // End for printf("\n"); } // End for printf("The killer is %d, with the %d, and in the %d \n\n", killer[0], killer[1], killer[2]); printf("Type h for this help... \n"); printf("Type e to escape... \n"); printf("Type r to roll the die... \n"); char command = '\0'; // \0 represents zero, or the null character while (command != 'e') { printf("[one character] > "); scanf("%c", &command); if (command == 'r') { die = rand() % 6 + 1; printf("Your number is: %d \n", die); } // end while if (command == 'h') { printf("Type h for this help... \n"); printf("Type e to escape... \n"); printf("Type r to roll the die... \n"); } // End if printf("\n"); } // End while return(0); // Success. Program worked ok } // End main() Function

    Read the article

  • Data Source Connection Pool Sizing

    - by Steve Felts
    One of the most time-consuming procedures of a database application is establishing a connection. The connection pooling of the data source can be used to minimize this overhead. That argues for using the data source instead of accessing the database driver directly.

Configuring the size of the pool in the data source is somewhere between an art and a science - this article will try to move it closer to science. From the beginning, the WLS data source has had initial capacity and maximum capacity configuration values. When the system starts up and when it shrinks, initial capacity is used. The pool can grow to maximum capacity. Customers found that they might want to set the initial capacity to 0 (more on that later) but didn't want the pool to shrink to 0. In WLS 10.3.6, we added minimum capacity to specify the lower limit to which a pool will shrink. If minimum capacity is not set, it defaults to the initial capacity for upward compatibility. We also did some work on the shrinking in release 10.3.4 to reduce thrashing; the algorithm that used to shrink to the maximum of the currently used connections or the initial capacity (basically the unused connections were all released) was changed to shrink by half of the unused connections.

The simple approach to sizing the pool is to set the initial/minimum capacity to the maximum capacity. Doing this creates all connections at startup, avoiding creating connections on demand, and the pool is stable. However, there are a number of reasons not to take this simple approach.

When WLS is booted, the deployment of the data source includes synchronously creating the connections. The more connections that are configured in initial capacity, the longer the boot time for WLS (there have been several projects for parallel boot in WLS but none that are available). Related to creating a lot of connections at boot time is the problem of logon storms (the database gets too much work at one time). WLS has a solution for that by setting the login delay seconds on the pool, but that also increases the boot time.

There are a number of cases where it is desirable to set the initial capacity to 0. By doing that, the overhead of creating connections is deferred out of the boot and the database doesn't need to be available. An application may not want WLS to automatically connect to the database until it is actually needed, such as for some cold/warm failover configurations.

There are a number of cases where minimum capacity should be less than maximum capacity. Connections are generally expensive to keep around. They cause state to be kept on both the client and the server, and the state on the backend may be heavy (for example, a process). Depending on the vendor, connection usage may cost money. If the workload is not constant, then database connections can be freed up by shrinking the pool when connections are not in use.
When using Active GridLink, connections can be created as needed according to runtime load balancing (RLB) percentages instead of by connection load balancing (CLB) during data source deployment.

Shrinking is an effective technique for clearing the pool when connections are not in use. In addition to the obvious reason that there are times when the workload is lighter, there are some configurations where the database and/or firewall conspire to make long-unused or too-old connections no longer viable. There are also some data source features where the connection has state and cannot be used again unless the state matches the request. Examples of this are identity-based pooling, where the connection has a particular owner, and XA affinity, where the connection is associated with a particular RAC node. At this point, WLS does not re-purpose (discard/replace) connections, and shrinking is a way to get rid of the unused existing connection and get a new one with the correct state when needed.

So far, the discussion has focused on the relationship of initial, minimum, and maximum capacity. Computing the maximum size requires some knowledge about the application and the current number of simultaneously active users, web sessions, batch programs, or whatever access patterns are common. The applications should be written to only reserve and close connections as needed, but multiple statements, if needed, should be done in one reservation (don't get/close more often than necessary). This means that the size of the pool is likely to be significantly smaller than the number of users. If possible, you can pick a size and see how it performs under simulated or real load. There is a high-water mark statistic (ActiveConnectionsHighCount) that tracks the maximum number of connections concurrently used. In general, you want the size to be big enough so that you never run out of connections, but no bigger. It will need to deal with spikes in usage, which is where shrinking after the spike is important. Of course, the database capacity also has a big influence on the decision since it's important not to overload the database machine. Planning also needs to happen if you are running in a Multi-Data Source or Active GridLink configuration and expect that the remaining nodes will take over the connections when one of the nodes in the cluster goes down. For XA affinity, additional headroom is also recommended.

In summary, setting initial and maximum capacity to be the same may be simple, but there are many other factors that may be important in making the decision about sizing.
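To make the growth and shrink behaviour easier to picture, here is a small C# toy model of the three capacity settings and the shrink-by-half rule described above. It is purely illustrative - it is not WebLogic code, and the class and member names are invented for this sketch.

using System;

// Toy model of initial/min/max capacity and "shrink by half of the unused
// connections". Not WLS code; just an illustration of how the knobs interact.
class ToyPoolModel
{
    public int InitialCapacity { get; private set; }
    public int MinCapacity { get; private set; }
    public int MaxCapacity { get; private set; }

    public int Total { get; private set; }   // physical connections currently held
    public int InUse { get; set; }           // connections currently reserved

    public ToyPoolModel(int initial, int min, int max)
    {
        InitialCapacity = initial;
        MinCapacity = min;
        MaxCapacity = max;
        Total = initial;                     // created synchronously at "boot"
    }

    // Grow on demand, one connection at a time, up to MaxCapacity.
    public bool TryReserve()
    {
        if (InUse < Total) { InUse++; return true; }
        if (Total < MaxCapacity) { Total++; InUse++; return true; }
        return false;                        // pool exhausted
    }

    // Release half of the unused connections, but never drop below the
    // minimum capacity or below the number of connections still in use.
    public void Shrink()
    {
        int unused = Total - InUse;
        int target = Total - unused / 2;
        Total = Math.Max(target, Math.Max(MinCapacity, InUse));
    }
}

class Demo
{
    static void Main()
    {
        var pool = new ToyPoolModel(0, 5, 50);            // initial=0, min=5, max=50
        for (int i = 0; i < 30; i++) pool.TryReserve();   // traffic spike grows the pool to 30
        pool.InUse = 4;                                   // load drops off
        pool.Shrink();
        Console.WriteLine("Total={0}, InUse={1}", pool.Total, pool.InUse);  // Total=17, InUse=4
    }
}

The point of the toy model is simply that a pool sized for the spike (30 here) does not have to stay there: repeated shrink passes walk it back toward the minimum capacity once the spike has passed, which is the behaviour the article recommends planning around.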

    Read the article

  • Converting a WPFToolkit DataGrid from 1D list to 2D matrix

    - by user61073
    Hello - I am wondering if anyone has attempted the following or has an idea as to how to do it. I have a WPFToolkit DataGrid which is bound to an ObservableCollection of items. As such, the DataGrid is shown with as many rows as there are items in the ObservableCollection, and as many columns as I have defined for the DataGrid. That all is good. What I now need is to provide another view of the same data, only, instead, the DataGrid is shown with as many cells as there are items in the ObservableCollection. So let's say my ObservableCollection has 100 items in it. The original scenario showed the DataGrid with 100 rows and 1 column. In the modified scenario, I need to show it with 10 rows and 10 columns, where each cell shows the value that was in the original representation. In other words, I need to transform my 1D ObservableCollection to a 2D ObservableCollection and display it in the DataGrid. I know how to do that programmatically in the code behind, but can it be done in XAML?

Let me simplify the problem a little, in case anybody can have a crack at this. The XAML below does the following:

- Defines an XmlDataProvider just for dummy data
- Creates a DataGrid with 10 columns, where each column is a DataGridTemplateColumn using the same CellTemplate
- The CellTemplate is a simple TextBlock bound to an XML element

If you run the XAML below, you will find that the DataGrid ends up with 5 rows, one for each book, and 10 columns that have identical content (all showing the book titles). However, what I am trying to accomplish, albeit with a different data set, is that in this case I would end up with one row, with each book title appearing in a single cell in row 1, occupying cells 0-4, and nothing in cells 5-9. Then, if I added more data and had 12 books in my XML data source, I would get row 1 completely filled (cells covering the first 10 titles) and row 2 would get the first 2 cells filled. Can my scenario be accomplished primarily in XAML, or should I resign myself to working in the code behind? Any guidance would greatly be appreciated. Thanks so much!
<UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:custom="http://schemas.microsoft.com/wpf/2008/toolkit" mc:Ignorable="d" x:Name="UserControl" d:DesignWidth="600" d:DesignHeight="400" > <UserControl.Resources> <XmlDataProvider x:Key="InventoryData" XPath="Inventory/Books"> <x:XData> <Inventory xmlns=""> <Books> <Book ISBN="0-7356-0562-9" Stock="in" Number="9"> <Title>XML in Action</Title> <Summary>XML Web Technology</Summary> </Book> <Book ISBN="0-7356-1370-2" Stock="in" Number="8"> <Title>Programming Microsoft Windows With C#</Title> <Summary>C# Programming using the .NET Framework</Summary> </Book> <Book ISBN="0-7356-1288-9" Stock="out" Number="7"> <Title>Inside C#</Title> <Summary>C# Language Programming</Summary> </Book> <Book ISBN="0-7356-1377-X" Stock="in" Number="5"> <Title>Introducing Microsoft .NET</Title> <Summary>Overview of .NET Technology</Summary> </Book> <Book ISBN="0-7356-1448-2" Stock="out" Number="4"> <Title>Microsoft C# Language Specifications</Title> <Summary>The C# language definition</Summary> </Book> </Books> <CDs> <CD Stock="in" Number="3"> <Title>Classical Collection</Title> <Summary>Classical Music</Summary> </CD> <CD Stock="out" Number="9"> <Title>Jazz Collection</Title> <Summary>Jazz Music</Summary> </CD> </CDs> </Inventory> </x:XData> </XmlDataProvider> <DataTemplate x:Key="GridCellTemplate"> <TextBlock> <TextBlock.Text> <Binding XPath="Title"/> </TextBlock.Text> </TextBlock> </DataTemplate> </UserControl.Resources> <Grid x:Name="LayoutRoot"> <custom:DataGrid HorizontalAlignment="Stretch" VerticalAlignment="Stretch" IsSynchronizedWithCurrentItem="True" Background="{DynamicResource WindowBackgroundBrush}" HeadersVisibility="All" RowDetailsVisibilityMode="Collapsed" SelectionUnit="CellOrRowHeader" CanUserResizeRows="False" GridLinesVisibility="None" RowHeaderWidth="35" AutoGenerateColumns="False" CanUserReorderColumns="False" CanUserSortColumns="False"> <custom:DataGrid.Columns> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="01" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="02" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="03" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="04" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="05" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="06" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="07" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="08" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="09" /> <custom:DataGridTemplateColumn CellTemplate="{StaticResource GridCellTemplate}" Header="10" /> </custom:DataGrid.Columns> <custom:DataGrid.ItemsSource> <Binding Source="{StaticResource InventoryData}" XPath="Book"/> </custom:DataGrid.ItemsSource> </custom:DataGrid> </Grid>
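For reference, here is a minimal C# sketch of the 1D-to-2D reshape the poster describes doing in code-behind. The Item, Row and GridReshaper types and the fixed width of 10 are invented for this illustration; it is not wired to the XmlDataProvider sample above.

using System.Collections.ObjectModel;

// Hypothetical flat item type standing in for the poster's 1D collection.
public class Item
{
    public string Title { get; set; }
}

// One grid row holding a fixed number of cells; unused trailing cells stay null.
public class Row
{
    public Item[] Cells { get; private set; }
    public Row(int width) { Cells = new Item[width]; }
}

public static class GridReshaper
{
    // Breaks a 1D collection into rows of `width` cells (10 in the question).
    public static ObservableCollection<Row> ToRows(ObservableCollection<Item> source, int width)
    {
        var rows = new ObservableCollection<Row>();
        Row current = null;

        for (int i = 0; i < source.Count; i++)
        {
            if (i % width == 0)
            {
                current = new Row(width);
                rows.Add(current);
            }
            current.Cells[i % width] = source[i];
        }
        return rows;
    }
}

With the DataGrid's ItemsSource set to the reshaped collection, each DataGridTemplateColumn's cell template can bind to an index on the row, for example {Binding Cells[3].Title} for the fourth column; empty trailing cells simply render blank.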

    Read the article

  • YouTube API Security Error Flex

    - by 23tux
    Hi, I've tried to use the YoutTube API within a Flex project. But i got this error: *** Security Sandbox Violation *** SecurityDomain 'http://www.youtube.com/apiplayer?version=3' tried to access incompatible context 'file:///Users/YouTubePlayer/bin-debug/YouTubePlayer.html' Here are the two files: <?xml version="1.0" encoding="utf-8"?> <s:Application xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/halo" minWidth="1024" minHeight="768" xmlns:youtube="youtube.*" creationComplete="init();"> <fx:Script> <![CDATA[ [Bindable] private var ready:Boolean = true; private function init():void { Security.allowInsecureDomain("*"); Security.allowDomain("*"); Security.allowDomain('www.youtube.com'); Security.allowDomain('youtube.com'); Security.allowDomain('s.ytimg.com'); Security.allowDomain('i.ytimg.com'); } private function changing():void { /* trace("currentTime: " + player.getCurrentTime()); trace("startTime: " + player.startTime); trace("stopTime: " + player.stopTime); timeSlider.value = player.getCurrentTime() */ } private function startPlaying():void { player.play(); } private function checkStartSlider():void { if(startSlider.value > stopSlider.value) stopSlider.value = startSlider.value + 1; } private function checkStopSlider():void { if(stopSlider.value < startSlider.value) startSlider.value = stopSlider.value - 1; } ]]> </fx:Script> <s:VGroup> <youtube:Player id="player" videoID="DVFvcVuWyfE" change="changing();" ready="ready=true"/> <s:HGroup> <s:Button label="play" click="startPlaying();" /> </s:HGroup> <s:HGroup> <s:HSlider id="timeSlider" width="250" minimum="0" maximum="{player.stopTime}" snapInterval=".01" enabled="{ready}"/> <s:Label id="currentTimeLbl" text="current time: 0" /> </s:HGroup> <s:HGroup> <s:HSlider id="startSlider" width="250" minimum="0" maximum="{player.stopTime}" snapInterval=".01" change="checkStartSlider();" enabled="{ready}" value="0"/> <s:Label id="startTimeLbl" text="start time: {player.startTime}" /> </s:HGroup> <s:HGroup> <s:HSlider id="stopSlider" width="250" minimum="0" maximum="{player.stopTime}" snapInterval=".01" change="checkStopSlider();" enabled="{ready}" value="{player.stopTime}"/> <s:Label id="stopTimeLbl" text="stop time: {player.stopTime}" /> </s:HGroup> </s:VGroup> </s:Application> Here is the player package youtube { import flash.display.Loader; import flash.events.Event; import flash.events.TimerEvent; import flash.net.URLRequest; import flash.system.Security; import flash.utils.Timer; import mx.core.UIComponent; [Event(name="change", type="flash.events.Event")] [Event(name="ready", type="flash.events.Event")] public class Player extends UIComponent { private var player:Object; private var loader:Loader; private var _startTime:Number = 0; private var _stopTime:Number = 0; private var _videoID:String; private var metadataTimer:Timer = new Timer(200); private var playTimer:Timer = new Timer(200); public function Player() { // The player SWF file on www.youtube.com needs to communicate with your host // SWF file. Your code must call Security.allowDomain() to allow this // communication. Security.allowInsecureDomain("*"); Security.allowDomain("*"); // This will hold the API player instance once it is initialized. 
loader = new Loader(); loader.contentLoaderInfo.addEventListener(Event.INIT, onLoaderInit); loader.load(new URLRequest("http://www.youtube.com/apiplayer?version=3")); } private function onLoaderInit(event:Event):void { addChild(loader); loader.content.addEventListener("onReady", onPlayerReady); loader.content.addEventListener("onError", onPlayerError); loader.content.addEventListener("onStateChange", onPlayerStateChange); loader.content.addEventListener("onPlaybackQualityChange", onVideoPlaybackQualityChange); } private function onPlayerReady(event:Event):void { // Event.data contains the event parameter, which is the Player API ID trace("player ready:", Object(event).data); // Once this event has been dispatched by the player, we can use // cueVideoById, loadVideoById, cueVideoByUrl and loadVideoByUrl // to load a particular YouTube video. player = loader.content; // Set appropriate player dimensions for your application player.setSize(0, 0); } private function onPlayerError(event:Event):void { // Event.data contains the event parameter, which is the error code trace("player error:", Object(event).data); } private function onPlayerStateChange(event:Event):void { // Event.data contains the event parameter, which is the new player state trace("player state:", Object(event).data); } private function onVideoPlaybackQualityChange(event:Event):void { // Event.data contains the event parameter, which is the new video quality trace("video quality:", Object(event).data); } [Bindable] public function get videoID():String { return _videoID; } public function set videoID(value:String):void { _videoID = value; } [Bindable] public function get stopTime():Number { return _stopTime; } public function set stopTime(value:Number):void { _stopTime = value; } [Bindable] public function get startTime():Number { return _startTime; } public function set startTime(value:Number):void { _startTime = value; } public function play():void { if(_videoID!="") { player.loadVideoById(_videoID, 0); // add the event listener, so that all 200 milliseconds is an event dispatched metadataTimer.addEventListener(TimerEvent.TIMER, metadataTimeHandler); // if the timer is running, stop and reset it if(metadataTimer.running) metadataTimer.reset(); else metadataTimer.start(); } } private function metadataTimeHandler(e:TimerEvent):void { if(player.getDuration() > 0) { startTime = 0; stopTime = player.getDuration(); metadataTimer.reset(); metadataTimer.stop(); metadataTimer.removeEventListener(TimerEvent.TIMER, metadataTimeHandler); player.playVideo(); playTimer.addEventListener(TimerEvent.TIMER, playTimerHandler); dispatchEvent(new Event("ready")); } } private function playTimerHandler(e:TimerEvent):void { if(getCurrentTime() > _stopTime) { seekTo(startTime); } dispatchEvent(new Event(Event.CHANGE)); } public function getCurrentTime():Number { if(!player.getCurrentTime()) return 0; else return player.getCurrentTime(); } public function seekTo(time:uint):void { player.seekTo(time); } } } Hope someone can help. thx, tux

    Read the article

  • F# - Facebook Hacker Cup - Double Squares

    - by Jacob
    I'm working on strengthening my F#-fu and decided to tackle the Facebook Hacker Cup Double Squares problem. I'm having some problems with the run-time and was wondering if anyone could help me figure out why it is so much slower than my C# equivalent. There's a good description from another post; Source: Facebook Hacker Cup Qualification Round 2011 A double-square number is an integer X which can be expressed as the sum of two perfect squares. For example, 10 is a double-square because 10 = 3^2 + 1^2. Given X, how can we determine the number of ways in which it can be written as the sum of two squares? For example, 10 can only be written as 3^2 + 1^2 (we don't count 1^2 + 3^2 as being different). On the other hand, 25 can be written as 5^2 + 0^2 or as 4^2 + 3^2. You need to solve this problem for 0 = X = 2,147,483,647. Examples: 10 = 1 25 = 2 3 = 0 0 = 1 1 = 1 My basic strategy (which I'm open to critique on) is to; Create a dictionary (for memoize) of the input numbers initialzed to 0 Get the largest number (LN) and pass it to count/memo function Get the LN square root as int Calculate squares for all numbers 0 to LN and store in dict Sum squares for non repeat combinations of numbers from 0 to LN If sum is in memo dict, add 1 to memo Finally, output the counts of the original numbers. Here is the F# code (See code changes at bottom) I've written that I believe corresponds to this strategy (Runtime: ~8:10); open System open System.Collections.Generic open System.IO /// Get a sequence of values let rec range min max = seq { for num in [min .. max] do yield num } /// Get a sequence starting from 0 and going to max let rec zeroRange max = range 0 max /// Find the maximum number in a list with a starting accumulator (acc) let rec maxNum acc = function | [] -> acc | p::tail when p > acc -> maxNum p tail | p::tail -> maxNum acc tail /// A helper for finding max that sets the accumulator to 0 let rec findMax nums = maxNum 0 nums /// Build a collection of combinations; ie [1,2,3] = (1,1), (1,2), (1,3), (2,2), (2,3), (3,3) let rec combos range = seq { let count = ref 0 for inner in range do for outer in Seq.skip !count range do yield (inner, outer) count := !count + 1 } let rec squares nums = let dict = new Dictionary<int, int>() for s in nums do dict.[s] <- (s * s) dict /// Counts the number of possible double squares for a given number and keeps track of other counts that are provided in the memo dict. let rec countDoubleSquares (num: int) (memo: Dictionary<int, int>) = // The highest relevent square is the square root because it squared plus 0 squared is the top most possibility let maxSquare = System.Math.Sqrt((float)num) // Our relevant squares are 0 to the highest possible square; note the cast to int which shouldn't hurt. let relSquares = range 0 ((int)maxSquare) // calculate the squares up front; let calcSquares = squares relSquares // Build up our square combinations; ie [1,2,3] = (1,1), (1,2), (1,3), (2,2), (2,3), (3,3) for (sq1, sq2) in combos relSquares do let v = calcSquares.[sq1] + calcSquares.[sq2] // Memoize our relevant results if memo.ContainsKey(v) then memo.[v] <- memo.[v] + 1 // return our count for the num passed in memo.[num] // Read our numbers from file. 
//let lines = File.ReadAllLines("test2.txt") //let nums = [ for line in Seq.skip 1 lines -> Int32.Parse(line) ] // Optionally, read them from straight array let nums = [1740798996; 1257431873; 2147483643; 602519112; 858320077; 1048039120; 415485223; 874566596; 1022907856; 65; 421330820; 1041493518; 5; 1328649093; 1941554117; 4225; 2082925; 0; 1; 3] // Initialize our memoize dictionary let memo = new Dictionary<int, int>() for num in nums do memo.[num] <- 0 // Get the largest number in our set, all other numbers will be memoized along the way let maxN = findMax nums // Do the memoize let maxCount = countDoubleSquares maxN memo // Output our results. for num in nums do printfn "%i" memo.[num] // Have a little pause for when we debug let line = Console.Read() And here is my version in C# (Runtime: ~1:40: using System; using System.Collections.Generic; using System.Diagnostics; using System.IO; using System.Linq; using System.Text; namespace FBHack_DoubleSquares { public class TestInput { public int NumCases { get; set; } public List<int> Nums { get; set; } public TestInput() { Nums = new List<int>(); } public int MaxNum() { return Nums.Max(); } } class Program { static void Main(string[] args) { // Read input from file. //TestInput input = ReadTestInput("live.txt"); // As example, load straight. TestInput input = new TestInput { NumCases = 20, Nums = new List<int> { 1740798996, 1257431873, 2147483643, 602519112, 858320077, 1048039120, 415485223, 874566596, 1022907856, 65, 421330820, 1041493518, 5, 1328649093, 1941554117, 4225, 2082925, 0, 1, 3, } }; var maxNum = input.MaxNum(); Dictionary<int, int> memo = new Dictionary<int, int>(); foreach (var num in input.Nums) { if (!memo.ContainsKey(num)) memo.Add(num, 0); } DoMemoize(maxNum, memo); StringBuilder sb = new StringBuilder(); foreach (var num in input.Nums) { //Console.WriteLine(memo[num]); sb.AppendLine(memo[num].ToString()); } Console.Write(sb.ToString()); var blah = Console.Read(); //File.WriteAllText("out.txt", sb.ToString()); } private static int DoMemoize(int num, Dictionary<int, int> memo) { var highSquare = (int)Math.Floor(Math.Sqrt(num)); var squares = CreateSquareLookup(highSquare); var relSquares = squares.Keys.ToList(); Debug.WriteLine("Starting - " + num.ToString()); Debug.WriteLine("RelSquares.Count = {0}", relSquares.Count); int sum = 0; var index = 0; foreach (var square in relSquares) { foreach (var inner in relSquares.Skip(index)) { sum = squares[square] + squares[inner]; if (memo.ContainsKey(sum)) memo[sum]++; } index++; } if (memo.ContainsKey(num)) return memo[num]; return 0; } private static TestInput ReadTestInput(string fileName) { var lines = File.ReadAllLines(fileName); var input = new TestInput(); input.NumCases = int.Parse(lines[0]); foreach (var lin in lines.Skip(1)) { input.Nums.Add(int.Parse(lin)); } return input; } public static Dictionary<int, int> CreateSquareLookup(int maxNum) { var dict = new Dictionary<int, int>(); int square; foreach (var num in Enumerable.Range(0, maxNum)) { square = num * num; dict[num] = square; } return dict; } } } Thanks for taking a look. UPDATE Changing the combos function slightly will result in a pretty big performance boost (from 8 min to 3:45): /// Old and Busted... let rec combosOld range = seq { let rangeCache = Seq.cache range let count = ref 0 for inner in rangeCache do for outer in Seq.skip !count rangeCache do yield (inner, outer) count := !count + 1 } /// The New Hotness... let rec combos maxNum = seq { for i in 0..maxNum do for j in i..maxNum do yield i,j }
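As a point of comparison (this is a separate sketch, not a rewrite of either program above), the count for a single X can also be computed directly by walking a from 0 up to sqrt(X/2) and testing whether X - a*a is a perfect square, which avoids generating the full combination sequence. A minimal C# illustration using the example values from the problem statement:

using System;

class DoubleSquares
{
    // Counts pairs (a, b) with a <= b and a*a + b*b == x.
    static int Count(long x)
    {
        int count = 0;
        for (long a = 0; a * a * 2 <= x; a++)      // a <= b implies a*a <= x/2
        {
            long remainder = x - a * a;
            long b = (long)Math.Sqrt(remainder);
            // Math.Sqrt can be off by one on large values; nudge b into place.
            while (b * b > remainder) b--;
            while ((b + 1) * (b + 1) <= remainder) b++;
            if (b * b == remainder) count++;
        }
        return count;
    }

    static void Main()
    {
        foreach (long x in new long[] { 10, 25, 3, 0, 1 })
        {
            Console.WriteLine("{0} => {1}", x, Count(x));   // 1, 2, 0, 1, 1
        }
    }
}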

    Read the article

  • Having trouble returning a value from a method call when sending an array, and the program errors out on the sort when run

    - by programmerNOOB
    I am getting the following output when this program is run: Please enter the Social Security Number for taxpayer 0: 111111111 Please enter the gross income for taxpayer 0: 20000 Please enter the Social Security Number for taxpayer 1: 555555555 Please enter the gross income for taxpayer 1: 50000 Please enter the Social Security Number for taxpayer 2: 333333333 Please enter the gross income for taxpayer 2: 5464166 Please enter the Social Security Number for taxpayer 3: 222222222 Please enter the gross income for taxpayer 3: 645641 Please enter the Social Security Number for taxpayer 4: 444444444 Please enter the gross income for taxpayer 4: 29000 Taxpayer # 1 SSN: 111111111, Income is $20,000.00, Tax is $0.00 Taxpayer # 2 SSN: 555555555, Income is $50,000.00, Tax is $0.00 Taxpayer # 3 SSN: 333333333, Income is $5,464,166.00, Tax is $0.00 Taxpayer # 4 SSN: 222222222, Income is $645,641.00, Tax is $0.00 Taxpayer # 5 SSN: 444444444, Income is $29,000.00, Tax is $0.00 Unhandled Exception: System.InvalidOperationException: Failed to compare two elements in the array. --- System.ArgumentException: At least one object must implement IComparable. at System.Collections.Comparer.Compare(Object a, Object b) at System.Collections.Generic.ObjectComparer`1.Compare(T x, T y) at System.Collections.Generic.ArraySortHelper`1.SwapIfGreaterWithItems(T[] keys, IComparer`1 comparer, Int32 a, Int32 b) at System.Collections.Generic.ArraySortHelper`1.QuickSort(T[] keys, Int32 left, Int32 right, IComparer`1 comparer) at System.Collections.Generic.ArraySortHelper`1.Sort(T[] keys, Int32 index, Int32 length, IComparer`1 comparer) --- End of inner exception stack trace --- at System.Collections.Generic.ArraySortHelper`1.Sort(T[] keys, Int32 index, Int32 length, IComparer`1 comparer) at System.Array.Sort[T](T[] array, Int32 index, Int32 length, IComparer`1 comparer) at System.Array.Sort[T](T[] array) at Assignment5.Taxpayer.Main(String[] args) in Program.cs:line 150 Notice the 0s at the end of the line that should be the tax amount??? Here is the code: using System; using System.Collections.Generic; using System.Linq; using System.Text; namespace taxes { class Rates { // Create a class named rates that has the following data members: private int incLimit; private double lowTaxRate; private double highTaxRate; // use read-only accessor public int IncomeLimit { get { return incLimit; } } public double LowTaxRate { get { return lowTaxRate; } } public double HighTaxRate { get { return highTaxRate; } } //A class constructor that assigns default values public Rates() { int limit = 30000; double lowRate = .15; double highRate = .28; incLimit = limit; lowTaxRate = lowRate; highTaxRate = highRate; } //A class constructor that takes three parameters to assign input values for limit, low rate and high rate. public Rates(int limit, double lowRate, double highRate) { } // A CalculateTax method that takes an income parameter and computes the tax as follows: public int CalculateTax(int income) { int limit = 0; double lowRate = 0; double highRate = 0; int taxOwed = 0; // If income is less than the limit then return the tax as income times low rate. if (income < limit) taxOwed = Convert.ToInt32(income * lowRate); // If income is greater than or equal to the limit then return the tax as income times high rate. if (income >= limit) taxOwed = Convert.ToInt32(income * highRate); return taxOwed; } } //end class Rates // Create a class named Taxpayer that has the following data members: public class Taxpayer { //Use get and set accessors. 
        string SSN { get; set; }
        int grossIncome { get; set; }

        // Use read-only accessor.
        public int taxOwed
        {
            get { return taxOwed; }
        }

        // The Taxpayer class should be set up so that its objects are comparable to each other based on tax owed.
        class taxpayer : IComparable
        {
            public int taxOwed { get; set; }
            public int income { get; set; }

            int IComparable.CompareTo(Object o)
            {
                int returnVal;
                taxpayer temp = (taxpayer)o;
                if (this.taxOwed > temp.taxOwed)
                    returnVal = 1;
                else if (this.taxOwed < temp.taxOwed)
                    returnVal = -1;
                else
                    returnVal = 0;
                return returnVal;
            } // End IComparable.CompareTo
        } // end taxpayer IComparable class

        // **The tax should be calculated whenever the income is set.
        // The Taxpayer class should have a GetRates class method that does the following.
        public static void GetRates()
        {
            // Local method data members for income limit, low rate and high rate.
            int incLimit = 0;
            double lowRate;
            double highRate;
            string userInput;

            // Prompt the user to enter a selection for either default settings or user input of settings.
            Console.Write("Would you like the default values (D) or would you like to enter the values (E)?: ");

            /* If the user selects the default values, instantiate a rates object using the default constructor
             * and set the Taxpayer class data member for tax equal to the value returned from calling the rates object CalculateTax method. */
            userInput = Convert.ToString(Console.ReadLine());
            if (userInput == "D" || userInput == "d")
            {
                Rates rates = new Rates();
                rates.CalculateTax(incLimit);
            } // end if

            /* If the user selects to enter the rates data then prompt the user to enter values for income limit, low rate and high rate,
             * instantiate a rates object using the three-argument constructor passing those three entries as the constructor arguments and
             * set the Taxpayer class data member for tax equal to the value returned from calling the rates object CalculateTax method. */
            if (userInput == "E" || userInput == "e")
            {
                Console.Write("Please enter the income limit: ");
                incLimit = Convert.ToInt32(Console.ReadLine());
                Console.Write("Please enter the low rate: ");
                lowRate = Convert.ToDouble(Console.ReadLine());
                Console.Write("Please enter the high rate: ");
                highRate = Convert.ToDouble(Console.ReadLine());
                Rates rates = new Rates(incLimit, lowRate, highRate);
                rates.CalculateTax(incLimit);
            }
        }

        static void Main(string[] args)
        {
            Taxpayer[] taxArray = new Taxpayer[5];
            Rates taxRates = new Rates();

            // Implement a for-loop that will prompt the user to enter the Social Security Number and gross income.
            for (int x = 0; x < taxArray.Length; ++x)
            {
                taxArray[x] = new Taxpayer();
                Console.Write("Please enter the Social Security Number for taxpayer {0}: ", x + 1);
                taxArray[x].SSN = Console.ReadLine();
                Console.Write("Please enter the gross income for taxpayer {0}: ", x + 1);
                taxArray[x].grossIncome = Convert.ToInt32(Console.ReadLine());
            }

            Taxpayer.GetRates();

            // Implement a for-loop that will display each object as formatted taxpayer SSN, income and calculated tax.
            for (int i = 0; i < taxArray.Length; ++i)
            {
                Console.WriteLine("Taxpayer # {0} SSN: {1}, Income is {2:c}, Tax is {3:c}",
                    i + 1, taxArray[i].SSN, taxArray[i].grossIncome, taxRates.CalculateTax(taxArray[i].grossIncome));
            } // end for

            // Implement a for-loop that will sort the five objects in order by the amount of tax owed
            Array.Sort(taxArray);
            Console.WriteLine("Sorted by tax owed");

            for (int i = 0; i < taxArray.Length; ++i)
            {
                Console.WriteLine("Taxpayer # {0} SSN: {1}, Income is {2:c}, Tax is {3:c}",
                    i + 1, taxArray[i].SSN, taxArray[i].grossIncome, taxRates.CalculateTax(taxArray[i].grossIncome));
            }
        } // end Main
    } // end Taxpayer class
} // end namespace

Any clues as to why the dollar amount is coming up as 0 and why the sort is not working?
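One likely explanation, based on the code as posted: CalculateTax declares fresh locals (limit, lowRate, highRate) that are all zero and never reads the Rates fields, and the three-argument Rates constructor has an empty body, so every call returns income times zero. The sort then throws because Array.Sort(taxArray) compares Taxpayer objects, and the outer Taxpayer class never implements IComparable; only the nested, never-instantiated taxpayer class does. Below is a minimal sketch of the kind of change that would address both symptoms. It keeps the question's member names; the Demo class and the way taxOwed gets filled in are assumptions added for illustration, not part of the original assignment.

using System;

// Sketch only: rates are stored by the constructors and used by CalculateTax.
class Rates
{
    private int incLimit;
    private double lowTaxRate;
    private double highTaxRate;

    // Default constructor delegates to the three-argument one.
    public Rates() : this(30000, .15, .28) { }

    // The three-argument constructor must actually store its parameters.
    public Rates(int limit, double lowRate, double highRate)
    {
        incLimit = limit;
        lowTaxRate = lowRate;
        highTaxRate = highRate;
    }

    // Use the fields set above, not zero-valued locals, so the tax is non-zero.
    public int CalculateTax(int income)
    {
        return income < incLimit
            ? Convert.ToInt32(income * lowTaxRate)
            : Convert.ToInt32(income * highTaxRate);
    }
}

// Array.Sort needs the element type itself to be comparable, so Taxpayer
// (not a nested helper class) implements IComparable<Taxpayer> by tax owed.
class Taxpayer : IComparable<Taxpayer>
{
    public string SSN { get; set; }
    public int grossIncome { get; set; }
    public int taxOwed { get; set; }   // assumed: set once the income is known

    public int CompareTo(Taxpayer other)
    {
        return taxOwed.CompareTo(other.taxOwed);
    }
}

// Hypothetical driver showing that the sort no longer throws.
class Demo
{
    static void Main()
    {
        var rates = new Rates();
        var people = new[]
        {
            new Taxpayer { SSN = "111111111", grossIncome = 50000 },
            new Taxpayer { SSN = "555555555", grossIncome = 20000 },
        };

        foreach (var p in people)
            p.taxOwed = rates.CalculateTax(p.grossIncome);

        Array.Sort(people);   // works because Taxpayer is IComparable<Taxpayer>

        foreach (var p in people)
            Console.WriteLine("SSN: {0}, Income is {1:c}, Tax is {2:c}", p.SSN, p.grossIncome, p.taxOwed);
    }
}

With Taxpayer itself comparable, the sorted output loop could also print taxArray[i].taxOwed directly instead of recalculating the tax on every line.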

    Read the article
