Search Results

Search found 3403 results on 137 pages for 'monthly statements'.


  • Which programming language to choose? (for a specific problem/domain, details inside)

    - by Bijan
    I am building a trading portfolio management system that is responsible for production, optimization, and simulation of non-high-frequency trading portfolios (dealing with 1-minute or 3-minute bars of data, not tick data). I plan on using Amazon Web Services to take on the entire load of the application. I am considering four languages: a) Java, b) C++, c) C#, d) Python.
    Here is the extreme end of the project's scope. It may never get this far, but it is within the requirements:
    - Weekly simulation of 10,000,000 trading systems. (Each trading system is expected to have its own data-mining methods, including feature-selection algorithms that are extremely computationally expensive. Imagine 500-5000 features using wrappers. These are not run often by any means, but they are still a consideration.)
    - Real-time production of a portfolio with 100,000 trading strategies.
    - Taking in 1-minute or 3-minute data from every stock/futures market around the globe (approx. 100,000).
    - Portfolio optimization of portfolios with up to 100,000 strategies (a rather intensive algorithm).
    Speed is a concern, but I believe Java can handle the load; I just want to make sure Java CAN handle the above requirements comfortably. I don't want to do the project in C++, but I will if it's required. C# is on the list because I thought it was a good alternative to Java, even though I don't like Windows at all and would prefer Java if all things are equal. Python is on the list because I've read that PyPy and Psyco claim Python can be optimized with JIT compiling to run at near-C speeds. That's pretty much the only reason it is here, besides the fact that Python is a great language and would probably be the most enjoyable language to code in, which is not a factor for this project, but a perk.
    To sum up: real-time production; weekly simulations of a large number of systems; weekly/monthly optimizations of portfolios; large numbers of connections to collect data from. There is no dealing with millisecond- or even second-based trades. The only consideration is whether Java can deal with this kind of load when spread over the necessary number of EC2 servers. Thank you for your wisdom.

    Read the article

  • C++ problem with string stream istringstream

    - by user69514
    I am reading a file in the following format:

        1001 16000 300 12.50
        2002 24000 360 10.50
        3003 30000 300 9.50

    where the items are: loan id, principal, months, interest rate. I'm not sure what I am doing wrong with my input string stream, but I am not reading the values correctly: only the loan id comes out right, and everything else is zero. Sorry, this is homework, but I just wanted to know if you could help me identify my error.

        if( inputstream.is_open() ){
            /** print the results **/
            cout << fixed << showpoint << setprecision(2);
            cout << "ID " << "\tPrincipal" << "\tDuration" << "\tInterest" << "\tPayment" << "\tTotal Payment" << endl;
            cout << "---------------------------------------------------------------------------------------------" << endl;
            /** assign line read while we haven't reached end of file **/
            string line;
            istringstream instream;
            while( inputstream >> line ){
                instream.clear();
                instream.str(line);
                /** assign values **/
                instream >> loanid >> principal >> duration >> interest;
                /** compute monthly payment **/
                double ratem = interest / 1200.0;
                double expm = (1.0 + ratem);
                payment = (ratem * pow(expm, duration) * principal) / (pow(expm, duration) - 1.0);
                /** compute total payment **/
                totalPayment = payment * duration;
                /** print out calculations **/
                cout << loanid << "\t$" << principal << "\t" << duration << "mo" << "\t" << interest << "\t$" << payment << "\t$" << totalPayment << endl;
            }
        }
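    A likely explanation, offered as a hedged guess rather than a verified answer: "inputstream >> line" extracts a single whitespace-delimited token (just "1001"), not a whole record, so each istringstream has nothing left after the loan id and the remaining extractions fail. A minimal, self-contained sketch of the usual fix is to read whole lines with std::getline; the file name and variable types below are assumptions for illustration.

        #include <cmath>
        #include <fstream>
        #include <iostream>
        #include <sstream>
        #include <string>

        int main() {
            std::ifstream inputstream("loans.txt");      // assumed file name
            std::string line;
            while (std::getline(inputstream, line)) {    // one full record per line
                std::istringstream instream(line);
                int loanid = 0, duration = 0;
                double principal = 0.0, interest = 0.0;
                if (!(instream >> loanid >> principal >> duration >> interest))
                    continue;                            // skip blank or malformed lines
                double ratem = interest / 1200.0;
                double expm = 1.0 + ratem;
                double payment = (ratem * std::pow(expm, duration) * principal)
                                 / (std::pow(expm, duration) - 1.0);
                double totalPayment = payment * duration;
                std::cout << loanid << '\t' << payment << '\t' << totalPayment << '\n';
            }
            return 0;
        }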

    Read the article

  • Line graph disappears after new line is added

    - by tonystinge
    I am having trouble uploading a large .csv file to my Highcharts graph. I've been able to graph an 85KB file of up to 1486 lines by 9 columns. Here is an example:

        TimeStamp Temp_1_01 Temp_1_02 Temp_2_01 Temp_2_02 Temp_3_01 Temp_3_02 Temp_4_01 Temp_4_02
        5/15/2014, 3:25 408 487 63 84 67 91 63 78
        5/15/2014 3:30 408 487 63 84 67 91 63 78
        5/15/2014 3:35 407 489 63 84 67 91 63 78
        5/15/2014 3:40 408 488 63 84 67 91 63 78
        5/15/2014 3:44 408 488 63 84 67 91 63 78
        ...
        5/22/2014 9:40 483 421 0 93 76 95 72 89

    When I add a new line the line graph disappears. Any suggestions? Here is the JavaScript:

        $.get('Dropbox/geo/sites/GC_Room/loveland.csv', function(data) {
            // Split the lines
            var lines = data.split('\n');
            var i = 0;
            var csvData = [];
            // Iterate over the lines and add categories or series
            $.each(lines, function(lineNo, line) {
                csvData[i] = line.split(',');
                i = i + 1;
            });
            var columns = csvData[0];
            var categories = [], series = [];
            for (var colIndex = 0, len = columns.length; colIndex < len; colIndex++) {
                // first row data as series's name
                var seriesItem = { data: [], name: csvData[0][colIndex] };
                for (var rowIndex = 1, rowCnt = csvData.length; rowIndex < rowCnt; rowIndex++) {
                    // first column data as categories
                    if (colIndex == 0) {
                        categories.push(csvData[rowIndex][0]);
                    } else if (parseFloat(csvData[rowIndex][colIndex])) { // <-- here
                        seriesItem.data.push(parseFloat(csvData[rowIndex][colIndex]));
                    }
                }
                // except first column
                if (colIndex > 0) series.push(seriesItem);
            }
            // Create the chart
            var chart = new Highcharts.Chart({
                chart: { alignTick: false, renderTo: 'LANE_METALS', type: 'line' },
                title: {
                    text: 'Monthly Average Temperature',
                    x: -20 // center
                },
                subtitle: { text: 'Source: LANE METALS', x: -20 },
                xAxis: {
                    categories: categories,
                    labels: { step: 200, text: 'Time' },
                    tickWidth: 0
                },
                yAxis: { title: { text: 'Temperature (\xB0C)' }, min: 0 },
                tooltip: {
                    formatter: function() {
                        return '<b>' + this.series.name + '</b><br/>' + this.x + ': ' + this.y + '\xB0C';
                    }
                },
                legend: {
                    layout: 'vertical',
                    //backgroundColor: '#FFFFFF',
                    //floating: true,
                    align: 'left',
                    //x: 100,
                    verticalAlign: 'top',
                    //y: 70,
                    borderWidth: 0
                },
                plotOptions: {
                    area: {
                        turboThreshold: 0,
                        stacking: 'normal',
                        lineColor: '#666666',
                        lineWidth: 1,
                        marker: { lineWidth: 1, lineColor: '#666666' }
                    }
                },
                series: series
            });
        });
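    Not a confirmed diagnosis, but two things in the parsing loop are worth checking. A trailing newline in the appended file produces an empty last row, and parseFloat('0') is 0, which is falsy, so the "else if (parseFloat(...))" test silently drops zero readings (the new row contains a 0) and leaves a series shorter than the category axis. A hedged sketch of the same loop, keeping the original variable names, that skips blank lines and keeps zeros:

        var lines = data.split('\n');
        var csvData = [];
        $.each(lines, function(lineNo, line) {
            if (line.trim() === '') { return; }          // ignore blank/trailing lines
            csvData.push(line.split(','));
        });
        var categories = [], series = [];
        for (var colIndex = 1; colIndex < csvData[0].length; colIndex++) {
            var seriesItem = { name: csvData[0][colIndex], data: [] };
            for (var rowIndex = 1; rowIndex < csvData.length; rowIndex++) {
                if (colIndex === 1) { categories.push(csvData[rowIndex][0]); }
                var value = parseFloat(csvData[rowIndex][colIndex]);
                seriesItem.data.push(isNaN(value) ? null : value);   // keep 0, map bad cells to null
            }
            series.push(seriesItem);
        }
        // hand categories and series to Highcharts exactly as in the original chart config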

    Read the article

  • PHP send batch email [closed]

    - by qalbiol
    Possible Duplicate: Sending mass email using PHP
    I have a PHP script that sends an individual email to all users in my DB, such as a monthly/weekly newsletter. The code I am using goes as follows:

        $subject = $_POST['subject'];
        $message = $_POST['message'];

        // Get all the mailing list subscribers.
        $query = $db->prepare("SELECT * FROM maildb");
        $query->execute();

        // Loop through all subscribers and send an individual email.
        foreach ($query as $row) {
            // Setting maximum time limit to infinite.
            set_time_limit(0);

            $newMessage = '<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
            <html xmlns="http://www.w3.org/1999/xhtml">
            <head>
            </head>
            <body>';

            // Search for the [unsubscribe] tag and replace it with a URL for the user to unsubscribe
            $newMessage .= str_replace("[unsubscribe]", "<a href='".BASE_URL."unsubscribe/".$row['hash']."/".$row['email']."'>unsubscribe</a>", $message);
            $newMessage .= '</body></html>';

            $to = $row['email'];

            // Establish content headers
            $headers = "From: [email protected]"."\n";
            $headers .= "Reply-To: [email protected]"."\n";
            $headers .= "X-Mailer: PHP v.". phpversion()."\n";
            $headers .= "MIME-Version: 1.0"."\n";
            $headers .= "Content-Type: text/html; charset=iso-8859-1"."\n";
            $headers .= "Content-Transfer-Encoding: 8bit;";

            mail($to, $subject, $newMessage, $headers); // Send email to each individual user
        }

    This code works perfectly with a REALLY small database... I recently populated my test DB with 200k+ users, and obviously this script fails, runs out of memory, and dies... I know this is a bad way to send so many emails, that's why I'd like to ask you for much more efficient ways to do this! Thank you very much!
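    One common direction, offered as a hedged sketch rather than a drop-in fix (the batch size, column list, and PDO usage are assumptions): page through the table in fixed-size batches so the full result set is never held in memory, and keep the per-recipient message building inside the inner loop. For 200k+ recipients a real job queue or a bulk-mail service is usually the better long-term answer.

        $batchSize = 500;
        $offset = 0;
        do {
            // Fetch one page of subscribers at a time instead of the whole table.
            $query = $db->prepare("SELECT email, hash FROM maildb ORDER BY email LIMIT :limit OFFSET :offset");
            $query->bindValue(':limit', $batchSize, PDO::PARAM_INT);
            $query->bindValue(':offset', $offset, PDO::PARAM_INT);
            $query->execute();
            $rows = $query->fetchAll(PDO::FETCH_ASSOC);

            foreach ($rows as $row) {
                // Build $newMessage and $headers exactly as in the original loop,
                // then send (or, better, push the message onto a queue).
                // mail($row['email'], $subject, $newMessage, $headers);
            }

            $offset += $batchSize;
            sleep(1); // brief pause between batches to be kind to the mail server
        } while (count($rows) === $batchSize);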

    Read the article

  • Which programming idiom to choose for this open source library?

    - by Walkman
    I have an interesting question about which programming idiom is easier for beginner developers writing concrete file-parsing classes. I'm developing an open source library whose main functionality is to parse plain text files and get structured information from them. All of the files contain the same kind of information, but they can be in different formats, such as XML or plain text (each structured differently). There is a common set of information pieces which is the same in all of them (e.g. player names, table names, some id numbers). Some formats are very similar to each other, so it's possible to define a common base class for them to facilitate concrete format parser implementations. So I can clearly define base classes like SplittablePlainTextFormat, XMLFormat, SeparateSummaryFormat, etc. Each of them hints at the kind of structure it aims to parse. All of the concrete classes should expose the same information pieces, no matter what. To be useful at all, this library needs to define at least 30-40 of these parsers. A couple of them are more important than others (obviously the more popular formats).
    Now my question is: which programming idiom best facilitates the development of these concrete classes? Let me explain. I think imperative programming is easy to follow even for beginners, because the flow is fixed and the statements just come one after another. Right now, I have this:

        class SplittableBaseFormat:
            def parse(self):
                "Parses the body of the hand history, but first parse header if not yet parsed."
                if not self.header_parsed:
                    self.parse_header()
                self._parse_table()
                self._parse_players()
                self._parse_button()
                self._parse_hero()
                self._parse_preflop()
                self._parse_street('flop')
                self._parse_street('turn')
                self._parse_street('river')
                self._parse_showdown()
                self._parse_pot()
                self._parse_board()
                self._parse_winners()
                self._parse_extra()
                self.parsed = True

    So the concrete parser needs to define these methods, in order, in any way it wants. Easy to follow, but it takes longer to implement each individual concrete parser. So what about declarative? In this case base classes (like SplittableFormat and XMLFormat) would do the heavy lifting based on regex and line/node number declarations in the concrete class, and concrete classes would have no code at all, just line numbers and regexes, maybe other kinds of rules. Like this:

        class SplittableFormat:
            def parse_table(self):
                "Parses TABLE_REGEX and gets information"
                # set attributes here
            def parse_players(self):
                "Parses PLAYER_REGEX and gets information"
                # set attributes here

        class SpecificFormat1(SplittableFormat):
            TABLE_REGEX = re.compile('^(?P<table_name>.*) other info \d* etc')
            TABLE_LINE = 1
            PLAYER_REGEX = re.compile('^Player \d: (?P<player_name>.*) has (.*) in chips.')
            PLAYER_LINE = 16

        class SpecificFormat2(SplittableFormat):
            TABLE_REGEX = re.compile(r'^Tournament #(\d*) (?P<table_name>.*) other info2 \d* etc')
            TABLE_LINE = 2
            PLAYER_REGEX = re.compile(r'^Seat \d: (?P<player_name>.*) has a stack of (\d*)')
            PLAYER_LINE = 14

    So if I want to make it possible for non-developers to write these classes, the declarative way seems to be the way to go; however, I'm almost certain I can't eliminate the regex declarations, which clearly need (senior :D) programmers, so should I care about this at all? Do you think it matters which one I choose, or does it not matter at all? Maybe if somebody wants to work on this project, they will, and if not, it won't matter which idiom I chose. Can I "convert" non-programmers to help develop these? What are your observations?
    Other considerations: Imperative will allow any kind of work; there is a simple flow, which implementers can follow, but inside it they can do whatever they want. It would be harder to force a common interface with imperative because of these arbitrary implementations. Declarative will be much more rigid, which is a bad thing because formats might change over time without notice. Declarative will be harder for me to develop and will take longer; imperative is already ready to release. I hope a nice discussion will happen in this thread about programming idioms: which to use when, which is better for open source projects with different scenarios, and which is better for a wide range of developer skills.
    TL;DR:
    - Parsing different file formats (plain text, XML)
    - They contain the same kind of information
    - Target audience: non-developers, beginners
    - Regex probably cannot be avoided
    - 30-40 concrete parser classes needed
    - Facilitate coding these concrete classes
    - Which idiom is better?

    Read the article

  • Advanced Data Source Engine coming to Telerik Reporting Q1 2010

    This is the final blog post from the pre-release series. In it we are going to share with you some of the updates coming to our reporting solution in Q1 2010. A new Declarative Data Source Engine will be added to Telerik Reporting, that will allow full control over data management, and deliver significant gains in rendering performance and memory consumption. Some of the engines new features will be: Data source parameters - those parameters will be used to limit data retrieved from the data source to just the data needed for the report. Data source parameters are processed on the data source side, however only queried data is fetched to the reporting engine, rather than the full data source. This leads to lower memory consumption, because data operations are performed on queried data only, rather than on all data. As a result, only the queried data needs to be stored in the memory vs. the whole dataset, which was the case with the old approach Support for stored procedures - they will assist in achieving a consistent implementation of logic across applications, and are especially practical for performing repetitive tasks. A stored procedure stores the SQL statements and logic, which can then be executed in different reports and/or applications. Stored Procedures will not only save development time, but they will also improve performance, because each stored procedure is compiled on the data base server once, and then is reutilized. In Telerik Reporting, the stored procedure will also be parameterized, where elements of the SQL statement will be bound to parameters. These parameterized SQL queries will be handled through the data source parameters, and are evaluated at run time. Using parameterized SQL queries will improve the performance and decrease the memory footprint of your application, because they will be applied directly on the database server and only the necessary data will be downloaded on the middle tier or client machine; Calculated fields through expressions - with the help of the new reporting engine you will be able to use field values in formulas to come up with a calculated field. A calculated field is a user defined field that is computed "on the fly" and does not exist in the data source, but can perform calculations using the data of the data source object it belongs to. Calculated fields are very handy for adding frequently used formulas to your reports; Improved performance and optimized in-memory OLAP engine - the new data source will come with several improvements in how aggregates are calculated, and memory is managed. As a result, you may experience between 30% (for simpler reports) and 400% (for calculation-intensive reports) in rendering performance, and about 50% decrease in memory consumption. Full design time support through wizards - Declarative data sources are a great advance and will save developers countless hours of coding. In Q1 2010, and true to Telerik Reportings essence, using the new data source engine and its features requires little to no coding, because we have extended most of the wizards to support the new functionality. The newly extended wizards are available in VS2005/VS2008/VS2010 design-time. More features will be revealed on the product's what's new page when the new version is officially released in a few days. Also make sure you attend the free webinar on Thursday, March 11th that will be dedicated to the updates in Telerik Reporting Q1 2010. 
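    As a generic illustration of the idea (not Telerik-specific API; the procedure, table and column names below are assumptions): a parameterized stored procedure lets the database return only the rows a report actually needs, so filtering happens on the server rather than in the reporting engine.

        -- Generic sketch: the report passes @Year/@Month as data source parameters,
        -- and only the matching rows travel to the reporting engine.
        CREATE PROCEDURE dbo.GetMonthlySales
            @Year  INT,
            @Month INT
        AS
        BEGIN
            SELECT CustomerID, OrderDate, TotalDue
            FROM   Sales.SalesOrderHeader
            WHERE  YEAR(OrderDate)  = @Year
              AND  MONTH(OrderDate) = @Month;
        END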

    Read the article

  • Surface Review from Canadian Guy Who Didn’t Go To Build

    - by D'Arcy Lussier
    I didn’t go to Build last week, opted to stay home and go trick-or-treating with my daughters instead. I had many friends that did go however, and I was able to catch up with James Chambers last night to hear about the conference and play with his Surface RT and Nokia 920 WP8 devices. I’ve been using Windows 8 for a while now, so I’m not going to comment on OS features – lots of posts out there on that already. Let me instead comment on the hardware itself. Size and Weight The size of the tablet was awesome. The Windows 8 tablet I’m using to reference this against is the one from Build 2011 (Samsung model) we received as well as my iPad. The Surface RT was taller and slightly heavier than the iPad, but smaller and lighter than the Samsung Win 8 tablet. I still don’t prefer the default wide-screen format, but the Surface RT is much more usable even when holding it by the long edge than the Samsung. Build Quality No issues with the build quality, it seemed very solid. But…y’know, people have been going on about how the Surface RT materials are so much better than the plastic feeling models Samsung and others put out. I didn’t really notice *that* much difference in that regard with the Surface RT. Interesting feature I didn’t expect – the Windows button on the device is touch-sensitive, not a mechanical one. I didn’t try video or anything, so I can’t comment on the media experience. The kickstand is a great feature, and the way the Surface RT connects to the combo case/keyboard touchcover is very slick while being incredibly simple. What About That Touch Cover Keyboard? So first, kudos to Microsoft on the touch cover! This thing was insanely responsive (including the trackpad) and really delivered on the thinness I was expecting. With that said, and remember this is with very limited use, I would probably go with the Type Cover instead of the Touch Cover. The difference is buttons. The Touch Cover doesn’t actually have “buttons” on the keyboard – hence why its a “touch” cover. You tap on a key to type it. James tells me after a while you get used to it and you can type very fast. For me, I just prefer the tactile feeling of a button being pressed/depressed. But still – typing on the touch case worked very well. Would I Buy One? So after playing with it, did I cry out in envy and rage that I wasn’t able to get one of these machines? Did I curse my decision to collect Halloween candy with my kids instead of being at Build getting hardware? Well – no. Even with the keyboard, the Surface RT is not a business laptop replacement device. While Office does come included, you can’t install any other applications outside of Windows Store Apps. This might be limiting depending on what other applications you need to have available on your computer. Surface RT is a great personal computing device, as long as you’re not already invested in a competing ecosystem. I’ve heard people make statements that they’re going to replace all the iPads in their homes with Surface tablets. In my home, that’s not feasible – my wife and daughters have amassed quite a collection of games via iTunes. We also buy all our music via iTunes as well, so even with the XBox streaming music service now available we’re still tied quite tightly to iTunes. So who is the Surface RT for? In my mind, if you’re looking for a solid, compact device that provides basic business functionality (read: email) or if you have someone that needs a very simple to use computer for email, web browsing, etc., then Surface RT is a great option. 
For me, I’m waiting on the Samsung Ativ Smart PC Pro and am curious to see what changes the Surface Pro will come with.

    Read the article

  • “Query cost (relative to the batch)” <> Query cost relative to batch

    - by Dave Ballantyne
    OK, so that is quite a contradictory title, but unfortunately it is true that a common misconception is that the query with the highest percentage relative to the batch is the worst performing. Simply put, it is a lie, or more accurately, we don't understand what these figures mean. Consider the two simple queries below:

        SELECT * FROM Person.BusinessEntity
        JOIN Person.BusinessEntityAddress
          ON Person.BusinessEntity.BusinessEntityID = Person.BusinessEntityAddress.BusinessEntityID
        go
        SELECT * FROM Sales.SalesOrderDetail
        JOIN Sales.SalesOrderHeader
          ON Sales.SalesOrderDetail.SalesOrderID = Sales.SalesOrderHeader.SalesOrderID

    After executing these and looking at the plans, I see a 13% / 87% split. But 13% / 87% of WHAT? CPU? Duration? Reads? Writes? Or some magical weighted algorithm? In a Profiler trace of the two we can find the metrics we are interested in. CPU and duration are well out, but what about reads (210 and 1935)? To save you doing the maths, though you are more than welcome to, that's a 90.2% / 9.8% split. Close, but no cigar.
    Let's try a different tack. Looking at the execution plan, the "Estimated Subtree Cost" of query 1 is 0.29449 and of query 2 is 1.96596. Again, to save you the maths, that works out to 13.03% and 86.97%; round those and that's the figures we are after. But what is the worrying word there? "Estimated". So these are not "actual" execution costs. But what's the problem in comparing the estimated costs to derive a meaning of "most costly"? Well, in the case of simple queries such as the above, probably not a lot. In more complicated queries, a fair bit. By modifying the second query to also show the total number of lines on each order

        SELECT *, COUNT(*) OVER (PARTITION BY Sales.SalesOrderDetail.SalesOrderID)
        FROM Sales.SalesOrderDetail
        JOIN Sales.SalesOrderHeader
          ON Sales.SalesOrderDetail.SalesOrderID = Sales.SalesOrderHeader.SalesOrderID

    the split in percentages is now 6% / 94%, and the Profiler metrics show even more of a discrepancy. Estimates can be out with actuals for a whole host of reasons. Scalar UDFs are a particular bugbear of mine, and in fact the cost of a UDF call is entirely hidden inside the execution plan. It always estimates to 0 (well, a very small number). Take for instance the following UDF:

        Create Function dbo.udfSumSalesForCustomer(@CustomerId integer)
        returns money
        as
        begin
            Declare @Sum money
            Select @Sum = SUM(SalesOrderHeader.TotalDue)
            from Sales.SalesOrderHeader
            where CustomerID = @CustomerId
            return @Sum
        end

    If we have two statements, one that fires the UDF and another that doesn't:

        Select CustomerID from Sales.Customer order by CustomerID
        go
        Select CustomerID, dbo.udfSumSalesForCustomer(Customer.CustomerID)
        from Sales.Customer order by CustomerID

    The cost relative to batch is a 50/50 split, but there has to be an actual cost to firing the UDF. Indeed, Profiler shows us it is nowhere even remotely near 50/50! Moving forward to the window framing functionality in SQL Server 2012, the optimizer sees ROWS and RANGE (see here for their functional differences) as the same 'cost' too:

        SELECT SalesOrderDetailID, SalesOrderId,
               SUM(LineTotal) OVER(PARTITION BY salesorderid ORDER BY Salesorderdetailid RANGE unbounded preceding)
        from Sales.SalesOrderdetail
        go
        SELECT SalesOrderDetailID, SalesOrderId,
               SUM(LineTotal) OVER(PARTITION BY salesorderid ORDER BY Salesorderdetailid Rows unbounded preceding)
        from Sales.SalesOrderdetail

    By now it won't be a great surprise that the Profiler trace reads a *tiny* bit differently.
    So, the moral of the story: "percentage relative to batch" can give a rough 'finger in the air' measurement, but don't rely on it as fact.
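    A small, hedged addition (not from the original post): if you want the actual per-statement numbers without running a full Profiler trace, SET STATISTICS IO and SET STATISTICS TIME report the real reads and CPU for each statement in the batch, which is exactly the comparison being made above.

        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;

        -- Same two queries as above; the Messages tab now shows actual
        -- logical reads and CPU time for each statement.
        SELECT * FROM Person.BusinessEntity
        JOIN Person.BusinessEntityAddress
          ON Person.BusinessEntity.BusinessEntityID = Person.BusinessEntityAddress.BusinessEntityID;

        SELECT * FROM Sales.SalesOrderDetail
        JOIN Sales.SalesOrderHeader
          ON Sales.SalesOrderDetail.SalesOrderID = Sales.SalesOrderHeader.SalesOrderID;

        SET STATISTICS IO OFF;
        SET STATISTICS TIME OFF;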

    Read the article

  • Tool to convert blogger.com content to dasBlog

    - by Daniel Moth
    Due to blogger.com dropping FTP support, I've had to move my blog. If you are in a similar situation, this post will help you by showing you the necessary steps to take. Goals No loss on blog posts, comments AND all existing permalinks continue to work (redirect to the correct place). Steps Download the XML files corresponding to your blogger.com content and store them in a folder. Install and configure dasBlog on your local machine. Configure your web.config file (will need updating once you run step 4). Use the tool I describe further down to generate the content and place it at the right place. Test your site locally. Once you are happy, repeat step 2 on your hosting provider of choice. Remember to copy up your dasBlog theme folder if you created one. Copy up the local web.config file and the XML dasBlog content files generated by the tool of step 4. Test your site on the server. Once you are happy, go live (following instructions from your hoster). In my case, I gave the nameservers from my new hoster to my existing domain registrar and they made the switch. Tool (code) At step 4 above I referred to a tool. That is an overstatement, it is simply one 450-line C#code file that you can download here: BloggerToDasBlog.cs. I used this from a .NET 2.0 console app (and I run it under the Visual Studio debugger, i.e. F5) like this: Program.cs. The console app referenced the dasBlog 2.3 ASP.NET Blogging Engine i.e. the newtelligence.DasBlog.Runtime.dll assembly. Let me describe what the code does: Input: A path to a folder where the XML files from the old blogger.com blog reside. It can deal with both types of XML file. A full file path to a file where it creates XML redirect input (as required by the rewriteMap mentioned here). The blog URL. The author's email. The blog author name. A path to an empty folder where the new XML dasBlog content files will get created. The subfolder name used after the domain name in the URL. The 3 reg ex patterns to use. You can use the same as mine, but will need to tweak the monthly_archive rule. Again, to see what values I passed for all the above, see my Program.cs file. Output: It creates dasBlog XML files in the folder specified. It creates those by parsing the old blogger.com XML files that reside in the folder specified. After that is generated, copy it to the "Content" folder under your dasBlog installation. It creates an XML file with a single ignorable root element and a bunch of inner XML elements. You can copy paste these in the web.config file as discussed in this post. Other notes: For each blog post, it detects outgoing links to itself (i.e. to the same blog), and rewrites those to point to the new URLs. So internal links do not rely on the web.config redirects. It deals with duplicate post titles; it does not deal with triplicates and higher. Removes all references to blogger.com (e.g. references to [email protected], the injected hidden footer for statistics that each blog post has and others – see the code). It creates a lot of diagnostic output (in the Output window) and indeed the documentation for the code is in the Debug.WriteLine statements ;) This is not code I will maintain or support – it was a throwaway one-use project that I am sharing here as a starting point for anyone finding themselves in the same boat that I was. Enjoy "as is". Comments about this post welcome at the original blog.

    Read the article

  • Great Customer Service Example

    - by MightyZot
    A few days ago I wrote about what I consider a poor customer service interaction with TiVo, a company that I have been faithful to for the past 12 years or so. In that post I talked about how they helped me, but I felt like I was doing something wrong at the end of the call – when in reality I was just following through with an offer that TiVo made possible through my cable company. Today I had a wonderful customer service interaction with American Express, another company that I have been loyal to for many years.(I am a Gold Card member.) I like my Amex card because I can use it for big purchases and it forces me to pay them off at the end of the month. Well, the reality is that I’m not always so good at doing that, so sometimes my payments are over a couple of months.  :) A few days ago I received an email from “American Express” fraud detection. The email stated that I should call a toll free number and have the last four digits of my card handy. I grew up during the BBS era with some creative and somewhat mischievous friends. I’ve learned to be extremely cautious with regard to my online life! So, I did what you would expect…I sent them a nice reply that said “Go screw yourself.” For the past couple of days someone has been trying to call me and I assumed it was the same prankster trying to get the last four digits of my card. The last caller left a message indicating that they were from American Express and they wanted to talk to me about my card. After looking up their customer service numbers on the www.americanexpress.com web site, I called and was put through to the fraud detection group. The rep explained that there were some charges on my wife’s card that did not fit our purchase profile. She went through each charge and, for the most part, they looked like charges my wife may have made. My wife had asked to use the card for some Christmas shopping during the same timeframe as the charges. The American Express rep very politely explained that these looked out of character to her. She continued through the charges. She listed a charge for $160 – at this point my adrenaline started kicking in. My wife said she was going to charge about $25 or $30 dollars, not $160. Next, the rep listed a charge for over $1200. Uh oh!! Now I know that my account has been compromised. I informed the rep that we definitely did not make those charges. She replied with, “that’s ok Mr Pope, we declined those charges as well as some others.” We went through the pending charges and there were a couple more that were questionable. The rep very patiently waited while I called my wife on my office phone to verify the charges. Sure enough, my wife had not ordered anything from Netflix or purchased anything with Yahoo Wallet! “No problem Mr Pope, we will remove those charges as well.” “We are going to cancel your wife’s card and send her a new one. She will receive it by 7pm tomorrow via Federal Express. Please watch your statements over the next couple of months. If you notice anything fishy, give us a call and we will take care of it for you.” (Wow, I’m thinking to myself!) “Is there anything else I can help you with Mr Pope?” “Nope, thank you very much for catching this so early and declining those charges!”, I said smiling. Apparently she could hear me smiling on the other end of the phone line because she replied with “keep smiling Mr Pope and have a good rest of your week.” Now THAT’s customer service!  Thank you American Express!!! I shall remain an ever faithful customer. Interesting…

    Read the article

  • What’s the use of code reuse?

    - by Tony Davis
    All great developers write reusable code, don’t they? Well, maybe, but as with all statements regarding what “great” developers do or don’t do, it’s probably an over-simplification. A novice programmer, in particular, will encounter in the literature a general assumption of the importance of code reusability. They spend time worrying about DRY (don’t repeat yourself), moving logic into specific “helper” modules that they can then reuse, agonizing about the minutiae of the class structure, inheritance and interface design that will promote easy reuse. Unfortunately, writing code specifically for reuse often leads to complicated object hierarchies and inheritance models that are anything but reusable. If, instead, one strives to write simple code units that are highly maintainable and perform a single function, in a concise, isolated fashion then the potential for reuse simply “drops out” as a natural by-product. Programmers, of course, care about these principles, about encapsulation and clean interfaces that don’t expose inner workings and allow easy pluggability. This is great when it helps with the maintenance and development of code but how often, in practice, do we actually reuse our code? Most DBAs and database developers are familiar with the practical reasons for the limited opportunities to reuse database code and its potential downsides. However, surely elsewhere in our code base, reuse happens often. After all, we can all name examples, such as date/time handling modules, which if we write with enough care we can plug in to many places. I spoke to a developer just yesterday who looked me in the eye and told me that in 30+ years as a developer (a successful one, I’d add), he’d never once reused his own code. As I sat blinking in disbelief, he explained that, of course, he always thought he would reuse it. He’d often agonized over its design, certain that he was creating code of great significance that he and other generations would reuse, with grateful tears misting their eyes. In fact, it never happened. He had in his head, most of the algorithms he needed and would simply write the code from scratch each time, refining the algorithms and tailoring the code to meet the specific requirements. It was, he said, simply quicker to do that than dig out the old code, check it, correct the mistakes, and adapt it. Is this a common experience, or just a strange anomaly? Viewed in a certain light, building code with a focus on reusability seems to hark to a past age where people built cars and music systems with the idea that someone else could and would replace and reuse the parts. Technology advances so rapidly that the next time you need the “same” code, it’s likely a new technique, or a whole new language, has emerged in the meantime, better equipped to tackle the task. Maybe we should be less fearful of the idea that we could write code well suited to the system requirements, but with little regard for reuse potential, and then rewrite a better version from scratch the next time.

    Read the article

  • Who should ‘own’ the Enterprise Architecture?

    - by Michael Glas
    I recently had a discussion around who should own an organization’s Enterprise Architecture. It was spawned by an article titled “Busting CIO Myths” in CIO magazine [1], where the author interviewed Jeanne Ross, director of MIT's Center for Information Systems Research and co-author of books on enterprise architecture, governance and IT value. In the article Jeanne states that companies need to acknowledge that "architecture says everything about how the company is going to function, operate, and grow; the only person who can own that is the CEO". "If the CEO doesn't accept that role, there really can be no architecture."
    The first question that came up when talking about ownership was whether you are talking about a person, role, or organization (there are pros and cons to each, but in general, I like to assign accountability to as few people as possible). After much thought and discussion, I came to the conclusion that we were answering the wrong question. Instead of talking about ownership we were talking about responsibility and accountability, and the answer varies depending on the particular role of the organization’s Enterprise Architecture and the activities of the enterprise architect(s). Instead of looking at just who owns the architecture, think about what the person/role/organization should do. This is one possible scenario (thanks to Bob Covington):
    - The CEO should own the Enterprise Strategy which guides the business architecture.
    - The Business units should own the business processes and information which guide the business, application and information architectures.
    - The CIO should own the technology, IT Governance and the management of the application and information architectures/implementations.
    - The EA Governance Team owns the EA process. If EA is done well, the governance team consists of both IT and the business.
    While there are many more roles and responsibilities than listed here, it starts to provide a clearer understanding of ‘ownership’. Now back to Jeanne’s statement that the CEO should own the architecture. If you agree with the statement about what the architecture is (and I do agree), then ultimately the CEO does need to own it. However, what we ended up with was not really ownership, but more statements around roles and responsibilities tied to aspects of the enterprise architecture. You can debate the semantics of ownership vs. responsibility and accountability, but in the end the important thing is to come to a clearer understanding that is easily communicated (and hopefully measured) around the question “Who owns the Enterprise Architecture”.
    The next logical step . . . create a RACI matrix that details the findings . . . but that is a step that each organization needs to do on their own as it will vary based on current EA maturity, company culture, and a variety of other factors. Who ‘owns’ the Enterprise Architecture in your organization?
[1] CIO Magazine Article (Busting CIO Myths): http://www.cio.com/article/704943/Busting_CIO_Myths

    Read the article

  • Session Update from IASA 2010

    - by [email protected]
    Below: Tom Kristensen, senior vice president at Marsh US Consumer, and Roger Soppe, CLU, LUTCF, senior director of insurance strategy, Oracle Insurance. Tom and Roger participated in a panel discussion on policy administration systems this week at IASA 2010. This week was the 82nd Annual IASA Educational Conference & Business Show held in Grapevine, Texas. While attending the conference, I had the pleasure of serving as a panelist in one of many of the outstanding sessions conducted this year. The session - entitled "Achieving Business Agility and Promoting Growth with a Modern Policy Administration System" - included industry experts Steve Forte from OneShield, Mike Sciole of IFG Companies, and Tom Kristensen, senior vice president at Marsh US Consumer. The session was conducted as a panel discussion and focused on how insurers can leverage best practices to mitigate risk while enabling rapid product innovation through a modern policy administration system. The panelists offered insight into business and technical challenges for both Life & Annuity and Property & Casualty carriers. The session had three primary learning objectives: Identifying how replacing a legacy system with a more modern policy administration solution can deliver agility and growth Identifying how processes and system should be re-engineered or replaced in order to improve speed-to-market and product support Uncovering how to leverage best practices to mitigate risk during a migration to a new platform Tom Kristensen, who is an industry veteran with over 20 years of experience, was able was able to offer a unique perspective as a business process outsourcer (BPO). Marsh US Consumer is currently implementing both the Oracle Insurance Policy Administration solution and the Oracle Revenue Management and Billing platform while at the same time implementing a new BPO customer. Tom offered insight on the need to replace their aging systems and Marsh's ability to drive new products and processes with a modern solution. As a best practice, their current project has empowered their business users to play a major role in both the requirements gathering and configuration phases. Tom stated that working with a modern solution has also enabled his organization to use a more agile implementation methodology and get hands-on experience with the software earlier in the project. He also indicated that Marsh was encouraged by how quickly it will be able to implement new products, which is another major advantage of a modern rules-based system. One of the more interesting issues was raised by an audience member who asked, "With all the vendor solutions available in North American and across Europe, what is going to make some of them more successful than others and help ensure their long term success?" Panelist Mike Sciole, IFG Companies suggested that carriers do their due diligence and follow a structured evaluation process focusing on vendors who demonstrate they have the "cash to invest in long term R&D" and evaluate audited annual statements for verification. Other panelists suggested that the vendor space will continue to evolve and those with a strong strategy focused on the insurance industry and a solid roadmap will likely separate themselves from the rest. The session concluded with the panelists offering advice about not being afraid to evaluate new modern systems. 
While migrating to a new platform can be challenging, and is typically undertaken only every 15+ years by carriers, the ability to rapidly deploy and manage new products, to create consistent processes to better service customers, and to manage the business more effectively, transparently and securely is well worth the effort. Roger A. Soppe, CLU, LUTCF, is the Senior Director of Insurance Strategy, Oracle Insurance.

    Read the article

  • Exception Handling Frequency/Log Detail

    - by Cyborgx37
    I am working on a fairly complex .NET application that interacts with another application. Many single-line statements are possible culprits for throwing an Exception and there is often nothing I can do to check the state before executing them to prevent these Exceptions. The question is, based on best practices and seasoned experience, how frequently should I lace my code with try/catch blocks? I've listed three examples below, but I'm open to any advice. I'm really hoping to get some pros/cons of various approaches. I can certainly come up with some of my own (greater log granularity for the O-C approach, better performance for the Monolithic approach), so I'm looking for experience over opinion. EDIT: I should add that this application is a batch program. The only "recovery" necessary in most cases is to log the error, clean up gracefully, and quit. So this could be seen to be as much a question of log granularity as exception handling. In my mind's eye I can imagine good reasons for both, so I'm looking for some general advice to help me find an appropriate balance.

    Monolithic Approach

        class Program {
            public static void Main() {
                try {
                    Step1();
                    Step2();
                    Step3();
                } catch (Exception e) {
                    Log(e);
                } finally {
                    CleanUp();
                }
            }
            public static void Step1() {
                ExternalApp.Dangerous1();
                ExternalApp.Dangerous2();
            }
            public static void Step2() {
                ExternalApp.Dangerous3();
                ExternalApp.Dangerous4();
            }
            public static void Step3() {
                ExternalApp.Dangerous5();
                ExternalApp.Dangerous6();
            }
        }

    Delegated Approach

        class Program {
            public static void Main() {
                try {
                    Step1();
                    Step2();
                    Step3();
                } finally {
                    CleanUp();
                }
            }
            public static void Step1() {
                try {
                    ExternalApp.Dangerous1();
                    ExternalApp.Dangerous2();
                } catch (Exception e) {
                    Log(e);
                    throw;
                }
            }
            public static void Step2() {
                try {
                    ExternalApp.Dangerous3();
                    ExternalApp.Dangerous4();
                } catch (Exception e) {
                    Log(e);
                    throw;
                }
            }
            public static void Step3() {
                try {
                    ExternalApp.Dangerous5();
                    ExternalApp.Dangerous6();
                } catch (Exception e) {
                    Log(e);
                    throw;
                }
            }
        }

    Obsessive-Compulsive Approach

        class Program {
            public static void Main() {
                try {
                    Step1();
                    Step2();
                    Step3();
                } finally {
                    CleanUp();
                }
            }
            public static void Step1() {
                try { ExternalApp.Dangerous1(); } catch (Exception e) { Log(e); throw; }
                try { ExternalApp.Dangerous2(); } catch (Exception e) { Log(e); throw; }
            }
            public static void Step2() {
                try { ExternalApp.Dangerous3(); } catch (Exception e) { Log(e); throw; }
                try { ExternalApp.Dangerous4(); } catch (Exception e) { Log(e); throw; }
            }
            public static void Step3() {
                try { ExternalApp.Dangerous5(); } catch (Exception e) { Log(e); throw; }
                try { ExternalApp.Dangerous6(); } catch (Exception e) { Log(e); throw; }
            }
        }

    Other approaches welcomed and encouraged. Above are examples only.
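    Not part of the question, just a hedged sketch of a middle ground: a small helper gives the per-step logging of the Delegated approach without repeating the try/catch boilerplate. It assumes the same Log and CleanUp helpers used above and that Log can accept a string.

        using System;

        class Program {
            public static void Main() {
                try {
                    RunLogged("Step1", () => { ExternalApp.Dangerous1(); ExternalApp.Dangerous2(); });
                    RunLogged("Step2", () => { ExternalApp.Dangerous3(); ExternalApp.Dangerous4(); });
                    RunLogged("Step3", () => { ExternalApp.Dangerous5(); ExternalApp.Dangerous6(); });
                } finally {
                    CleanUp();
                }
            }

            // Wraps one logical step so the log records which step failed.
            static void RunLogged(string stepName, Action step) {
                try {
                    step();
                } catch (Exception e) {
                    Log(stepName + " failed: " + e);
                    throw;
                }
            }
        }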

    Read the article

  • How John Got 15x Improvement Without Really Trying

    - by rchrd
    The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving. So I'm republishing it here.  How I Got 15x Improvement Without Really Trying John Feo, Sun Microsystems Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple general optimization techniques. Introduction Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and insure that monies supporting scientific research are used as effectively as possible. Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran. Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA. Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor-to-student or student-to-student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes. Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile. Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties. 
Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize. Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliation are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized.
The study's results show that personal scientific codes are running many times slower than they should and that the problem is pervasive. Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices; but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research.

    #    cache performance    redundant operations    loop structures    performance improvement
    1    x    x    15.5
    2    x    2.8
    3    x    x    2.5
    4    x    2.1
    5    x    x    2.0
    6    x    5.0
    7    x    5.8
    8    x    6.3
    9    2.2
    10   x    x    3.3

    Table 1 — Area of improvement and performance gains of 10 codes

The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiencies in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summarizes the work and suggests a possible solution to the issues raised.

Optimizing cache performance

Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do. When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse.
6 out of the 10 codes studied here benefited from such high level optimizations. Array Accesses The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing do I = 0, 1010, delta_x IM = I - delta_x IP = I + delta_x do J = 5, 995, delta_x JM = J - delta_x JP = J + delta_x T1 = CA1(IP, J) + CA1(I, JP) T2 = CA1(IM, J) + CA1(I, JM) S1 = T1 + T2 - 4 * CA1(I, J) CA(I, J) = CA1(I, J) + D * S1 end do end do In code 2, the culprit is conditionals do I = 1, N do J = 1, N If (IFLAG(I,J) .EQ. 0) then T1 = Value(I, J-1) T2 = Value(I-1, J) T3 = Value(I, J) T4 = Value(I+1, J) T5 = Value(I, J+1) Value(I,J) = 0.25 * (T1 + T2 + T5 + T4) Delta = ABS(T3 - Value(I,J)) If (Delta .GT. MaxDelta) MaxDelta = Delta endif enddo enddo I fixed both programs by inverting the loops by hand. Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10. Code 5 has four-dimensional arrays and loops are nested four deep. For all of the reasons cited above the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops is L1: for i L2: for i L3: for i for l for l for j for k for j for k for j for k for l So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now: i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops aligning the loop with cache. Code 5 is by far the most difficult of the four codes to optimize for array accesses; but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists. Array Strides When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, than the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes. 
Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into continuous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes do j = 1, GZ do i = 1, GZ T1 = CA(i+0, j-1) + CA(i-1, j+0) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) S1 = T1 + T4 - 4 * CA1(i+0, j+0) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 enddo enddo where CA and CA1 are compressed arrays of size GZ. Code 7 traverses a list of objects selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection. Data reuse In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3). In code 1, unrolling the outer and inner loop one iteration increases the number of result values computed by the loop body from 1 to 4, do J = 1, GZ-2, 2 do I = 1, GZ-2, 2 T1 = CA1(i+0, j-1) + CA1(i-1, j+0) T2 = CA1(i+1, j-1) + CA1(i+0, j+0) T3 = CA1(i+0, j+0) + CA1(i-1, j+1) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) T5 = CA1(i+2, j+0) + CA1(i+1, j+1) T6 = CA1(i+1, j+1) + CA1(i+0, j+2) T7 = CA1(i+2, j+1) + CA1(i+1, j+2) S1 = T1 + T4 - 4 * CA1(i+0, j+0) S2 = T2 + T5 - 4 * CA1(i+1, j+0) S3 = T3 + T6 - 4 * CA1(i+0, j+1) S4 = T4 + T7 - 4 * CA1(i+1, j+1) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2 CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3 CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4 enddo enddo The loop body executes 12 reads, whereas as the rolled loop shown in the previous section executes 20 reads to compute the same four values. 
In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before code

    for (k = 0; k < NK[u]; k++) {
      sum = 0.0;
      for (y = 0; y < NY; y++) {
        sum += W[y][u][k] * delta[y];
      }
      backprop[i++] = sum;
    }

and the after code

    for (k = 0; k < KK - 8; k+=8) {
      sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
      sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
      for (y = 0; y < NY; y++) {
        sum0 += W[y][0][k+0] * delta[y];
        sum1 += W[y][0][k+1] * delta[y];
        sum2 += W[y][0][k+2] * delta[y];
        sum3 += W[y][0][k+3] * delta[y];
        sum4 += W[y][0][k+4] * delta[y];
        sum5 += W[y][0][k+5] * delta[y];
        sum6 += W[y][0][k+6] * delta[y];
        sum7 += W[y][0][k+7] * delta[y];
      }
      backprop[k+0] = sum0;
      backprop[k+1] = sum1;
      backprop[k+2] = sum2;
      backprop[k+3] = sum3;
      backprop[k+4] = sum4;
      backprop[k+5] = sum5;
      backprop[k+6] = sum6;
      backprop[k+7] = sum7;
    }

for one of the loops unrolled 8 times.

Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends.

Reducing instruction count

Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques.

The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent.

Memory operations

The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory.

Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3:

    for (y = 0; y < NY; y++) {
      i = 0;
      for (u = 0; u < NU; u++) {
        for (k = 0; k < NK[u]; k++) {
          dW[y][u][k] += delta[y] * I1[i++];
        }
      }
    }

Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y].
In reality, dW and delta do not overlap in memory, so I rewrote the loop as

    for (y = 0; y < NY; y++) {
      i = 0;
      Dy = delta[y];
      for (u = 0; u < NU; u++) {
        for (k = 0; k < NK[u]; k++) {
          dW[y][u][k] += Dy * I1[i++];
        }
      }
    }

Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler cannot determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays:

    #define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + (i)*(a)->strides[3] + (j)*(a)->strides[2] + (k)*(a)->strides[1])

The macro is too complex for the compiler to understand and so it does not identify any subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define

    a0 = MAT4D(a,q,0,j,k)

before the loop and then replace all instances of *MAT4D(a,q,i,j,k) in the loop with a0[i].

A similar problem appears in code 6, a Fortran program. The key loop in this program is

    do n1 = 1, nh
      nx1 = (n1 - 1) / nz + 1
      nz1 = n1 - nz * (nx1 - 1)
      do n2 = 1, nh
        nx2 = (n2 - 1) / nz + 1
        nz2 = n2 - nz * (nx2 - 1)
        ndx = nx2 - nx1
        ndy = nz2 - nz1
        gxx = grn(1,ndx,ndy)
        gyy = grn(2,ndx,ndy)
        gxy = grn(3,ndx,ndy)
        balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1
        balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy) * h1
      end do
    end do

The programmer has written this loop well—there are no loop invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time and the index calculations are independent with respect to time. Trading space for time, I precomputed the index values prior to entering the time loop and stored the values in two arrays. I then replaced the index calculations with reads of the arrays.

Data operations

Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 <= i < N, 0 <= j < M, for most values of i and j. Since the program computes most of the N x M possible inner products, it is more efficient to compute all the inner products in one triply-nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling.

    for (i = 0; i < N; i+=8) {
      for (j = 0; j < M; j++) {
        sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
        sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
        for (k = 0; k < K; k++) {
          sum0 += A[i+0][k] * B[j][k];
          sum1 += A[i+1][k] * B[j][k];
          sum2 += A[i+2][k] * B[j][k];
          sum3 += A[i+3][k] * B[j][k];
          sum4 += A[i+4][k] * B[j][k];
          sum5 += A[i+5][k] * B[j][k];
          sum6 += A[i+6][k] * B[j][k];
          sum7 += A[i+7][k] * B[j][k];
        }
        C[i+0][j] = sum0;
        C[i+1][j] = sum1;
        C[i+2][j] = sum2;
        C[i+3][j] = sum3;
        C[i+4][j] = sum4;
        C[i+5][j] = sum5;
        C[i+6][j] = sum6;
        C[i+7][j] = sum7;
      }
    }

This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer. In code 5, we have the data version of the index optimization in code 6.
Here a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time.

The main loop in code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index, while others are a function of the outer loop index but not the inner loop index:

    for (j = 0; j < N; j++) {
      for (i = 0; i < M; i++) {
        r = i * hrmax;
        R = A[j];
        temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]);
        high = temp * kcoeff * B[j] * PRM[2] * PRM[4];
        low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0));
        kap = (R > PRM[6]) ?
              high * R * R / (1.0 + pow(PRM[4]*r, 2.0)) :
              low * pow(R/PRM[6], PRM[5]);
        < rest of loop omitted >
      }
    }

Note that the value of temp is invariant to j. Thus, we can hoist the computation for temp out of the loop and save its values in an array.

    for (i = 0; i < M; i++) {
      r = i * hrmax;
      TEMP[i] = pow(r, PRM[3]);
    }

[N.B. – the case for PRM[3] = 0 is omitted and will be reintroduced later.]

We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is

    for (j = 0; j < N; j++) {
      R = rig[j] / 1000.;
      tmp1 = kcoeff * par[2] * beta[j] * par[4];
      tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]);
      tmp3 = 1.0 + (par[4] * par[4] * R * R);
      tmp4 = par[6] * par[6] / tmp2;
      tmp5 = R * R / tmp3;
      tmp6 = pow(R / par[6], par[5]);
      if ((par[3] == 0.0) && (R > par[6])) {
        for (i = 1; i <= imax1; i++)
          KAP[i] = tmp1 * tmp5;
      } else if ((par[3] == 0.0) && (R <= par[6])) {
        for (i = 1; i <= imax1; i++)
          KAP[i] = tmp1 * tmp4 * tmp6;
      } else if ((par[3] != 0.0) && (R > par[6])) {
        for (i = 1; i <= imax1; i++)
          KAP[i] = tmp1 * TEMP[i] * tmp5;
      } else if ((par[3] != 0.0) && (R <= par[6])) {
        for (i = 1; i <= imax1; i++)
          KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6;
      }
      for (i = 0; i < M; i++) {
        kap = KAP[i];
        r = i * hrmax;
        < rest of loop omitted >
      }
    }

Maybe not the prettiest piece of code, but certainly much more efficient than the original loop.

Copy operations

Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages.

Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying; but pointers are not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2).
Then store the problem’s initial values in (_, _, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays.

The old–new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset data structures. Where unnecessary copying did occur is in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers.

Optimizing loop structures

Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate or isolate conditionals to their own loops as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet

    MaxDelta = 0.0
    do J = 1, N
      do I = 1, M
        < code omitted >
        Delta = abs(OldValue - NewValue)
        if (Delta > MaxDelta) MaxDelta = Delta
      enddo
    enddo
    if (MaxDelta .gt. 0.001) goto 200

Since the only use of MaxDelta is to control the jump to 200, and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as

    MaxDelta = .false.
    do J = 1, N
      do I = 1, M
        < code omitted >
        Delta = abs(OldValue - NewValue)
        MaxDelta = MaxDelta .or. (Delta .gt. 0.001)
      enddo
    enddo
    if (MaxDelta) goto 200

thereby eliminating the conditional expression from the inner loop.

A microprocessor can execute many instructions per instruction cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefitted from loop unrolling, but none benefitted from loop fusion. This observation is not too surprising since it is the general tendency of programmers to write thick loops. As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops, eliminating the need to write and read temporary arrays.
I found such an occasion in code 10 where I split the loop

    do i = 1, n
      do j = 1, m
        A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
        B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
        A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
        B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
        C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
        D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
        C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
        D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
      end do
    end do

into two disjoint loops

    do i = 1, n
      do j = 1, m
        A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
        B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
        A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
        B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
      end do
    end do

    do i = 1, n
      do j = 1, m
        C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
        D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
        C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
        D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
      end do
    end do

Conclusions

Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to improve significantly the single processor performance of all codes. Improvements range from 2x to 15.5x with a simple average of 4.75x. Changes to the source code were at a very high level. I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel, despite the availability of parallel systems to all developers.

Clearly, we have a problem—personal scientific research codes are highly inefficient and not running in parallel. The developers are unaware of simple optimization techniques to make programs run faster. They lack education in the art of code optimization and parallel programming. I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in question have not studied the books or manuals available, and are unlikely to do so in the future. Short courses are a possible solution, but I believe they are too concentrated to be of much use. The general concepts can be taught in a three or four day course, but that is not enough time for students to practice what they learn and acquire the experience to apply and extend the concepts to their codes. Practice is the key to becoming proficient at optimization.

I recommend that graduate students be required to take a semester-length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is at most an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or can use the system effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research as well as the development of most personal scientific codes.
These agencies should require graduate schools to offer a course in optimization and parallel programming as a requirement for funding.

About the Author

John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company where he was project manager for the MTA, and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.

    Read the article

  • PHP Web Services - Nice try

    Thanks to the membership in the O'Reilly User Group Programme the Mauritius Software Craftsmanship Community (short: MSCC) recently received a welcome package with several book titles. Among them is the latest publication of Lorna Jane Mitchell - 'PHP Web Services: APIs for the Modern Web'. Following is the book review I put on Amazon: Nice try! Initially, I was astonished that a small book like 'PHP Web Services' would be able to cover all the interesting topics about APIs and Web Services, independently whether they are written in PHP or not. And unfortunately, the title isn't able to stand up to the readers (or at least my) expectations. Maybe as a light defense, there is no usual paragraph about the intended audience of that book, but still I have to admit that the first half (chapters 1 to 8) are well written and Lorna has her points on the various technologies. Also, the code samples in PHP are clean and easy to understand. With chapter 'Debugging Web Services' the book started to change my mind about the clarity of advice and the instructions on designing and developing good APIs. Eventually, this might be related to the fact that I'm used to other tools since years, like Telerik Fiddler as HTTP proxy in order to trace and inspect any kind of request/response handling. Including localhost monitoring, SSL certification acceptance, and the ability to debug mobile devices, especially iOS-based ones. Compared to Charles, Fiddler is available for free. What really got me off the hook is the following statement in chapter 10 about Service Type Decisions: "For users who have larger systems using technology stacks such as Java, C++, or .NET, it may be easier for them to integrate with a SOAP service." WHAT? A couple of pages earlier the author recommends to stay away from 'old-fashioned' API styles like SOAP (if possible). And on top of that I wonder why there are tons of documentation towards development of RESTful Web Services based on WebAPI. The ASP.NET stack clearly moves away from SOAP to JSON and REST since years! Honestly, as a software developer on the .NET stack this leaves a mixed feeling after all. As for the remaining chapters I simply consider them as 'blah blah' without any real value and lots of theoretical advice. Related to the chapter 13 about 'Documentation', I just had the 'pleasure' to write a C#-based client against a Java-based SOAP Web Service. Personally, I take the WSDL as the master reference in the first place and Visual Studio generates all the stub types involved in the communication. During the implementation and testing I came across a 'java.lang.NullPointerException' in various methods and for various method parameters. The WSDL and the generated types were declared as Nullable, so nothing to worry about, or? Well, I logged in a support ticket, and guess what was the response to that scenario? "The service definition in the WSDL is wrong, please refer to the documentation in order to use the methods and parameters correctly" - No comment! Lorna's title is a quick read and in some areas she has good advice on designing and implementing Web Services and APIs. But roughly 100 pages aren't enough to cover a vast topic like that. After all, nice try and I'm looking forward to an improved second edition. Honestly, I never thought that I would come across a poor review. In general, it's a good book but it clearly has a lack of depth, the PHP code samples are incomplete (closing tags missing), and there are too many assumptions and theoretical statements.

    Read the article

  • Does my use of the strategy pattern violate the fundamental MVC pattern in iOS?

    - by Goodsquirrel
I'm about to use the 'strategy' pattern in my iOS app, but feel like my approach somehow violates the fundamental MVC pattern. My app is displaying visual "stories", and a Story consists (i.e. has @properties) of one Photo and one or more VisualEvent objects to represent e.g. animated circles or moving arrows on the photo. Each VisualEvent object therefore has an eventType @property that might be e.g. kEventTypeCircle or kEventTypeArrow. All events have things in common, like a startTime @property, but differ in the way they are being drawn on the StoryPlayerView. Currently I'm trying to follow the MVC pattern and have a StoryPlayer object (my controller) that knows about both the model objects (like Story and all kinds of visual events) and the view object StoryPlayerView. To choose the right drawing code for each of the different visual event types, my StoryPlayer is using a switch statement. @implementation StoryPlayer // (...) - (void)showVisualEvent:(VisualEvent *)event onStoryPlayerView:storyPlayerView { switch (event.eventType) { case kEventTypeCircle: [self showCircleEvent:event onStoryPlayerView:storyPlayerView]; break; case kEventTypeArrow: [self showArrowDrawingEvent:event onStoryPlayerView:storyPlayerView]; break; // (...) } But switch statements for type checking are bad design, aren't they? According to Uncle Bob they lead to tight coupling and can and should almost always be replaced by polymorphism. Having read about the "Strategy"-Pattern in Head First Design Patterns, I felt this was a great way to get rid of my switch statement. So I changed the design like this: All specialized visual event types are now subclasses of an abstract VisualEvent class that has a showOnStoryPlayerView: method. @interface VisualEvent : NSObject - (void)showOnStoryPlayerView:(StoryPlayerView *)storyPlayerView; // abstract Each and every concrete subclass implements a concrete specialized version of this drawing behavior method. @implementation CircleVisualEvent - (void)showOnStoryPlayerView:(StoryPlayerView *)storyPlayerView { [storyPlayerView drawCircleAtPoint:self.position color:self.color lineWidth:self.lineWidth radius:self.radius]; } The StoryPlayer now simply calls the same method on all types of events. @implementation StoryPlayer - (void)showVisualEvent:(VisualEvent *)event onStoryPlayerView:storyPlayerView { [event showOnStoryPlayerView:storyPlayerView]; } The result seems to be great: I got rid of the switch statement, and if I ever have to add new types of VisualEvents in the future, I simply create new subclasses of VisualEvent. And I won't have to change anything in StoryPlayer. But of course this approach violates the MVC pattern since now my model has to know about and depend on my view! Now my controller talks to my model and my model talks to the view calling methods on StoryPlayerView like drawCircleAtPoint:color:lineWidth:radius:. But this kind of call should be controller code, not model code, right? Seems to me like I made things worse. I'm confused! Am I completely missing the point of the strategy pattern? Is there a better way to get rid of the switch statement without breaking model-view separation?

    Read the article

  • My Doors - Why Standards Matter to Business

    - by Brian Dayton
    "Standards save money." "Standards accelerate projects." "Standards make better solutions."   What do these statements mean to you? You buy technology solutions like Oracle Applications but you're a business person--trying to close the quarter, get performance reviews processed, negotiate a new sourcing contract, etc.   When "standards" come up in presentations and discussions do you: -          Nod your head politely -          Tune out and check your smart phone -          Turn to your IT counterpart and say "Bob's all over this standards thing, right Bob?"   Here's why standards matter. My wife wants new external doors downstairs, ones that would get more light into the rooms. Am I OK with that? "Uhh, sure...it's a little dark in the kitchen."   -          24 hours ago - wife calls to tell me that she's going to the hardware store and may look at doors -          20 hours ago - wife pulls into driveway, informs me that two doors are in the back of her station wagon, ready for me to carry -          19 hours ago - I re-discovered the fact that it's not fun to carry a solid wood door by myself -          5 hours ago - Local handyman, who was at our house anyway, tells me that the doors we bought will likely cost 2-3x the material cost in installation time and labor...the doors are standard but our doorways aren't   We could have done more research. I could be more handy. Sure. But the fact is, my 1951 house wasn't built with me in mind. They built what worked and called it a day.   The same holds true with a lot of business applications. They were designed and architected for one-time use with one use-case in mind. Today's business climate is different. If you're going to use your processes and technology to differentiate your business you should have at least a working knowledge of: -          How standards can benefit your business -          Your IT organization's philosophy around standards -          Your vendor's track-record around standards...and watch for those who pay lip-service to standards but don't follow through   The rallying cry in most IT organizations today is "learn more about the business, drop the acronyms." I'm not advocating that you go out and learn how to code in Java. But I do believe it will help your business and your decision-making process if you meet IT ½...even ¼ of the way there.   Epilogue: The door project has been put on hold and yours truly has to return the doors to the hardware store tomorrow.

    Read the article

  • Silverlight Cream for February 07, 2011 -- #1043

    - by Dave Campbell
    In this Issue: Roy Dallal, Kevin Dockx, Gill Cleeren, Oren Gal, Colin Eberhardt, Rudi Grobler, Jesse Liberty, Shawn Wildermuth, Kirupa Chinnathambi, Jeremy Likness, Martin Krüger(-2-), Beth Massi, and Michael Crump. Above the Fold: Silverlight: "A Circular ProgressBar Style using an Attached ViewModel" Colin Eberhardt WP7: "Isolated Storage" Jesse Liberty Lightswitch: "How To Create Outlook Appointments from a LightSwitch Application" Beth Massi Shoutouts: Gergely Orosz has a summary of his 4-part series on Styles in Silverlight: Everything a Developer Needs To Know From SilverlightCream.com: Silverlight Memory Leak, Part 2 Roy Dallal has part 2 of his memory leak posts up... and discusses the results of runnin VMMap and some hints on how to make best use of it. Using a Channel Factory in Silverlight (instead of adding a Service Reference). With cows. Kevin Dockx has a post up for those of you that don't like the generated code that comes about when adding a service reference, and the answer is a Channel Factory... and he has an example app in the post that populates a list of cows... honest ... check it out. Getting ready for Microsoft Silverlight Exam 70-506 (Part 4) Gill Cleeren has Part 4 of his deep-dive into studying for the Silverlight Certification exam. This time out he's got probably half a gazillion links for working with data... seriously! Sync unlimited instances of one Silverlight application How about a cross-browser sync of an unlimited number of instances of the same Silverlight app... Oren Gal has just that going on, and discusses his first two attempts and how he finally honed in on the solution. A Circular ProgressBar Style using an Attached ViewModel Wow... check out what Colin Eberhardt's done with the "Progress Bar" ... using an Attached View Model which he discussed in a post a while back... these are awesome! WP7 - Professional Audio Recorder Rudi Grobler discusses an audio recorder for WP7 that uses the NAudio audio library for not only the recording but visualization. Isolated Storage Jesse Liberty's got his 30th 'From Scratch' post up and this time he's talking about Isolated Storage. Learning OData? MSDN and Shawn Wildermuth has the videos for you! Shawn Wildermuth produced a couple series of videos for MSDN on OData: Getting Started and Consuming OData... get the link on Shawn's post. Creating Sample Data from a Class - Page 1 Kirupa Chinnathambi shows us how to use a schema of your own design in Blend... yet still have Blend produce sample data A Pivot-Style Data Grid without the DataGrid Jeremy Likness discusses the lack of an open-source grid with dynamic columns ... let him know if you've done one! ... and then he continues on to demonstrate his build-out of the same. Synchronize a freeform drawing and a real path creation Martin Krüger has a few new samples up in the Expression Gallery. This first is taking mouse movement in an InkPresenter and creating path statements from it in a canvas and playing them back. How to: use Storyboard completed behaviors Martin Krüger's next post is about Storyboards and firing one off the end of another, in Blend... so he ended up producing a behavior for doing that... and it's in the Expression Gallery How To Create Outlook Appointments from a LightSwitch Application Beth Massi has a new Lightswitch post up... her previous was email from Lightswitch... this is Outlook appointments... pretty darn cool. 
Quick run through of the WP7 Developer Tools January 2011 Michael Crump has a really good Quick look at the new WP7 Dev Tools that were released last week posted on his blog Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Why SQL Developer Rocks for the Advanced User Too

    - by thatjeffsmith
While SQL Developer may be ‘perfect for Oracle beginners,’ that doesn’t preclude advanced and intermediate users from getting their fair share of toys! I’ve been working with Oracle since the 7.3.4 days, and I think it’s pretty safe to say that the WAY an ‘old timer’ uses a tool like SQL Developer is radically different than the ‘beginner.’ If you’ve been reluctant to use SQL Developer because it’s a GUI, give me a few minutes to try to convince you it’s worth a second (or third) look.

1. Help when you want it, and only when you want it
One of the biggest gripes any user has with a piece of software is when said software can’t get out of its own way. When you’re typing in a word processor, sometimes you can do without the grammar and spelling checks, the offer to auto-complete your words, and all of the additional mark-up. This drives folks to programs like Notepad++ and vi. You can disable the code insight feature so you can type unmolested by SQL Developer’s attempt to auto-complete your object names. Now, if you happen to come across a long or hard to spell object name, you can still invoke the feature on demand using Ctrl+Spacebar. Code Editor – Completion Insight – Enable Completion Auto-Popup (Keyword being Auto)

2. Automatic File Tracking
SQL*Minus is nice. Vi is cool. Notepad++ has a lot of features I like. But not too many editors offer automatic logging of changes to your files without having to set up a source control system. I was doing some work on my login.sql. I’m not doing anything crazy, but seeing what I had done in previous iterations was helpful. Now imagine how nice it would be to have this available for your 1,000+ line scripts! Track your scripts as they change, no setup required!

3. Extend the Functionality
Know SQL and XML? Wish SQL Developer did JUST a little bit more? Build your own extensions. You can have custom context menus and object pages in just a few minutes. This is an example of lazy developers writing code that writes code.

4. Get Your Money’s Worth
You’ve licensed Enterprise Edition. You got your Diagnostic and Tuning packs. Now start using them! Not everyone has access to Enterprise Manager, especially developers. But that doesn’t mean they don’t need help with troubleshooting and optimizing poorly performing SQL statements. ASH, AWR, Real-Time SQL Monitoring and the SQL Tuning Advisor are built into the Reports and Worksheet. Yes you could make the package calls, but that’s a whole lot of typing, and I’d rather just get to the results.

5. Profile, Debug, & Unit Testing PLSQL
An Interactive Development Environment (IDE) built by the same folks that own the programming language (Hello – Oracle PLSQL!) should be complete. It should ‘hug’ the developer and empower them to churn out programs that work, run fast, and are easy to maintain. Write it, test it, debug it, and tune it. When you’re running your programs and you just want to see the data that’s returned, that shouldn’t require any special settings or workaround to make it happen either. Magic!

And a whole lot more… I could go on and talk about the support for things like DataPump, RMAN, and DBMS_SCHEDULER, but you’re experts and you’re plenty busy. If you think SQL Developer is falling short somewhere, I want you to let us know about it.

    Read the article

  • Database Mirroring on SQL Server Express Edition

    - by Most Valuable Yak (Rob Volk)
    Like most SQL Server users I'm rather frustrated by Microsoft's insistence on making the really cool features only available in Enterprise Edition.  And it really doesn't help that they changed the licensing for SQL 2012 to be core-based, so now it's like 4 times as expensive!  It almost makes you want to go with Oracle.  That, and a desire to have Larry Ellison do things to your orifices. And since they've introduced Availability Groups, and marked database mirroring as deprecated, you'd think they'd make make mirroring available in all editions.  Alas…they don't…officially anyway.  Thanks to my constant poking around in places I'm not "supposed" to, I've discovered the low-level code that implements database mirroring, and found that it's available in all editions! It turns out that the query processor in all SQL Server editions prepends a simple check before every edition-specific DDL statement: IF CAST(SERVERPROPERTY('Edition') as nvarchar(max)) NOT LIKE '%e%e%e% Edition%' print 'Lame' else print 'Cool' If that statement returns true, it fails. (the print statements are just placeholders)  Go ahead and test it on Standard, Workgroup, and Express editions compared to an Enterprise or Developer edition instance (which support everything). Once again thanks to Argenis Fernandez (b | t) and his awesome sessions on using Sysinternals, I was able to watch the exact process SQL Server performs when setting up a mirror.  Surprisingly, it's not actually implemented in SQL Server!  Some of it is, but that's something of a smokescreen, the real meat of it is simple filesystem primitives. The NTFS filesystem supports links, both hard links and symbolic, so that you can create two entries for the same file in different directories and/or different names.  You can create them using the MKLINK command in a command prompt: mklink /D D:\SkyDrive\Data D:\Data mklink /D D:\SkyDrive\Log D:\Log This creates a symbolic link from my data and log folders to my Skydrive folder.  Any file saved in either location will instantly appear in the other.  And since my Skydrive will be automatically synchronized with the cloud, any changes I make will be copied instantly (depending on my internet bandwidth of course). So what does this have to do with database mirroring?  Well, it seems that the mirroring endpoint that you have to create between mirror and principal servers is really nothing more than a Skydrive link.  Although it doesn't actually use Skydrive, it performs the same function.  So in effect, the following statement: ALTER DATABASE Mir SET PARTNER='TCP://MyOtherServer.domain.com:5022' Is turned into: mklink /D "D:\Data" "\\MyOtherServer.domain.com\5022$" The 5022$ "port" is actually a hidden system directory on the principal and mirror servers. I haven't quite figured out how the log files are included in this, or why you have to SET PARTNER on both principal and mirror servers, except maybe that mklink has to do something special when linking across servers.  I couldn't get the above statement to work correctly, but found that doing mklink to a local Skydrive folder gave me similar functionality. 
To wrap this up, all you have to do is the following: Install Skydrive on both SQL Servers (principal and mirror) and set the local Skydrive folder (D:\SkyDrive in these examples) On the principal server, run mklink /D on the data and log folders to point to SkyDrive: mklink /D D:\SkyDrive\Data D:\Data On the mirror server, run the complementary linking: mklink /D D:\Data D:\SkyDrive\Data Create your database and make sure the files map to the principal data and log folders (D:\Data and D:\Log) Viola! Your databases are kept in sync on multiple servers! One wrinkle you will encounter is that the mirror server will show the data and log files, but you won't be able to attach them to the mirror SQL instance while they are attached to the principal. I think this is a bug in the Skydrive, but as it turns out that's fine: you can't access a mirror while it's hosted on the principal either.  So you don't quite get automatic failover, but you can attach the files to the mirror if the principal goes offline.  It's also not exactly synchronous, but it's better than nothing, and easier than either replication or log shipping with a lot less latency. I will end this with the obvious "not supported by Microsoft" and "Don't do this in production without an updated resume" spiel that you should by now assume with every one of my blog posts, especially considering the date.

    Read the article

  • AdventureWorks2012 now available for all on SQL Azure

    - by jamiet
Three days ago I tweeted this: Idea. MSFT could host read-only copies of all the [AdventureWorks] DBs up on #sqlazure for the SQL community to use. RT if agree #sqlfamily — Jamie Thomson (@jamiet) March 24, 2012 Evidently I wasn't the only one that thought this was a good idea because as you can see from the screenshot that tweet has, so far, been retweeted more than fifty times. Clearly there is a desire to see the AdventureWorks databases made available for the community to noodle around on so I am pleased to announce that as of today you can do just that - [AdventureWorks2012] now resides on SQL Azure and is available for anyone, absolutely anyone, to connect to and use* for their own means. *By use I mean "issue some SELECT statements". You don't have permission to issue INSERTs, UPDATEs, DELETEs or EXECUTEs I'm afraid - if you want to do that then you can get the bits and host it yourself. This database is free for you to use but SQL Azure is of course not free so before I give you the credentials please lend me your eyes for a short while longer. AdventureWorks on Azure is being provided for the SQL Server community to use and so I am hoping that that same community will rally around to support this effort by making a voluntary donation to support the upkeep which, going on current pricing, is going to be $119.88 per year. If you would like to contribute to keep AdventureWorks on Azure up and running for that full year please donate via PayPal to [email protected]: Any amount, no matter how small, will help. If those 50+ people that retweeted me beforehand all contributed $2 then that would just about be enough to keep this up for a year. If the community contributes more than we need then there are a number of additional things that could be done:

Host additional databases (Northwind anyone??)
Host in more datacentres (this first one is in Western Europe)
Make a charitable donation

That last one, a charitable donation, is something I would really like to do. The SQL Community have proved before that they can make a significant contribution to charitable organisations through purchasing the SQL Server MVP Deep Dives book and I harbour hopes that AdventureWorks on Azure can continue in that vein. So please, if you think AdventureWorks on Azure is something that is worth supporting please make a contribution. OK, with the prickly subject of begging for cash out of the way let me share the details that you need to connect to [AdventureWorks2012] on SQL Azure:

Server    mhknbn2kdz.database.windows.net
Database  AdventureWorks2012
User      sqlfamily
Password  sqlf@m1ly

That user sqlfamily has all the permissions required to enable you to query away to your heart's content. Here is the code that I used to set it up:

CREATE USER sqlfamily FOR LOGIN sqlfamily;
CREATE ROLE sqlfamilyrole;
EXEC sp_addrolemember 'sqlfamilyrole','sqlfamily';
GRANT VIEW DEFINITION ON Database::AdventureWorks2012 TO sqlfamilyrole;
GRANT VIEW DATABASE STATE ON Database::AdventureWorks2012 TO sqlfamilyrole;
GRANT SHOWPLAN TO sqlfamilyrole;
EXEC sp_addrolemember 'db_datareader','sqlfamilyrole';

You can connect to the database using SQL Server Management Studio (instructions to do that are provided at Walkthrough: Connecting to SQL Azure via the SSMS) or you can use the web interface at https://mhknbn2kdz.database.windows.net. Lastly, just for a bit of fun I created a table up there called [dbo].[SqlFamily] into which you can leave a small calling card.
Simply execute the following SQL statement (changing the values of course): INSERT [dbo].[SqlFamily]([Name],[Message],[TwitterHandle],[BlogURI])VALUES ('Your name here','Some Message','your twitter handle (optional)','Blog URI (optional)'); [Id] is an IDENTITY field and there is a default constraint on [DT] hence there is no need to supply a value for those. Note that you only have INSERT permissions, not UPDATE or DELETE so make sure you get it right first time! Any offensive or distasteful remarks will of course be deleted :) Thank you for reading this far and have fun using AdventureWorks on Azure. I hope it proves to be useful for some of you. @jamiet AdventureWorks on Azure - Provided by the SQL Server community, for the SQL Server community!

    Read the article

  • Do we still have a case against the goto statement? [closed]

    - by FredOverflow
Possible Duplicate: Is it ever worthwhile using goto? In a recent article, Andrew Koenig writes: When asked why goto statements are harmful, most programmers will say something like "because they make programs hard to understand." Press harder, and you may well hear something like "I don't really know, but that's what I was taught." For that reason, I'd like to summarize Dijkstra's arguments. He then shows two program fragments, one without a goto and one with a goto: if (n < 0) n = 0; Assuming that n is a variable of a built-in numeric type, we know that after this code, n is nonnegative. Suppose we rewrite this fragment: if (n >= 0) goto nonneg; n = 0; nonneg: ; In theory, this rewrite should have the same effect as the original. However, rewriting has changed something important: It has opened the possibility of transferring control to nonneg from anywhere else in the program. I emphasized the part that I don't agree with. Modern languages like C++ do not allow goto to transfer control arbitrarily. Here are two examples: You cannot jump to a label that is defined in a different function. You cannot jump over a variable initialization. Now consider composing your code of tiny functions that adhere to the single responsibility principle:

int clamp_to_zero(int n) {
    if (n >= 0) goto n_is_not_negative;
    n = 0;
n_is_not_negative:
    return n;
}

The classic argument against the goto statement is that control could have transferred from anywhere inside your program to the label n_is_not_negative, but this simply is not (and was never) true in C++. If you try it, you will get a compiler error, because labels are scoped. The rest of the program doesn't even see the name n_is_not_negative, so it's just not possible to jump there. This is a static guarantee! Now, I'm not saying that this version is better than the one without the goto, but to make the latter as expressive as the first one, we would at least have to insert a comment, or even better yet, an assertion:

int clamp_to_zero(int n) {
    if (n < 0) n = 0;
    // n is not negative at this point
    assert(n >= 0);
    return n;
}

Note that you basically get the assertion for free in the goto version, because the condition n >= 0 is already written in line 1, and n = 0; satisfies the condition trivially. But that's just a random observation. It seems to me that "don't use gotos!" is one of those dogmas like "don't use multiple returns!" that stem from a time when the real problem was functions of hundreds or even thousands of lines of code. So, do we still have a case against the goto statement, other than that it is not particularly useful? I haven't written a goto in at least a decade, but it's not like I was running away in terror whenever I encountered one.[1] Ideally, I would like to see a strong and valid argument against gotos that still holds when you adhere to established programming principles for clean code like the SRP. "You can jump anywhere" is not (and has never been) a valid argument in C++, and somehow I don't like teaching stuff that is not true.

[1] Also, I have never been able to resurrect even a single velociraptor, no matter how many gotos I tried :(
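To illustrate the two restrictions mentioned above, here is a small C++ sketch of my own (not part of the original question). The first function compiles, while the block guarded by #if 0 is deliberately ill-formed: enabling it makes the compiler reject the jump because it would bypass the initialization of m.

int clamp_to_zero(int n) {
    if (n >= 0) goto n_is_not_negative;  // the label is local to this function
    n = 0;
n_is_not_negative:
    return n;
}

#if 0   // deliberately ill-formed; enable to see the diagnostic
int broken(int n) {
    if (n < 0) goto skip;  // error: the jump would bypass the
    int m = n * 2;         // initialization of m
    return m;
skip:
    return 0;
}
#endif

No code outside clamp_to_zero can even name n_is_not_negative, which is the "labels are scoped" guarantee the question relies on.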

    Read the article

  • SQLAuthority News – Random Thoughts and Random Ideas

    - by pinaldave
    There are days when I keep on wondering about SQL, and even my life overall. Today is Saturday so I decided to write about SQL Server. Just like any other mornings, I woke up at 5 and opened my blog editor. I usually do not open Twitter or Facebook when I am planning to focus on my work, as they are little distractions for me. But today I opened my Twitter account and came across a very interesting quote from a friend: ‘Can I expect you to be different today?’ Well, I think it was very powerful quote for me to read first thing on a new day. This quote froze me for a while and made me think, “Do I really want to write about an SQL Server tip, or something different?”  After a little thinking, I’ve realized that for today I would go on and write something different. I am going to write about a few of the ideas and thoughts I had yesterday. After writing all these, I realized that if I am thinking so much in a day, and if I write a blog post of my random musing of the week or month, it can be so long (and boring). Here are some of my random thoughts I’d like to share with you: When the airplane lands, why does everybody get up and try to rush out when their luggage would be coming probably 20-30 minutes later? I really do not like this question when it was asked to me: “SQL Server is not using optimal index which I just created – how can I force it?” I am not going elaborate on this statement but you are allowed to in the comment section. Why do some people wish Good Morning even when they meet us after 4 PM? Can I optimize a query so much that it gives me a result before I execute it? Is it corruption when someone does their personal household work at office? The lane where I drive is always the slowest lane. Why waste time on correcting others when there are a lot of pending improvements for ourselves? If I have to get Tattoo, which SQL Server Execution Plan symbol should I get? Why do I reach office so early that the coffee machine is yet running its daily cleaning job? Why does every laptop have a ‘Page Up’ key at different locations on the keyboard? While I like color movies, I really appreciate black and white photographs. I do not appreciate statements like, “If I receive your books in PDF, I will spread it to many people to give you much greater exposure. So would you please send them to me ASAP?” Do not tell me, “Why does the database grow back after shrinking it every day?” I suggest you use “Search this blog” for the explanation. Petrol prices are currently at INR 74. I hope the rate remains there. Let me ask you the same question which started my day today:  “Can I expect you to be different today?” Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Some Original Expressions

    - by Phil Factor
    Guest Editorial for Simple-Talk newsletterIn a guest editorial for the Simple-Talk Newsletter, Phil Factor wonders if we are still likely to find some more novel and unexpected ways of using the newer features of Transact SQL: or maybe in some features that have always been there! There can be a great deal of fun to be had in trying out recent features of SQL Expressions to see if  they provide new functionality.  It is surprisingly rare to find things that couldn’t be done before, but in a different   and more cumbersome way; but it is great to experiment or to read of someone else making that discovery.  One such recent feature is the ‘table value constructor’, or ‘VALUES constructor’, that managed to get into SQL Server 2008 from Standard SQL.  This allows you to create derived tables of up to 1000 rows neatly within select statements that consist of  lists of row values.  E.g. SELECT Old_Welsh, number FROM (VALUES ('Un',1),('Dou',2),('Tri',3),('Petuar',4),('Pimp',5),('Chwech',6),('Seith',7),('Wyth',8),('Nau',9),('Dec',10)) AS WelshWordsToTen (Old_Welsh, number) These values can be expressions that return single values, including, surprisingly, subqueries. You can use this device to create views, or in the USING clause of a MERGE statement. Joe Celko covered  this here and here.  It can become extraordinarily handy to use once one gets into the way of thinking in these terms, and I’ve rewritten a lot of routines to use the constructor, but the old way of using UNION can be used the same way, but is a little slower and more long-winded. The use of scalar SQL subqueries as an expression in a VALUES constructor, and then applied to a MERGE, has got me thinking. It looks very clever, but what use could one put it to? I haven’t seen anything yet that couldn’t be done almost as  simply in SQL Server 2000, but I’m hopeful that someone will come up with a way of solving a tricky problem, just in the same way that a freak of the XML syntax forever made the in-line  production of delimited lists from an expression easy, or that a weird XML pirouette could do an elegant  pivot-table rotation. It is in this sort of experimentation where the community of users can make a real contribution. The dissemination of techniques such as the Number, or Tally table, or the unconventional ways that the UPDATE statement can be used, has been rapid due to articles and blogs. However, there is plenty to be done to explore some of the less obvious features of Transact SQL. Even some of the features introduced into SQL Server 2000 are hardly well-known. Certain operations on data are still awkward to perform in Transact SQL, but we mustn’t, I think, be too ready to state that certain things can only be done in the application layer, or using a CLR routine. With the vast array of features in the product, and with the tools that surround it, I feel that there is generally a way of getting tricky things done. Or should we just stick to our lasts and push anything difficult out into procedural code? I’d love to know your views.

    Read the article
