Search Results

Search found 211 results on 9 pages for 'professor frink'.


  • Constructors taking references in C++

    - by sasquatch
    I'm trying to create constructor taking reference to an object. After creating object using reference I need to prints field values of both objects. Then I must delete first object, and once again show values of fields of both objects. My class Person looks like this : class Person { char* name; int age; public: Person(){ int size=0; cout << "Give length of char*" << endl; cin >> size; name = new char[size]; age = 0; } ~Person(){ cout << "Destroying resources" << endl; delete[] name; delete age; } void init(char* n, int a) { name = n; age = a; } }; Here's my implementation (with the use of function show() ). My professor said that if this task is written correctly it will return an error. #include <iostream> using namespace std; class Person { char* name; int age; public: Person(){ int size=0; cout << "Give length of char*" << endl; cin >> size; name = new char[size]; age = 0; } Person(const Person& p){ name = p.name; age = p.age; } ~Person(){ cout << "Destroying resources" << endl; delete[] name; delete age; } void init(char* n, int a) { name = n; age = a; } void show(char* n, int a){ cout << "Name: " << name << "," << "age: " << age << "," << endl; } }; int main(void) { Person *p = new Person; p->init("Mary", 25); p->show(); Person &p = pRef; pRef->name = "Tom"; pRef->age = 18; Person *p2 = new Person(pRef); p->show(); p2->show(); system("PAUSE"); return 0; }
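
    For contrast, here is a minimal, hypothetical sketch (not the assignment's required solution) of a Person whose copy constructor deep-copies the name buffer, so destroying the first object no longer invalidates the second. It also drops the invalid delete on the plain int member age; a complete version would add a copy assignment operator (the rule of three), which is omitted here.

        #include <cstring>
        #include <iostream>

        // Sketch of a deep-copying Person: the shallow copy in the question shares
        // one name buffer between two objects, so the second destructor frees it again.
        class Person {
            char* name;
            int   age;                      // plain int: never passed to delete
        public:
            Person(const char* n, int a) : name(new char[std::strlen(n) + 1]), age(a) {
                std::strcpy(name, n);
            }
            Person(const Person& p) : name(new char[std::strlen(p.name) + 1]), age(p.age) {
                std::strcpy(name, p.name);  // copy the characters, not the pointer
            }
            ~Person() { delete[] name; }    // each object frees only its own buffer
            void show() const {
                std::cout << "Name: " << name << ", age: " << age << '\n';
            }
        };

        int main() {
            Person mary("Mary", 25);
            Person copy(mary);              // deep copy: owns its own buffer
            mary.show();
            copy.show();
        }                                   // both destructors now run safely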

    Read the article

  • Doubly linked lists

    - by user1642677
    I have an assignment that I am terribly lost on involving doubly linked lists (note, we are supposed to create it from scratch, not using built-in API's). The program is supposed to keep track of credit cards basically. My professor wants us to use doubly-linked lists to accomplish this. The problem is, the book does not go into detail on the subject (doesn't even show pseudo code involving doubly linked lists), it merely describes what a doubly linked list is and then talks with pictures and no code in a small paragraph. But anyway, I'm done complaining. I understand perfectly well how to create a node class and how it works. The problem is how do I use the nodes to create the list? Here is what I have so far. public class CardInfo { private String name; private String cardVendor; private String dateOpened; private double lastBalance; private int accountStatus; private final int MAX_NAME_LENGTH = 25; private final int MAX_VENDOR_LENGTH = 15; CardInfo() { } CardInfo(String n, String v, String d, double b, int s) { setName(n); setCardVendor(v); setDateOpened(d); setLastBalance(b); setAccountStatus(s); } public String getName() { return name; } public String getCardVendor() { return cardVendor; } public String getDateOpened() { return dateOpened; } public double getLastBalance() { return lastBalance; } public int getAccountStatus() { return accountStatus; } public void setName(String n) { if (n.length() > MAX_NAME_LENGTH) throw new IllegalArgumentException("Too Many Characters"); else name = n; } public void setCardVendor(String v) { if (v.length() > MAX_VENDOR_LENGTH) throw new IllegalArgumentException("Too Many Characters"); else cardVendor = v; } public void setDateOpened(String d) { dateOpened = d; } public void setLastBalance(double b) { lastBalance = b; } public void setAccountStatus(int s) { accountStatus = s; } public String toString() { return String.format("%-25s %-15s $%-s %-s %-s", name, cardVendor, lastBalance, dateOpened, accountStatus); } } public class CardInfoNode { CardInfo thisCard; CardInfoNode next; CardInfoNode prev; CardInfoNode() { } public void setCardInfo(CardInfo info) { thisCard.setName(info.getName()); thisCard.setCardVendor(info.getCardVendor()); thisCard.setLastBalance(info.getLastBalance()); thisCard.setDateOpened(info.getDateOpened()); thisCard.setAccountStatus(info.getAccountStatus()); } public CardInfo getInfo() { return thisCard; } public void setNext(CardInfoNode node) { next = node; } public void setPrev(CardInfoNode node) { prev = node; } public CardInfoNode getNext() { return next; } public CardInfoNode getPrev() { return prev; } } public class CardList { CardInfoNode head; CardInfoNode current; CardInfoNode tail; CardList() { head = current = tail = null; } public void insertCardInfo(CardInfo info) { if(head == null) { head = new CardInfoNode(); head.setCardInfo(info); head.setNext(tail); tail.setPrev(node) // here lies the problem. tail must be set to something // to make it doubly-linked. but tail is null since it's // and end point of the list. } } } Here is the assignment itself if it helps to clarify what is required and more importantly, the parts I'm not understanding. Thanks https://docs.google.com/open?id=0B3vVwsO0eQRaQlRSZG95eXlPcVE
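
    The sticking point in the snippet above is the empty-list case: when head is null there is no tail node to call setPrev on, so head and tail should simply both point at the new node, and only a non-empty list needs the back-link. Below is a minimal sketch of that branching, written in C++ rather than the question's Java purely for illustration; the Node and CardList shapes here are simplified stand-ins for the assignment's classes, not the required implementation.

        #include <iostream>
        #include <string>

        // Appending to a doubly linked list: the empty case sets both ends to the
        // new node; the non-empty case links the new node behind the current tail.
        struct Node {
            std::string cardName;   // stands in for the CardInfo payload
            Node* prev;
            Node* next;
        };

        struct CardList {
            Node* head = nullptr;
            Node* tail = nullptr;

            void insert(const std::string& name) {
                Node* node = new Node{name, nullptr, nullptr};
                if (head == nullptr) {
                    head = tail = node;     // first node is both head and tail
                } else {
                    node->prev = tail;      // back-link to the old tail
                    tail->next = node;
                    tail = node;
                }
            }
        };

        int main() {
            CardList list;
            list.insert("Visa");
            list.insert("MasterCard");
            for (Node* n = list.head; n != nullptr; n = n->next)
                std::cout << n->cardName << '\n';
        }   // node cleanup omitted for brevity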

    Read the article

  • PASS Summit Feedback

    - by Rob Farley
    PASS Feedback came in last week. I also saw my dentist for some fillings... At the PASS Summit this year, I delivered a couple of regular sessions and a Lightning Talk. People told me they enjoyed it, but when the rankings came out, they showed that I didn’t score particularly well. Brent Ozar was keen to discuss it with me. Brent: PASS speaker feedback is out. You did two sessions and a Lightning Talk. How did you go? Rob: Not so well actually, thanks for asking. Brent: Ha! Sorry. Of course you know that's why I wanted to discuss this with you. I was in one of your sessions at SQLBits in the UK a month before PASS, and I thought you rocked. You've got a really good and distinctive delivery style.  Then I noticed your talks were ranked in the bottom quarter of the Summit ratings and wanted to discuss it. Rob: Yeah, I know. You did ask me if we could do this...  I should explain – my presentation style is not the stereotypical IT conference one. I throw in jokes, and try to engage the audience thoroughly. I find many talks amazingly dry, and I guess I try to buck that trend. I also run training courses, and find that I get a lot of feedback from people thanking me for keeping things interesting. That said, I also get feedback criticising me for my style, and that’s basically what’s happened here. For the rest of this discussion, let’s focus on my talk about the Incredible Shrinking Execution Plan, which I considered to be my main talk. Brent: I thought that session title was the very best one at the entire Summit, and I had it on my recommended sessions list.  In four words, you managed to sum up the topic and your sense of humor.  I read that and immediately thought, "People need to be in this session," and then it didn't score well.  Tell me about your scores. Rob: The questions on the feedback form covered the usefulness of the information, the speaker’s presentation skills, their knowledge of the subject, how well the session was described, the amount of time allocated, and the quality of the presentation materials. Brent: Presentation materials? But you don’t do slides.  Did they rate your thong? Rob: No-one saw my flip-flops in this talk, Brent. I created a script in Management Studio, and published that afterwards, but I think people will have scored that question based on the lack of slides. I wasn’t expecting to do particularly well on that one. That was the only section that didn’t have 5/5 as the most popular score. Brent: See, that sucks, because cookbook-style scripts are often some of my favorites.  Adam Machanic's Service Broker workbench series helped me immensely when I was prepping for the MCM.  As an attendee, I'd rather have a commented script than a slide deck.  So how did you rank so low? Rob: When I look at the scores that you got (based on your blog post), you got very few scores below 3 – people that felt strong enough about your talk to post a negative score. In my scores, between 5% and 10% were below 3 (except on the question about whether I knew my stuff – I guess I came as knowledgeable). Brent: Wow – so quite a few people really didn’t like your talk then? Rob: Yeah. Mind you, based on the comments, some people really loved it. I’d like to think that there would be a certain portion of the room who may have rated the talk as one of the best of the conference. Some of my comments included “amazing!”, “Best presentation so far!”, “Wow, best session yet”, “fantastic” and “Outstanding!”. 
I think lots of talks can be “Great”, but not so many talks can be “Outstanding” without the word losing its meaning. One wrote “Pretty amazing presentation, considering it was completely extemporaneous.” Brent: Extemporaneous, eh? Rob: Yeah. I guess they don’t realise how much preparation goes into coming across as unprepared. In many ways it’s much easier to give a written speech than to deliver a presentation without slides as a prompt. Brent: That delivery style, the really relaxed, casual, college-professor approach was one of the things I really liked about your presentation at SQLbits.  As somebody who presents a lot, I "get" it - I know how hard it is to come off as relaxed and comfortable with your own material.  It's like improv done by jazz players and comedians - if you've never tried it, you don't realize how hard it is.  People also don't realize how hard it is to make a tough subject fun. Rob: Yeah well... There will be people writing comments on this post that say I wasn't trying to make the subject fun, and that I was making it all about me. Sometimes the style works, sometimes it doesn't. Most of the comments mentioned the fact that I tell jokes, some in a nice way, but some not so much (and it wasn't just a PASS thing - that's the mix of feedback I generally get). One comment at PASS was: “great stand up comedian - not what I'm looking for at pass”, and there were certainly a few that said “too many jokes”. I’m not trying to do stand-up – jokes are my way of engaging with the audience while I demonstrate some of the amazing things that the Query Optimizer can do if you write your queries the right way. Some people didn’t think it was technical enough, but I’ve also had some people tell me that the concepts I’m explaining are deep and profound. Brent: To me, that's a hallmark of a great explanation - when someone says, "But of course it has to work that way - how could it work any other way?  It seems so simple and logical."  Well, sure it does when it's explained correctly, but now pick up any number of thick SQL Server books and try to understand the Redundant Joins concept.  I guarantee it'll take more than 45 minutes. Rob: Some people in my audiences realise that, but definitely not everyone. There's only so much you can tell someone that something is profound. Generally it's something that they either have an epiphany on or not. I like to lull my audience into knowing what's going on, and do something that surprises them. Gain their trust, build a rapport, and then show them the deeper truth of what just happened. Brent: So you've learned your lesson about presentation scores, right?  From here on out, you're going to be dry, humorless, and all your presentations will consist of you reading bullet points off the screen. Rob: No Brent, I’m not. I'm also not going to suggest that most presentations at PASS are like that. No-one tries to present like that. There's a big space to occupy between what "dry and humourless" and me. My difference is to focus on the relationship I have with the crowd, rather than focussing on delivering the perfect session. I want to see people smiling and know they're relaxed. I think most presenters focus on the material, which is completely reasonable and safe. I remember once hearing someone talking about product creation. They talked about mediocrity. They said that one of the worst things that people can ever say about your product is that it’s “good”. What you want is for 10% of the world to love it enough to want to buy it. 
If 10% the world gave me a dollar, I’d have more money than I could ever use (assuming it wasn’t the SAME dollar they were giving me I guess). Brent: It's the Raving Fans theory.  It's better to have a small number of raving customers than a large number of almost-but-not-really customers who don't care that much about your product or service.  I know exactly how you feel - when I got survey feedback from my Quest video presentation when I was dressed up in a Richard Simmons costume, some of the attendees said I was unprofessional and distracting.  Some of the attendees couldn't get enough and Photoshopped all kinds of stuff into the screen captures.  On a whole, I probably didn't score that well, and I'm fine with that.  It sucks to look at the scores though - do those lower scores bother you? Rob: Of course they do. It hurts deeply. I open myself up and give presentations in a very personal way. All presenters do that, and we all feel the pain of negative feedback. I hate coming 146th & 162nd out of 185, but have to acknowledge that many sessions did worse still. Plus, once I feel the wounds have healed, I’ll be able to remember that there are people in the world that rave about my presentation style, and figure that people will hopefully talk about me. One day maybe those people that don’t like my presentation style will stay away and I might be able to score better. You don’t pay to hear country music if you prefer western... Lots of people find chili too spicy, but it’s still a popular food. Brent: But don’t you want to appeal to everyone? Rob: I do, but I don’t want to be lukewarm as in Revelation 3:16. I’d rather disgust and be discussed. Well, maybe not ‘disgust’, but I don’t want to conform. Conformity just isn’t the same any more. I’m not sure I’ve ever been one to do that. I try not to offend, but definitely like to be different. Brent: Count me among your raving fans, sir.  Where can we see you next? Rob: Considering I live in Adelaide in Australia, I’m not about to appear at anyone’s local SQL Saturday. I’m still trying to plan which events I’ll get to in 2011. I’ve submitted abstracts for TechEd North America, but won’t hold my breath. I’m also considering the SQLBits conferences in the UK in April, PASS in October, and I’m sure I’ll do some LiveMeeting presentations for user groups. Online, people download some of my recent SQLBits presentations at http://bit.ly/RFSarg and http://bit.ly/Simplification though. And they can download a 5-minute MP3 of my Lightning Talk at http://www.lobsterpot.com.au/files/Collation.mp3, in which I try to explain the idea behind collation, using thongs as an example. Brent: I was in the audience for http://bit.ly/RFSarg. That was a great presentation. Rob: Thanks, Brent. Now where’s my dollar?

    Read the article

  • top tweets WebLogic Partner Community – September 2012

    - by JuergenKress
    Send your tweets @wlscommunity #WebLogicCommunity and follow us at http://twitter.com/wlscommunity Oracle Exalogic? VIDEO: Oracle Public Cloud Built on Exalogic!, http://www.youtube.com/watch?v=UGzjDloUw_s&feature=plcp … oracleopenworld #NetBeans Community Day at #JavaOne http://ow.ly/dunFL Oracle Cloud Zone Building an enterprise Cloud? Have Oracle show you the RIGHT way to plan, deploy and monitor enterprise clouds.... http://fb.me/286978S4S OTNArchBeat? Oracle Exalogic X2-2 walk-through with Brad Cameron | @jvzoggel http://pub.vitrue.com/yE7d Oracle Technet? Stash your cash. September OTN Member Offers - discounts on books, more | OTN Blog http://pub.vitrue.com/yTr9 C2B2 Consulting? C2B2 is Speaking at @UKOUG App Server Middleware SIG Meeting 'Real Life #WebLogic Performance Tuning' http://www.c2b2.co.uk/ukoug_application_server_middleware_sig_meeting … @wlscommunity JAXenter.com? From yesterday, @smeyen offers his views on the next generation #Java - do you agree? http://jaxenter.com/next-gen-java-we-don-t-need-another-revolutionary-44334.html … Markus Eisele? Awesome: professor from ITU uploads her programming lectures to #YouTube. Programming classes without having to pay! http://bit.ly/UtkJIW Adam Bien? Real World Java EE 6 Patterns--Rethinking Best Practices Reloaded: A completely rewritten, second, iteration of ... http://bit.ly/Qc3xTH Markus Eisele [blog] #PrimeFaces Push with #Atmosphere on #GlassFish 3.1.2.2 http://goo.gl/fb/jPDzA Lucas Jellema? Forms community event at Oracle Open World - Tuesday, 2nd Oct with the BIG names in Forms - see: http://oracleformsinfo.wordpress.com/2012/08/28/ask-the-product-manager-join-us-at-the-oracle-forms-community-event-at-openworld-2012/ … WebLogic Community WebLogic & Coherence & Cloud presentations for customer meetings http://wp.me/p1LMIb-kw Adam Bien? New Book: Rethinking Best Practices with Java EE 6 is out: http://realworldpatterns.com (fully rewritten, re-edited and reformatted) WebLogic Community? Want to become and WebLogic 12c expert? free WebLogic 12c partner bootcamps &ndash;new locations: Madrid Spain http://wp.me/p1LMIb-kK WebLogic Community? Promote Your WebLogic events at http://oracle.com http://wp.me/p1LMIb-ku OracleBlogs Gartner review Oracle ADF http://ow.ly/1mgkCV Simon Haslam Next #ukoug App Server & MW SIG on 10 Oct: http://www.ukoug.org/events/ukoug-application-server-and-middleware-sig-meeting8/ … Hopefully plenty of good admin stuff! Michel Schildmeijer My book "WebLogic 12c; First look" has been reviewed again..see http://www.amazon.com/review/R28L6E3CC9RPMK/ref=cm_cr_pr_perm?ie=UTF8&ASIN=1849687188&linkCode=&nodeID=&tag= … … Markus Eisele? #Weblogic 11g Interactive Quick Reference Map: http://bit.ly/Ugsq52 #wls #oracle #reference /via @TonyvanEsch Marc? Playing with #syslog server and #weblogic. Is there a simple how-to to configure all the logging from #WLS to #syslog-ng WebLogic Community Java update http://wp.me/p1LMIb-kI WebLogic Community top tweets WebLogic Partner Community &ndash; August 2012 http://wp.me/p1LMIb-kA Andrejus Baranovskis? Oracle University Training: ADF/WebCenter 11g Development in Depth | Andrejus Baranovskis http://fb.me/253ZTS2zp OracleSupport_WLS? How neat is a free tool that allows you to inspect and debug traffic from virtually any application? 
http://pub.vitrue.com/vXdP WebLogic Community WebLogic Partner Community Newsletter August 2012 http://wp.me/p1LMIb-kn OTNArchBeat Integrating Coherence & Java EE 6 Applications using ActiveCache | Ricardo Ferreira http://pub.vitrue.com/rwGg Adam Bien? Thanks for attending the #javaee #techtalk "Enterprise Java 2.0" I pushed the project and slides to: http://kenai.com/projects/javaee-patterns/sources/hg/show/hacks/techtalk2012?rev=429 … JDeveloper & ADF? How to service-enable Oracle ADF Business Components http://ht.ly/1mcfsZ OracleSupport_WLS? Do you know that #WebLogic 12.1.1.0 is certified for production with JDK 7? @ http://pub.vitrue.com/35Kn Andreas Koop? My latest upload : WebLogic Administration und Deployment mit WLST on @slideshare http://www.slideshare.net/multikoop/weblogic-administration-und-deployment-mit-wlst … OTNArchBeat? Demo for OPN: Oracle Coherence Management with EM Cloud Control 12c http://pub.vitrue.com/reoo Markus Eisele? [blog] Java Champions at #JavaOne 2012 http://goo.gl/fb/Ibb6N #javachampion OracleBlogs Buy This Book!: Oracle Exalogic Elastic Cloud Handbook http://ow.ly/1malM1 WebLogic Community? Coherence Management with EM Cloud Control 12c &ndash;demo for partners http://wp.me/p1LMIb-iE Arun Gupta? Learn how Java can help Internet of Things at Java Embedded at JavaOne: http://bit.ly/POBizh WebLogic Community? Follow WebLogicCommunity on facebook http://www.facebook.com/WebLogicCommunity … #WebLogicCommunity WebLogic Community? Building Java EE in the Cloud–Webcast August 30th 2012 https://weblogiccommunity.wordpress.com/2012/08/27/building-java-ee-in-the-cloudwebcast-august-30th-2012/ … #WebLogicCommunity #Java #oracle #opn WebLogic Community? Call for WebLogic Community newsletter content. Please send @wlscommunity #WebLogicCommunity OracleSupport_WLS? The #weblogic wasp: lots of tips, Q&A and examples http://pub.vitrue.com/v0bw Frank Nimphius? Free ADF Best Practices Webinar by Andrejus Baranovskis for ODTUG (18, 2012 12:00 PM - 1:00 PM EDT) http://bit.ly/OiSWbi ADF Code Corner Webcast- Friday September 14, 8:30 AM - 9.00 AM (CET) - ADF as a basis of Fusion Apps (in English) - with Chris Muir: http://bit.ly/OiQVMb Oracle WebLogic? New blog post: Developing Custom User Principal Object http://pub.vitrue.com/ltam JAX London? Just 4days left to get in on the early bird special, don't miss out!! http://jaxlondon.com/ #JAXLondon #Java WebLogic Community Building Java EE in the Cloud&ndash;Webcast August 30th 2012 http://wp.me/p1LMIb-kE Andrejus Baranovskis? New Record Master-Detail Validation and ADF BC Groovy Use Case http://fb.me/1D2NEIl8g JAX London? Don't miss out!!! Only 6 days left to make use of our early bird offer #JAXLondon #JAVA http://jaxlondon.com/ Michel Schildmeijer Qualogy launches Proof of Concept Center for Oracle Fusion Applications http://www.qualogy.com/qualogy-launches-proof-of-concept-center-for-oracle-fusion-applications/ … via @Qualogy_news OracleSupport_WLS ?Need to troubleshoot redeployment failure in #Weblogic? Check this http://pub.vitrue.com/auhz OracleEnterpriseMgr? Blog : Managing Oracle #Exalogic Elastic #Cloud with Oracle Enterprise Manager Ops Center http://ow.ly/dd40e #em12c ODTUG? Want free advanced technical ADF training?Join @andrejusb for an @odtug webinar! check out his blog for more info http://bit.ly/SvKJDq chriscmuir Oracle Open World 2012 and ADF EMG http://zite.to/QyusZE OTNArchBeat? Boost your infrastructure with Coherence into the Cloud | Nino Guarnacci http://pub.vitrue.com/v3aJ WebLogic Community? 
Presentations & Training material OFM Summer Camps & Impressions & Feedback http://wp.me/p1LMIb-ks Arun Gupta? Java EE 6 pocket guide by O'Reilly available for pre-order from Amazon: http://amzn.to/O6YyoP and B&N: http://bit.ly/NjWLk1 OTNArchBeat Joining the Existing Cluster in Coherence | A. Fuat Sungur http://pub.vitrue.com/6uLh Andrejus Baranovskis Sample Application for Switching Application Module Data Sources http://fb.me/1PSURUzch OTNArchBeat Oracle WebLogic DevCast: Building Java EE in the Cloud - August 30 - 10am PT/ 1pm ET http://pub.vitrue.com/xXg0 OTNArchBeat? GlassFish Community Event at #javaone - Sept 30 -11am – 1pm -Moscone South. Register Now! http://pub.vitrue.com/p2f5 OracleSupport_WLS? Connecting To HTTPS Site Using Simple Java Program When Using Proxy http://pub.vitrue.com/stVv Michel Schildmeijer? Before you go to #OOW take the sneak preview of WebLogic 12c with you: http://www.qualogy.com/ga-nog-niet-naar-oow-en-neem-mee-weblogic-12c/ … via @Qualogy_news Simon Haslam? Even more great ADF content at #oow2012 this year including a packed ADF EMG day on Sunday: https://blogs.oracle.com/onesizedoesntfitall/entry/the_year_after_the_year … OracleBlogs ExaLogic trainings for partners http://ow.ly/1m6a5D Robin? First presentation on DOAG conference (thanks to @Steffen2042) "Weblogic Server for Dummies". Now I´m pretty excited :) http://www.doag.org/de/events/konferenzen/doag-2012.html … Markus Eisele There is a #facebook page for the upcoming #Java Mission Control (JRockit Mission Control for #Hotspot)! ttp://on.fb.me/Q31oyA Adam Bien? The almost free #javaee workshop in Rapperswil has only 60 registrations so far: http://www.adam-bien.com/roller/abien/entry/enterprise_java_2_0_swiss … What's the problem? :-) WebLogic Community ExaLogic trainings for partners http://wp.me/p1LMIb-iC OracleBlogs How to install Oracle Weblogic Server using Generic Package installer? http://ow.ly/1m5ms7 OracleSupport_WLS #Weblogic Server new blog post - Developing Custom User Principal Object http://pub.vitrue.com/ltam OracleBlogs? Architects and Architecture at JavaOne 2012 http://ow.ly/1m4oS5 WebLogic Community Are you WebLogic or Application Grid Specialized? Do you get Recognized? Get your plaque https://weblogiccommunity.wordpress.com/2012/08/21/plaques-weblogic-application-grid-specialization/ … #WebLogicCommunity #opn WebLogic Community? Plaques WebLogic & Application Grid Specialization http://wp.me/p1LMIb-iA JDeveloper & ADF? First Steps With Oracle Application Testing Suite: Recording a Test With OpenScript http://dlvr.it/222npy WebLogic Partner Community For regular information become a member in the WebLogic Partner Community please visit: http://www.oracle.com/partners/goto/wls-emea ( OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: twitter,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • CodePlex Daily Summary for Sunday, July 14, 2013

    CodePlex Daily Summary for Sunday, July 14, 2013Popular ReleasesVidCoder: 1.4.23: New in 1.4.23 Added French translation. Fixed non-x264 video encoders not sticking in video tab. New in 1.4 Updated HandBrake core to 0.9.9 Blu-ray subtitle (PGS) support Additional framerates: 30, 50, 59.94, 60 Additional sample rates: 8, 11.025, 12 and 16 kHz Additional higher bitrates for audio Same as Source Constant Framerate 24-bit FLAC encoding Added Windows Phone 8 and Apple TV 3 presets Introduced process isolation for encodes. Now if HandBrake crashes, VidCoder will ...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.96: Fix for issue #19957: EXE should output the name of the file(s) being minified. Discussion #449181: throw a Sev-2 warning when trailing commas are detected on an Array literal. Perfectly legal to do so, but the behavior ends up working differently on different browsers, so throw a cross-browser warning. Add a few more known global names for improved ES6 compatibility update Nuget package to version 2.5 and automatically add the AjaxMin.targets to your project when you update the package...JSLint.NET: JSLint.NET 0.9.1 Beta: Version 0.9.1 Beta includes: JSLint.NET Console: Console help available using the /? switch (or no arguments). See the Console Options page for more. JSLint.NET for Visual Studio: Support linked JSLintNet.json file. More about this file on the JSLint.NET Settings page. Hide un-implemented menu items. Prefix JSLint errors in the task list. JSLint.NET Core: Allow ignoring of individual files in JSLintNet.json.TypePipe: 1.15.2.0 (.NET 4.5): This is build 1.15.2.0 of the TypePipe for .NET 4.5. Find the complete release notes for the build here: Release Notes.re-linq: 1.15.2.0 (.NET 4.5): This is build 1.15.2.0 of re-linq for .NET 4.5. Find the complete release notes for the build here: Release Notes To use re-linq with .NET 3.5, use a 1.13.x build.Columbus Remote Desktop: 2.0 Sapphire: Added configuration settings Added update notifications Added ability to disable GPU acceleration Fixed connection bugsSearch for Team Foundation Server workitems changes: Release 1.2: - Issue 1184 fixed, - Changeset's comboboxes sorted by Id (From : Ascending - To : Descending) - Application window iconImpulse Media Player: Impulse Media Player 3.5.0.1: Fixed a crash that occurs when copying data from lastfm to file panelPhoneGuitarTab: Release 1.1: Improved UX. Simplified navigation. More performance improvements coming soon.The GLMET Project: Get OS Version: --DataDevelop: Beta 0.6.5: Hotfix bug in Python Table.ImportAll method Updated External Libraries Fixes in Excel Exportation Modify ConnectionString refreshes the Properties Window correctlyUser Group Labs: User Group Data: 01.00.00: This release has the following updates and new features: Initial release with a minimal feature set Easy to use (just add to the social group details page) Edit common user group properties System Requirements DNN v07.00.02 or newer .Net Framework v4.0 or newerCarrotCake, an ASP.Net WebForms CMS: Binaries and PDFs - Zip Archive (v. 4.3 20130709): Product documentation and additional templates for this version is included in the zip archive, or if you want individual files, visit the http://www.carrotware.com website. Templates, in addition to those found in the download, can be downloaded individually from the website as well. If you are coming from earlier versions, make a precautionary backup of your existing website files and database. 
When installing the update, the database update engine will create the new schema items (if you...Dalmatian Build Script: Dalmatian Build 0.1.3.0: -Minor bug fixes -Added Choose<T> and ChooseYesNo to Console objectPushover.NET: Pushover.NET - Stable Release 10 July 2013: This is the first stable release of Pushover.NET. It includes 14 overloads of the SendNotification method, giving you total flexibility of sending Pushover notifications from your client. Assembly is built to .NET 2.x so it can be called from .NET 2.x, 3.x and 4.x projects. Also available is the Test Harness. This is a small GUI that you can use to test calls to Pushover.NET's main DLL. It's almost fully functional--the sound effects haven't been fully configured so no matter what you pick ...MCEBuddy 2.x: MCEBuddy 2.3.14: 2.3.14 BETA is available through the Early Access Program.Click here https://mcebuddy2x.codeplex.com/discussions/439439 for details and to get access to Early Access Program to download latest releases. Changelog for 2.3.14 (32bit and 64bit) NEW FEATURES: 1. ENHANCEMENTS: 2. Improved eMail notifications 3. Improved metrics details 4. Support for larger history (INI) file (about 45,000 sections, each section can have about 1500 entries) BUG FIXES: 5. Fix for extracting Movie release year from...Azure Depot: Flask: Flask Version 01LINQ to Twitter: LINQ to Twitter v2.1.07: Supports .NET 3.5, .NET 4.0, .NET 4.5, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, Windows 8, and Windows Azure. 100% Twitter API coverage. Also supports Twitter API v1.1! Also on NuGet.DotNetNuke® Community Edition CMS: 06.02.08: Major Highlights Fixed issue where the application throws an Unhandled Error and an HTTP Response Code of 200 when the connection to the database is lost. Security FixesNone Updated Modules/Providers ModulesNone ProvidersNoneModern UI for WPF: Modern UI 1.0.5: The ModernUI assembly including a demo app demonstrating the various features of Modern UI for WPF. BREAKING CHANGE This version is backwards incompatible. ModernDialog.ShowMessage returns MessageBoxResult instead of bool? Related downloads NuGet ModernUI for WPF is also available as NuGet package in the NuGet gallery, id: ModernUI.WPF Download Modern UI for WPF Templates A Visual Studio 2012 extension containing a collection of project and item templates for Modern UI for WPF. The extensi...New ProjectsA Domain-Driven Design Framework for .Net: A .Net framework library for applying the domain-driven design approach to develop business software.a Linq to Workitem provider: Wilinq is a linq to workitem provider. It also contains WIQL to expression tree parser. 
Wilinq is based on the the fissum project source codeApprentice for WP: Apprentice for WPArgo New Deal: Data Type DBL DAL UI ToolsC# Practice: C# PracticeDardemEvo: summaryDavid.A.Zhang: Personal class LibrayEnglish Practice Helper: English Practice Helper is a C# window form application for everyone want to practice writing,speaking,listening and reading skill with your OWN computerFinancialManagement: FinancialManagementGoAgent GUI: GoAgent??????。GoAgent: https://code.google.com/p/goagentIndustrial Programming: Industrial Programming approaches tips (it's old and in russian language)ISS.IR.RRN-MS: Summary Tany :PLifeDataManager: Web project to manage some dataMixERP - ERP Solution That Sucks Less: A humble ERP solution that does not scare the users, MixERP is a purely mult-establishment and multi-currency solution.Nokia Portal: Install Nokia, HTC and LG apps on any WP8 devicePenn State SWENG 581 Team 5 Su13.2: This project is an academic extension of the NClass project found at http://nclass.sourceforge.net for the purposes of software testing and quality assurance.Pomp: testProfessor Oak's Pokemon Library DotNet: The Professor Oak's Pokemon Library is a .NET class library that aims to help programmers, by providing different tools to modify the game memory.Pure Music Player: Pure Music PlayerRandomly Balanced Trees: C# Implementations of Treap and Skiplist data structures. Which are representations of randomly balanced binary trees.ReoScript: JavaScript-like script language engine for .NET application. Easy to plug in .NET program and make API extension for script. SQL Queries: This is for all developers help.SqlSetup: This project create SQL server database automatically. Truco Pythons: Truco Argentino (Argentine truc), is a card game developed in python by Argentine programmers of the UNGS (General Sarmiento National University). WebServer .NET: Projekt zawiera oprogramowanie i zestaw narzedzi do zarzadzania serwerem http. Posiada wiele funkcjonalnosci ulatwiajacych korzystanie i konfigurowanie serwera.workspaces: solr exampleWP8NativeAccess: Win32 API wrappers for Windows Phone 8. Intended to be used in WP8 WinRT apps. Includes FileSystem project.

    Read the article

  • Masters vs. PhD - long [closed]

    - by Sterling
    I'm 21 years old and a first year master's computer science student. Whether or not to continue with my PhD has been plaguing me for the past few months. I can't stop thinking about it and am extremely torn on the issue. I have read http://www.cs.unc.edu/~azuma/hitch4.html and many, many other masters vs phd articles on the web. Unfortunately, I have not yet come to a conclusion. I was hoping that I could post my ideas about the issue on here in hopes to 1) get some extra insight on the issue and 2) make sure that I am correct in my assumptions. Hopefully having people who have experience in the respective fields can tell me if I am wrong so I don't make my decision based on false ideas. Okay, to get this topic out of the way - money. Money isn't the most important thing to me, but it is still important. It's always been a goal of mine to make 6 figures, but I realize that will probably take me a long time with either path. According to most online salary calculating sites, the average starting salary for a software engineer is ~60-70k. The PhD program here is 5 years, so that's about 300k I am missing out on by not going into the workforce with a masters. I have only ever had ~1k at one time in my life so 300k is something I can't even really accurately imagine. I know that I wouldn't have at once obviously, but just to know I would be earning that is kinda crazy to me. I feel like I would be living quite comfortably by the time I'm 30 years old (but risk being too content too soon). I would definitely love to have at least a few years of my 20s to spend with that kind of money before I have a family to spend it all on. I haven't grown up very financially stable so it would be so nice to just spend some money…get a nice car, buy a new guitar or two, eat some good food, and just be financially comfortable. I have always felt like I deserved to make good money in my life, even as a kid growing up, and I just want to have it be a reality. I know that either path I take will make good money by the time I'm ~40-45 years old, but I guess I'm just sick of not making money and am getting impatient about it. However, a big idea pushing me towards a PhD is that I feel the masters path would give me a feeling of selling out if I have the capability to solve real questions in the computer science world. (pretty straight-forward - not much to elaborate on, but this is a big deal) Now onto other aspects of the decision. I originally got into computer science because of programming. I started in high school and knew very soon that it was what I wanted to do for a career. I feel like getting a masters and being a software engineer in the industry gives me much more time to program in my career. In research, I feel like I would spend more time reading, writing, trying to get grant money, etc than I would coding. A guy I work with in the lab just recently published a paper. He showed it to me and I was shocked by it. The first two pages was littered with equations and formulas. Then the next page or so was followed by more equations and formulas that he derived from the previous ones. That was his work - breaking down and creating all of these formulas for robotic arm movement. And whenever I read computer science papers, they all seem to follow this pattern. I always pictured myself coding all day long…not proving equations and things of that nature. I know that's only one part of computer science research, but that part bores me. 
A couple cons on each side - Phd - I don't really enjoy writing or feel like I'm that great at technical writing. Whenever I'm in groups to make something, I'm always the one who does the large majority of the work and then give it to my team members to write up a report. Presenting is different though - I don't mind presenting at all as long as I have a good grasp on what I am presenting. But writing papers seems like such a chore to me. And because of this, the "publish or perish" phrase really turns me off from research. Another bad thing - I feel like if I am doing research, most of it would be done alone. I work best in small groups. I like to have at least one person to bounce ideas off of when I am brainstorming. The idea of being a part of some small elite group to build things sounds ideal to me. So being able to work in small groups for the majority of my career is a definite plus. I don't feel like I can get this doing research. Masters - I read a lot online that most people come in as engineers and eventually move into management positions. As of now, I don't see myself wanting to be a part of management. Lets say my company wanted to make some new product or system - I would get much more pride, enjoyment, and overall satisfaction to say "I made this" rather than "I managed a group of people that made this." I want to be a big part of the development process. I want to make things. I think it would be great to be more specialized than other people. I would rather know everything about something than something about everything. I always have been that way - was a great pitcher during my baseball years, but not so good at everything else, great at certain classes in school, but not so good at others, etc. To think that my career would be the same way sounds okay to me. Getting a PhD would point me in this direction. It would be great to be some guy who is someone that people look towards and come to ask for help because of being such an important contributor to a very specific field, such as artificial neural networks or robotic haptic perception. From what I gather about the software industry, being specialized can be a very bad thing because of the speed of the new technology. I When it comes to being employed, I have pretty conservative views. I don't want to change companies every 5 years. Maybe this is something everyone wishes, but I would love to just be an important person in one company for 10+ (maybe 20-25+ if I'm lucky!) years if the working conditions were acceptable. I feel like that is more possible as a PhD though, being a professor or researcher. The more I read about people in the software industry, the more it seems like most software engineers bounce from company to company at rapid paces. Some even work like a hired gun from project to project which is NOT what I want AT ALL. But finding a place to make great and important software would be great if that actually happens in the real world. I'm a very competitive person. I thrive on competition. I don't really know why, but I have always been that way even as a kid growing up. Competition always gave me a reason to practice that little extra every night, always push my limits, etc. It seems to me like there is no competition in the research world. It seems like everyone is very relaxed as long as research is being conducted. The only competition is if someone is researching the same thing as you and its whoever can finish and publish first (but everyone seems to careful to check that circumstance). 
The only noticeable competition to me is just with yourself and your own discipline. I like the idea that in the industry, there is real competition between companies to put out the best product or be put out of business. I feel like this would constantly be pushing me to be better at what I do. One thing that is really pushing me towards a PhD is the lifetime of the things you make. I feel like if you make something truly innovative in the industry…just some really great new application or system…there is a shelf-life of about 5-10 years before someone just does it faster and more efficiently. But with research work, you could create an idea or algorithm that last decades. For instance, the A* search algorithm was described in 1968 and is still widely used today. That is amazing to me. In the words of Palahniuk, "The goal isn't to live forever, its to create something that will." Over anything, I just want to do something that matters. I want my work to help and progress society. Seriously, if I'm stuck programming GUIs for the next 40 years…I might shoot myself in the face. But then again, I hate the idea that less than 1% of the population will come into contact with my work and even less understand its importance. So if anything I have said is false then please inform me. If you think I come off as a masters or PhD, inform me. If you want to give me some extra insight or add on to any point I made, please do. Thank you so much to anyone for any help.

    Read the article

  • Down Tools Week Cometh: Kissing Goodbye to CVs/Resumes and Cover Letters

    - by Bart Read
    I haven't blogged about what I'm doing in my (not so new) temporary role as Red Gate's technical recruiter, mostly because it's been routine, business as usual stuff, and because I've been trying to understand the role by doing it. I think now though the time has come to get a little more radical, so I'm going to tell you why I want to largely eliminate CVs/resumes and cover letters from the application process for some of our technical roles, and why I think that might be a good thing for candidates (and for us). I have a terrible confession to make, or at least it's a terrible confession for a recruiter: I don't really like CV sifting, or reading cover letters, and, unless I've misread the mood around here, neither does anybody else. It's dull, it's time-consuming, and it's somewhat soul destroying because, when all is said and done, you're being paid to be incredibly judgemental about people based on relatively little information. I feel like I've dirtied myself by saying that - I mean, after all, it's a core part of my job - but it sucks, it really does. (And, of course, the truth is I'm still a software engineer at heart, and I'm always looking for ways to do things better.) On the flip side, I've never met anyone who likes writing their CV. It takes hours and hours of faffing around and massaging it into shape, and the whole process is beset by a gnawing anxiety, frustration, and insecurity. All you really want is a chance to demonstrate your skills - not just talk about them - and how do you do that in a CV or cover letter? Often the best candidates will include samples of their work (a portfolio, screenshots, links to websites, product downloads, etc.), but sometimes this isn't possible, or may not be appropriate, or you just don't think you're allowed because of what your school/university careers service has told you (more commonly an issue with grads, obviously). And what are we actually trying to find out about people with all of this? I think the common criteria are actually pretty basic: Smart Gets things done (thanks for these two Joel) Not an a55hole* (sorry, have to get around Simple Talk's swear filter - and thanks to Professor Robert I. Sutton for this one) *Of course, everyone has off days, and I don't honestly think we're too worried about somebody being a bit grumpy every now and again. We can do a bit better than this in the context of the roles I'm talking about: we can be more specific about what "gets things done" means, at least in part. For software engineers and interns, the non-exhaustive meaning of "gets things done" is: Excellent coder For test engineers, the non-exhaustive meaning of "gets things done" is: Good at finding problems in software Competent coder Team player, etc., to me, are covered by "not an a55hole". I don't expect people to be the life and soul of the party, or a wild extrovert - that's not what team player means, and it's not what "not an a55hole" means. Some of our best technical staff are quiet, introverted types, but they're still pleasant to work with. My problem is that I don't think the initial sift really helps us find out whether people are smart and get things done with any great efficacy. It's better than nothing, for sure, but it's not as good as it could be. It's also contentious, and potentially unfair/inequitable - if you want to get an idea of what I mean by this, check out the background information section at the bottom. 
Before I go any further, let's look at the Red Gate recruitment process for technical staff* as it stands now: (LOTS of) People apply for jobs. All these applications go through a brutal process of manual sifting, which eliminates between 75 and 90% of them, depending upon the role, and the time of year**. Depending upon the role, those who pass the sift will be sent an assessment or telescreened. For the purposes of this blog post I'm only interested in those that are sent some sort of programming assessment, or bug hunt. This means software engineers, test engineers, and software interns, which are the roles for which I receive the most applications. The telescreen tends to be reserved for project or product managers. Those that pass the assessment are invited in for first interview. This interview is mostly about assessing their technical skills***, although we're obviously on the look out for cultural fit red flags as well. If the first interview goes well we'll invite candidates back for a second interview. This is where team/cultural fit is really scoped out. We also use this interview to dive more deeply into certain areas of their skillset, and explore any concerns that may have come out of the first interview (these obviously won't have been serious or obvious enough to cause a rejection at that point, but are things we do need to look into before we'd consider making an offer). We might subsequently invite them in for lunch before we make them an offer. This tends to happen when we're recruiting somebody for a specific team and we'd like them to meet all the people they'll be working with directly. It's not an interview per se, but can prove pivotal if they don't gel with the team. Anyone who's made it this far will receive an offer from us. *We have a slightly quirky definition of "technical staff" as it relates to the technical recruiter role here. It includes software engineers, test engineers, software interns, user experience specialists, technical authors, project managers, product managers, and development managers, but does not include product support or information systems roles. **For example, the quality of graduate applicants overall noticeably drops as the academic year wears on, which is not to say that by now there aren't still stars in there, just that they're fewer and further between. ***Some organisations prefer to assess for team fit first, but I think assessing technical skills is a more effective initial filter - if they're the nicest person in the world, but can't cut a line of code they're not going to work out. Now, as I suggested in the title, Red Gate's Down Tools Week is upon us once again - next week in fact - and I had proposed as a project that we refactor and automate the first stage of marking our programming assessments. Marking assessments, and in fact organising the marking of them, is a somewhat time-consuming process, and we receive many assessment solutions that just don't make the cut, for whatever reason. Whilst I don't think it's possible to fully automate marking, I do think it ought to be possible to run a suite of automated tests over each candidate's solution to see whether or not it behaves correctly and, if it does, move on to a manual stage where we examine the code for structure, decomposition, style, readability, maintainability, etc. Obviously it's possible to use tools to generate potentially helpful metrics for some of these indices as well. 
This would obviously reduce the marking workload, and would provide candidates with quicker feedback about whether they've been successful - though I do wonder if waiting a tactful interval before sending a (nicely written) rejection might be wise. I duly scrawled out a picture of my ideal process, which looked like this: The problem is, as soon as I'd roughed it out, I realised that fundamentally it wasn't an ideal process at all, which explained the gnawing feeling of cognitive dissonance I'd been wrestling with all week, whilst I'd been trying to find time to do this. Here's what I mean. Automated assessment marking, and the associated infrastructure around that, makes it much easier for us to deal with large numbers of assessments. This means we can be much more permissive about who we send assessments out to or, in other words, we can give more candidates the opportunity to really demonstrate their skills to us. And this leads to a question: why not give everyone the opportunity to demonstrate their skills, to show that they're smart and can get things done? (Two or three of us even discussed this in the down tools week hustings earlier this week.) And isn't this a lot simpler than the alternative we'd been considering? (FYI, this was automated CV/cover letter sifting by some form of textual analysis to ideally eliminate the worst 50% or so of applications based on an analysis of the 20,000 or so historical applications we've received since 2007 - definitely not the basic keyword analysis beloved of recruitment agencies, since this would eliminate hardly anyone who was awful, but definitely would eliminate stellar Oxbridge candidates - #fail - or some nightmarishly complex Google-like system where we profile all our currently employees, only to realise that we're never going to get representative results because we don't have a statistically significant sample size in any given role - also #fail.) No, I think the new way is better. We let people self-select. We make them the masters (or mistresses) of their own destiny. We give applicants the power - we put their fate in their hands - by giving them the chance to demonstrate their skills, which is what they really want anyway, instead of requiring that they spend hours and hours creating a CV and cover letter that I'm going to evaluate for suitability, and make a value judgement about, in approximately 1 minute (give or take). It doesn't matter what university you attended, it doesn't matter if you had a bad year when you took your A-levels - here's your chance to shine, so take it and run with it. (As a side benefit, we cut the number of applications we have to sift by something like two thirds.) WIN! OK, yeah, sounds good, but will it actually work? That's an excellent question. My gut feeling is yes, and I'll justify why below (and hopefully have gone some way towards doing that above as well), but what I'm proposing here is really that we run an experiment for a period of time - probably a couple of months or so - and measure the outcomes we see: How many people apply? (Wouldn't be surprised or alarmed to see this cut by a factor of ten.) How many of them submit a good assessment? (More/less than at present?) How much overhead is there for us in dealing with these assessments compared to now? What are the success and failure rates at each interview stage compared to now? How many people are we hiring at the end of it compared to now? 
I think it'll work because I hypothesize that, amongst other things: It self-selects for people who really want to work at Red Gate which, at the moment, is something I have to try and assess based on their CV and cover letter - but if you're not that bothered about working here, why would you complete the assessment? Candidates who would submit a shoddy application probably won't feel motivated to do the assessment. Candidates who would demonstrate good attention to detail in their CV/cover letter will demonstrate good attention to detail in the assessment. In general, only the better candidates will complete and submit the assessment. Marking assessments is much less work so we'll be able to deal with any increase that we see (hopefully we will see). There are obviously other questions as well: Is plagiarism going to be a problem? Is there any way we can detect/discourage potential plagiarism? How do we assess candidates' education and experience? What about their ability to communicate in writing? Do we still want them to submit a CV afterwards if they pass assessment? Do we want to offer them the opportunity to tell us a bit about why they'd like the job when they submit their assessment? How does this affect our relationship with recruitment agencies we might use to hire for these roles? So, what's the objective for next week's Down Tools Week? Pretty simple really - we want to implement this process for the Graduate Software Engineer and Software Engineer positions that you can find on our website. I will be joined by a crack team of our best developers (Kevin Boyle, and new Red-Gater, Sam Blackburn), and recruiting hostess with the mostest Laura McQuillen, and hopefully a couple of others as well - if I can successfully twist more arms before Monday.* Hopefully by next Friday our experiment will be up and running, and we may have changed the way Red Gate recruits software engineers for good! Stay tuned and we'll let you know how it goes! *I'm going to play dirty by offering them beer and chocolate during meetings. Some background information: how agonising over the initial CV/cover letter sift helped lead us to bin it off entirely The other day I was agonising about the new university/good degree grade versus poor A-level results issue, and decided to canvas for other opinions to see if there was something I could do that was fairer than my current approach, which is almost always to reject. This generated quite an involved discussion on our Yammer site: I'm sure you can glean a pretty good impression of my own educational prejudices from that discussion as well, although I'm very open to changing my opinion - hopefully you've already figured that out from reading the rest of this post. Hopefully you can also trace a logical path from agonising about sifting to, "Uh, hang on, why on earth are we doing this anyway?!?" Technorati Tags: recruitment,hr,developers,testers,red gate,cv,resume,cover letter,assessment,sea change

    Read the article

  • Reading XML document in Firefox

    - by Searock
    I am trying to read customers.xml using javascript. My professor has taught us to read xml using `ActiveXObjectand he has given us an assignment to create a sample login page which checks username and password by reading customers.xml. I am trying to use DOMParser so that it works with firefox. But when I click on Login button I get this error. Error: syntax error Source File: file:///C:/Users/Searock/Desktop/home/project/project/login.html Line: 1, Column: 1 Source Code: customers.xml Here's my code. login.js var xmlDoc = 0; function checkUser() { var user = document.login.txtLogin.value; var pass = document.login.txtPass.value; //xmlDoc = new ActiveXObject("Microsoft.XMLDOM"); /* xmlDoc = document.implementation.createDocument("","",null); xmlDoc.async = "false"; xmlDoc.onreadystatechange = redirectUser; xmlDoc.load("customers.xml"); */ var parser = new DOMParser(); xmlDoc = parser.parseFromString("customers.xml", "text/xml"); alert(xmlDoc.documentElement.nodeName); xmlDoc.async = "false"; xmlDoc.onreadystatechange = redirectUser; } function redirectUser() { alert(''); var user = document.login.txtLogin.value; var pass = document.login.txtPass.value; var log = 0; if(xmlDoc.readyState == 4) { xmlObj = xmlDoc.documentElement; var len = xmlObj.childNodes.length; for(i = 0; i < len; i++) { var nodeElement = xmlObj.childNodes[i]; var userXml = nodeElement.childNodes[0].firstChild.nodeValue; var passXml = nodeElement.childNodes[1].firstChild.nodeValue; var idXML = nodeElement.attributes[0].value if(userXml == user && passXml == pass) { log = 1; document.cookie = escape(idXML); document.login.submit(); } } } if(log == 0) { var divErr = document.getElementById('Error'); divErr.innerHTML = "<b>Login Failed</b>"; } } customers.xml <?xml version="1.0" encoding="UTF-8"?> <customers> <customer custid="CU101"> <user>jack</user> <pwd>PW101</pwd> <email>[email protected]</email> </customer> <customer custid="CU102"> <user>jill</user> <pwd>PW102</pwd> <email>[email protected]</email> </customer> <customer custid="CU103"> <user>john</user> <pwd>PW103</pwd> <email>[email protected]</email> </customer> <customer custid="CU104"> <user>jeff</user> <pwd>PW104</pwd> <email>[email protected]</email> </customer> </customers> I get parsererror message on line alert(xmlDoc.documentElement.nodeName); I don't know what's wrong with my code. Can some one point me in a right direction? Edit : Ok, I found a solution. 
var xmlDoc = 0; var xhttp = 0; function checkUser() { var user = document.login.txtLogin.value; var pass = document.login.txtPass.value; var err = ""; if(user == "" || pass == "") { if(user == "") { alert("Enter user name"); } if(pass == "") { alert("Enter Password"); } return; } if (window.XMLHttpRequest) { xhttp=new XMLHttpRequest(); } else // IE 5/6 { xhttp=new ActiveXObject("Microsoft.XMLHTTP"); } xhttp.onreadystatechange = redirectUser; xhttp.open("GET","customers.xml",true); xhttp.send(); } function redirectUser() { var log = 2; var user = document.login.txtLogin.value; var pass = document.login.txtPass.value; if (xhttp.readyState == 4) { log = 0; xmlDoc = xhttp.responseXML; var xmlUsers = xmlDoc.getElementsByTagName('user'); var xmlPasswords = xmlDoc.getElementsByTagName('pwd'); var userLen = xmlDoc.getElementsByTagName('customer').length; var xmlCustomers = xmlDoc.getElementsByTagName('customer'); for (var i = 0; i < userLen; i++) { var xmlUser = xmlUsers[i].childNodes[0].nodeValue; var xmlPass = xmlPasswords[i].childNodes[0].nodeValue; var xmlId = xmlCustomers.item(i).attributes[0].nodeValue; if(xmlUser == user && xmlPass == pass) { log = 1; document.cookie = xmlId; document.login.submit(); break; } } } if(log == 0) { alert("Login failed"); } } Thanks.

    Read the article

  • Maddening Linked List problem

    - by Mike
    This has been plaguing me for weeks. It's something really simple, I know it. Every time I print a singly linked list, it prints an address at the end of the list. #include <iostream> using namespace std; struct node { int info; node *link; }; node *before(node *head); node *after(node *head); void middle(node *head, node *ptr); void reversep(node *head, node *ptr); node *head, *ptr, *newnode; int main() { head = NULL; ptr = NULL; newnode = new node; head = newnode; for(int c1=1;c1<11;c1++) { newnode->info = c1; ptr = newnode; newnode = new node; ptr->link = newnode; ptr = ptr->link; } ptr->link=NULL; head = before(head); head = after(head); middle(head, ptr); //reversep(head, ptr); ptr = head; cout<<ptr->info<<endl; while(ptr->link!=NULL) { ptr=ptr->link; cout<<ptr->info<<endl; } system("Pause"); return 0; } node *before(node *head) { node *befnode; befnode = new node; cout<<"What should go before the list?"<<endl; cin>>befnode->info; befnode->link = head; head = befnode; return head; } node *after(node *head) { node *afnode, *ptr2; afnode = new node; ptr2 = head; cout<<"What should go after the list?"<<endl; cin>>afnode->info; ptr2 = afnode; afnode->link=NULL; ptr2 = head; return ptr2; } void middle(node *head, node *ptr) { int c1 = 0, c2 = 0; node *temp, *midnode; ptr = head; while(ptr->link->link!=NULL) { ptr=ptr->link; c1++; } c1/=2; c1-=1; ptr = head; while(c2<c1) { ptr=ptr->link; c2++; } midnode = new node; cout<<"What should go in the middle of the list?"<<endl; cin>>midnode->info; cout<<endl; temp=ptr->link; ptr->link=midnode; midnode->link=temp; } void reversep(node *head, node *ptr) { node *last, *ptr2; ptr=head; ptr2=head; while(ptr->link!=NULL) ptr = ptr->link; last = ptr; cout<<last->info; while(ptr!=head) { while(ptr2->link!=ptr) ptr2=ptr2->link; ptr = ptr2; cout<<ptr->info; } } I'll admit that this is class work, but even the professor can't figure it out, and says that its probably something insignificant that we're overlooking, but I can't put my mind to rest until I find out what it is.
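    The stray value printed at the end of the list is most likely the extra node allocated on the final pass of the construction loop: the loop allocates an eleventh node, links it in, and never assigns its info field, so the final print shows whatever happens to be in that memory (the after() function also creates a node that is never actually linked into the list). Below is a minimal sketch of a construction loop that only allocates a node once it has a value for it, reusing the question's node struct; names such as tail are illustrative, not from the original code.

    #include <iostream>

    struct node {
        int info;
        node *link;
    };

    int main() {
        node *head = nullptr;
        node *tail = nullptr;

        // Allocate a node only when there is a value to store in it,
        // so no uninitialized node is left hanging off the end.
        for (int c1 = 1; c1 <= 10; ++c1) {
            node *n = new node{c1, nullptr};
            if (head == nullptr)
                head = tail = n;
            else {
                tail->link = n;
                tail = n;
            }
        }

        for (node *p = head; p != nullptr; p = p->link)
            std::cout << p->info << '\n';   // prints 1 through 10, nothing extra

        while (head != nullptr) {           // free the list
            node *next = head->link;
            delete head;
            head = next;
        }
        return 0;
    }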

    Read the article

  • C++ Operator overloading - 'recreating the Vector'

    - by Wallter
    I am currently in a collage second level programing course... We are working on operator overloading... to do this we are to rebuild the vector class... I was building the class and found that most of it is based on the [] operator. When I was trying to implement the + operator I run into a weird error that my professor has not seen before (apparently since the class switched IDE's from MinGW to VS express...) (I am using Visual Studio Express 2008 C++ edition...) Vector.h #include <string> #include <iostream> using namespace std; #ifndef _VECTOR_H #define _VECTOR_H const int DEFAULT_VECTOR_SIZE = 5; class Vector { private: int * data; int size; int comp; public: inline Vector (int Comp = 5,int Size = 0) : comp(Comp), size(Size) { if (comp > 0) { data = new int [comp]; } else { data = new int [DEFAULT_VECTOR_SIZE]; comp = DEFAULT_VECTOR_SIZE; } } int size_ () const { return size; } int comp_ () const { return comp; } bool push_back (int); bool push_front (int); void expand (); void expand (int); void clear (); const string at (int); int operator[ ](int); Vector& operator+ (Vector&); Vector& operator- (const Vector&); bool operator== (const Vector&); bool operator!= (const Vector&); ~Vector() { delete [] data; } }; ostream& operator<< (ostream&, const Vector&); #endif Vector.cpp #include <iostream> #include <string> #include "Vector.h" using namespace std; const string Vector::at(int i) { this[i]; } void Vector::expand() { expand(size); } void Vector::expand(int n ) { int * newdata = new int [comp * 2]; if (*data != NULL) { for (int i = 0; i <= (comp); i++) { newdata[i] = data[i]; } newdata -= comp; comp += n; delete [] data; *data = *newdata; } else if ( *data == NULL || comp == 0) { data = new int [DEFAULT_VECTOR_SIZE]; comp = DEFAULT_VECTOR_SIZE; size = 0; } } bool Vector::push_back(int n) { if (comp = 0) { expand(); } for (int k = 0; k != 2; k++) { if ( size != comp ){ data[size] = n; size++; return true; } else { expand(); } } return false; } void Vector::clear() { delete [] data; comp = 0; size = 0; } int Vector::operator[] (int place) { return (data[place]); } Vector& Vector::operator+ (Vector& n) { int temp_int = 0; if (size > n.size_() || size == n.size_()) { temp_int = size; } else if (size < n.size_()) { temp_int = n.size_(); } Vector newone(temp_int); int temp_2_int = 0; for ( int j = 0; j <= temp_int && j <= n.size_() && j <= size; j++) { temp_2_int = n[j] + data[j]; newone[j] = temp_2_int; } //////////////////////////////////////////////////////////// return newone; //////////////////////////////////////////////////////////// } ostream& operator<< (ostream& out, const Vector& n) { for (int i = 0; i <= n.size_(); i++) { //////////////////////////////////////////////////////////// out << n[i] << " "; //////////////////////////////////////////////////////////// } return out; } Errors: out << n[i] << " "; error C2678: binary '[' : no operator found which takes a left-hand operand of type 'const Vector' (or there is no acceptable conversion) return newone; error C2106: '=' : left operand must be l-value As stated above, I am a student going into Computer Science as my selected major I would appreciate tips, pointers, and better ways to do stuff :D
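    Both compiler messages point at the same two members. operator<< takes the vector by const reference, so it can only call a const-qualified operator[]; and because the posted operator[] returns int by value, an expression such as newone[j] = temp_2_int assigns to a temporary, which is most likely what the l-value complaint is about. Returning newone by reference is also unsafe, since it is a local object destroyed when operator+ returns. A minimal, self-contained sketch of just those pieces follows; Vec is a cut-down stand-in for the Vector class, not the full assignment.

    #include <algorithm>
    #include <iostream>

    class Vec {
        int* data;
        int  size;
    public:
        explicit Vec(int n) : data(new int[n]()), size(n) {}   // zero-initialised storage
        Vec(const Vec& o) : data(new int[o.size]), size(o.size) {
            std::copy(o.data, o.data + o.size, data);
        }
        Vec& operator=(const Vec& o) {
            if (this != &o) {
                int* fresh = new int[o.size];
                std::copy(o.data, o.data + o.size, fresh);
                delete[] data;
                data = fresh;
                size = o.size;
            }
            return *this;
        }
        ~Vec() { delete[] data; }

        int size_() const { return size; }

        // Non-const overload returns a reference, so v[i] = x is assignable.
        int& operator[](int i) { return data[i]; }
        // Const overload is what operator<< needs for a const Vec&.
        const int& operator[](int i) const { return data[i]; }

        // Return by value; a reference to the local result would dangle.
        Vec operator+(const Vec& rhs) const {
            Vec result(std::max(size, rhs.size));
            for (int i = 0; i < result.size_(); ++i)
                result[i] = (i < size ? data[i] : 0) + (i < rhs.size ? rhs.data[i] : 0);
            return result;
        }
    };

    std::ostream& operator<<(std::ostream& out, const Vec& v) {
        for (int i = 0; i < v.size_(); ++i)   // note: < size, not <=
            out << v[i] << ' ';
        return out;
    }

    int main() {
        Vec a(3), b(5);
        a[0] = 1; a[1] = 2; a[2] = 3;
        b[4] = 10;
        std::cout << (a + b) << '\n';         // 1 2 3 0 10
    }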

    Read the article

  • How John Got 15x Improvement Without Really Trying

    - by rchrd
    The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving. So I'm republishing it here.  How I Got 15x Improvement Without Really Trying John Feo, Sun Microsystems Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple general optimization techniques. Introduction Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and insure that monies supporting scientific research are used as effectively as possible. Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran. Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA. Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor-to-student or student-to-student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes. Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile. Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties. 
Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize. Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliation are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized. The study’s results show that personal scientific codes are running many times slower than they should and that the problem is pervasive. Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices; but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research. # cacheperformance redundantoperations loopstructures performanceimprovement 1 x x 15.5 2 x 2.8 3 x x 2.5 4 x 2.1 5 x x 2.0 6 x 5.0 7 x 5.8 8 x 6.3 9 2.2 10 x x 3.3 Table 1 — Area of improvement and performance gains of 10 codes The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiencies in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summaries the work and suggests a possible solution to the issues raised. Optimizing cache performance Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do. When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse. 
6 out of the 10 codes studied here benefited from such high level optimizations. Array Accesses The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing do I = 0, 1010, delta_x IM = I - delta_x IP = I + delta_x do J = 5, 995, delta_x JM = J - delta_x JP = J + delta_x T1 = CA1(IP, J) + CA1(I, JP) T2 = CA1(IM, J) + CA1(I, JM) S1 = T1 + T2 - 4 * CA1(I, J) CA(I, J) = CA1(I, J) + D * S1 end do end do In code 2, the culprit is conditionals do I = 1, N do J = 1, N If (IFLAG(I,J) .EQ. 0) then T1 = Value(I, J-1) T2 = Value(I-1, J) T3 = Value(I, J) T4 = Value(I+1, J) T5 = Value(I, J+1) Value(I,J) = 0.25 * (T1 + T2 + T5 + T4) Delta = ABS(T3 - Value(I,J)) If (Delta .GT. MaxDelta) MaxDelta = Delta endif enddo enddo I fixed both programs by inverting the loops by hand. Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10. Code 5 has four-dimensional arrays and loops are nested four deep. For all of the reasons cited above the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops, from outermost to innermost, is L1: i, l, k, j; L2: i, l, j, k; L3: i, j, k, l. So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now: i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops, aligning the loop with cache. Code 5 is by far the most difficult of the four codes to optimize for array accesses; but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists. Array Strides When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, then the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes.
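As a quick illustration of the stride point, here is a small, self-contained C++ sketch (it is not one of the codes from the study): summing a row-major 2-D array with the inner loop over the last index touches consecutive memory, while inverting the loops makes every access jump N elements and pay for a fresh cache line.

#include <chrono>
#include <iostream>
#include <vector>

int main() {
    const int N = 4096;
    std::vector<double> a(static_cast<size_t>(N) * N, 1.0);

    // Sum the same array twice: once with the inner loop over the last index
    // (unit stride in C/C++'s row-major layout), once with the loops inverted
    // (stride N, so nearly every access misses in cache).
    auto time_sum = [&](bool unitStride) {
        auto start = std::chrono::steady_clock::now();
        double sum = 0.0;
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j)
                sum += unitStride ? a[static_cast<size_t>(i) * N + j]
                                  : a[static_cast<size_t>(j) * N + i];
        std::chrono::duration<double> elapsed =
            std::chrono::steady_clock::now() - start;
        std::cout << (unitStride ? "unit stride: " : "stride N:    ")
                  << elapsed.count() << " s (sum = " << sum << ")\n";
    };

    time_sum(true);    // cache-friendly order
    time_sum(false);   // same arithmetic, many more cache misses
}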
Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into continuous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes do j = 1, GZ do i = 1, GZ T1 = CA(i+0, j-1) + CA(i-1, j+0) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) S1 = T1 + T4 - 4 * CA1(i+0, j+0) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 enddo enddo where CA and CA1 are compressed arrays of size GZ. Code 7 traverses a list of objects selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection. Data reuse In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3). In code 1, unrolling the outer and inner loop one iteration increases the number of result values computed by the loop body from 1 to 4, do J = 1, GZ-2, 2 do I = 1, GZ-2, 2 T1 = CA1(i+0, j-1) + CA1(i-1, j+0) T2 = CA1(i+1, j-1) + CA1(i+0, j+0) T3 = CA1(i+0, j+0) + CA1(i-1, j+1) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) T5 = CA1(i+2, j+0) + CA1(i+1, j+1) T6 = CA1(i+1, j+1) + CA1(i+0, j+2) T7 = CA1(i+2, j+1) + CA1(i+1, j+2) S1 = T1 + T4 - 4 * CA1(i+0, j+0) S2 = T2 + T5 - 4 * CA1(i+1, j+0) S3 = T3 + T6 - 4 * CA1(i+0, j+1) S4 = T4 + T7 - 4 * CA1(i+1, j+1) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2 CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3 CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4 enddo enddo The loop body executes 12 reads, whereas as the rolled loop shown in the previous section executes 20 reads to compute the same four values. 
In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before for (k = 0; k < NK[u]; k++) { sum = 0.0; for (y = 0; y < NY; y++) { sum += W[y][u][k] * delta[y]; } backprop[i++]=sum; } and after code for (k = 0; k < KK - 8; k+=8) { sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0; sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0; for (y = 0; y < NY; y++) { sum0 += W[y][0][k+0] * delta[y]; sum1 += W[y][0][k+1] * delta[y]; sum2 += W[y][0][k+2] * delta[y]; sum3 += W[y][0][k+3] * delta[y]; sum4 += W[y][0][k+4] * delta[y]; sum5 += W[y][0][k+5] * delta[y]; sum6 += W[y][0][k+6] * delta[y]; sum7 += W[y][0][k+7] * delta[y]; } backprop[k+0] = sum0; backprop[k+1] = sum1; backprop[k+2] = sum2; backprop[k+3] = sum3; backprop[k+4] = sum4; backprop[k+5] = sum5; backprop[k+6] = sum6; backprop[k+7] = sum7; } for one of the loops unrolled 8 times. Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends. Reducing instruction count Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques. The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent. Memory operations The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory. Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3 for (y = 0; y < NY; y++) { i = 0; for (u = 0; u < NU; u++) { for (k = 0; k < NK[u]; k++) { dW[y][u][k] += delta[y] * I1[i++]; } } } Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y]. 
In reality, dW and delta do not overlap in memory, so I rewrote the loop as for (y = 0; y < NY; y++) { i = 0; Dy = delta[y]; for (u = 0; u < NU; u++) { for (k = 0; k < NK[u]; k++) { dW[y][u][k] += Dy * I1[i++]; } } } Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler cannot determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays #define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + (i)*(a)->strides[3] + (j)*(a)->strides[2] + (k)*(a)->strides[1]) The macro is too complex for the compiler to understand, and so it does not identify any subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define a0 = MAT4D(a,q,0,j,k) before the loop and then replace all instances of *MAT4D(a,q,i,j,k) in the loop with a0[i] A similar problem appears in code 6, a Fortran program. The key loop in this program is do n1 = 1, nh nx1 = (n1 - 1) / nz + 1 nz1 = n1 - nz * (nx1 - 1) do n2 = 1, nh nx2 = (n2 - 1) / nz + 1 nz2 = n2 - nz * (nx2 - 1) ndx = nx2 - nx1 ndy = nz2 - nz1 gxx = grn(1,ndx,ndy) gyy = grn(2,ndx,ndy) gxy = grn(3,ndx,ndy) balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1 balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy)*h1 end do end do The programmer has written this loop well—there are no loop invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time and the index calculations are independent with respect to time. Trading space for time, I precomputed the index values prior to entering the time loop and stored the values in two arrays. I then replaced the index calculations with reads of the arrays. Data operations Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 ≤ i < N, 0 ≤ j < M, for most values of i and j. Since the program computes most of the NM possible inner products, it is more efficient to compute all the inner products in one triply-nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling. for (i = 0; i < N; i+=8) { for (j = 0; j < M; j++) { sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0; sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0; for (k = 0; k < K; k++) { sum0 += A[i+0][k] * B[j][k]; sum1 += A[i+1][k] * B[j][k]; sum2 += A[i+2][k] * B[j][k]; sum3 += A[i+3][k] * B[j][k]; sum4 += A[i+4][k] * B[j][k]; sum5 += A[i+5][k] * B[j][k]; sum6 += A[i+6][k] * B[j][k]; sum7 += A[i+7][k] * B[j][k]; } C[i+0][j] = sum0; C[i+1][j] = sum1; C[i+2][j] = sum2; C[i+3][j] = sum3; C[i+4][j] = sum4; C[i+5][j] = sum5; C[i+6][j] = sum6; C[i+7][j] = sum7; }} This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer. In code 5, we have the data version of the index optimization in code 6.
Here a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time. The main loop in Code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index while others are a function of the outer loop index but not the inner loop index for (j = 0; j < N; j++) { for (i = 0; i < M; i++) { r = i * hrmax; R = A[j]; temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]); high = temp * kcoeff * B[j] * PRM[2] * PRM[4]; low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0)); kap = (R > PRM[6]) ? high * R * R / (1.0 + pow(PRM[4]*r, 2.0)) : low * pow(R/PRM[6], PRM[5]); < rest of loop omitted > }} Note that the value of temp is invariant to j. Thus, we can hoist the computation for temp out of the loop and save its values in an array. for (i = 0; i < M; i++) { r = i * hrmax; TEMP[i] = pow(r, PRM[3]); } [N.B. – the case for PRM[3] = 0 is omitted and will be reintroduced later.] We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is for (j = 0; j < N; j++) { R = rig[j] / 1000.; tmp1 = kcoeff * par[2] * beta[j] * par[4]; tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]); tmp3 = 1.0 + (par[4] * par[4] * R * R); tmp4 = par[6] * par[6] / tmp2; tmp5 = R * R / tmp3; tmp6 = pow(R / par[6], par[5]); if ((par[3] == 0.0) && (R > par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp5; } else if ((par[3] == 0.0) && (R <= par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp4 * tmp6; } else if ((par[3] != 0.0) && (R > par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp5; } else if ((par[3] != 0.0) && (R <= par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6; } for (i = 0; i < M; i++) { kap = KAP[i]; r = i * hrmax; < rest of loop omitted > } } Maybe not the prettiest piece of code, but certainly much more efficient than the original loop. Copy operations Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages. Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying; but pointers are not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2).
Then store the problem’s initial values in (_, _, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays. The old–new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset data structures. Where unnecessary copying did occur is in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers. Optimizing loop structures Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate or isolate conditionals to their own loops as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet MaxDelta = 0.0 do J = 1, N do I = 1, M < code omitted > Delta = abs(OldValue - NewValue) if (Delta > MaxDelta) MaxDelta = Delta enddo enddo if (MaxDelta .gt. 0.001) goto 200 Since the only use of MaxDelta is to control the jump to 200 and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as MaxDelta = .false. do J = 1, N do I = 1, M < code omitted > Delta = abs(OldValue - NewValue) MaxDelta = MaxDelta .or. (Delta .gt. 0.001) enddo enddo if (MaxDelta) goto 200 thereby eliminating the conditional expression from the inner loop. A microprocessor can execute many instructions per instruction cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefited from loop unrolling, but none benefited from loop fusion. This observation is not too surprising since it is the general tendency of programmers to write thick loops. As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops eliminating the need to write and read temporary arrays.
I found such an occasion in code 10 where I split the loop do i = 1, n do j = 1, m A24(j,i)= S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i) B24(j,i)= S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i) A25(j,i)= S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i) B25(j,i)= S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i) C24(j,i)= S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i) D24(j,i)= S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i) C25(j,i)= S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i) D25(j,i)= S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i) end do end do into two disjoint loops do i = 1, n do j = 1, m A24(j,i)= S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i) B24(j,i)= S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i) A25(j,i)= S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i) B25(j,i)= S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i) end do end do do i = 1, n do j = 1, m C24(j,i)= S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i) D24(j,i)= S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i) C25(j,i)= S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i) D25(j,i)= S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i) end do end do Conclusions Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to improve significantly the single processor performance of all codes. Improvements range from 2x to 15.5x with a simple average of 4.75x. Changes to the source code were at a very high level. I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel despite the availability of parallel systems to all developers. Clearly, we have a problem—personal scientific research codes are highly inefficient and not running parallel. The developers are unaware of simple optimization techniques to make programs run faster. They lack education in the art of code optimization and parallel programming. I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in questions have not studied the books or manual available, and are unlikely to do so in the future. Short courses are a possible solution, but I believe they are too concentrated to be much use. The general concepts can be taught in a three or four day course, but that is not enough time for students to practice what they learn and acquire the experience to apply and extend the concepts to their codes. Practice is the key to becoming proficient at optimization. I recommend that graduate students be required to take a semester length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is at most an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or can use the system effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research as well as the development of most personal scientific codes. 
These agencies should require graduate schools to offer a course in optimization and parallel programming as a condition of funding. About the Author John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company where he was project manager for the MTA, and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.

    Read the article

< Previous Page | 5 6 7 8 9