Search Results

Search found 34813 results on 1393 pages for 'im fine'.


  • Installing Ubuntu on MacBook Pro 9,1

    - by pratnala
    I bought a MacBook Pro 9,1 (Mid-2012, 15-inch, NOT Retina). Can anyone tell me how to install Ubuntu 12.04 on it? I already have Windows installed via Boot Camp. Also, where do I get drivers for Ubuntu and so on? This link https://help.ubuntu.com/community/MacBookPro has nothing for 9,1, so please help me. If I have to remove Boot Camp and reinstall Windows manually because Boot Camp doesn't allow resizing of partitions, that's fine. Please tell me how to install Precise on the MBP 9,1. Thanks!

    Read the article

  • Sharing code in LGPL and proprietary software

    - by Martin
    Hi, I'm working on a piece of software that'll be released as a DLL under the LGPL. The software interfaces with hardware from a small company that has provided me with the needed libraries and some code to use them correctly (not only headers; it's all in a separate file). As far as I know, the same code is used in their proprietary software, which they don't intend to open-source, but they'd be fine releasing the piece of code they've given me. Now here's the question: what license could be used for the code I got from the company? I guess using the GPL or LGPL would make them violate the GPL when using the same code in their other software. Is MIT a good idea? Is it OK to just include a file with an MIT license on it in my otherwise LGPL'ed project? Since I'm not the copyright holder, I'd obviously have to ask the company to apply the license, but that shouldn't be a problem. Thanks /Martin

    Read the article

  • The curious case(s) of the Microsoft product naming department

    - by AaronBertrand
    A long time ago, in a galaxy far, far away... Okay, it was here on earth, a little over 5 years ago. With SQL Server 2005, Microsoft introduced a very useful feature called the DAC. DAC stands for "dedicated administrator connection"... you can read about it here, but essentially, it allows you a single connection into the server with priority resource allocation - so you can actually get in and kill a rogue process that is otherwise taking over the server. On its own this was a fine acronym choice,...(read more)
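
    As a quick aside for anyone who hasn't used the DAC: you reach it by prefixing the server name with ADMIN: (sqlcmd's -A switch is the equivalent shortcut). A minimal sketch; the server name and session id below are placeholders, not from the article:

        sqlcmd -S ADMIN:MyServer -E
        -- or equivalently: sqlcmd -S MyServer -A

        -- once in, find and kill the rogue session
        SELECT session_id, status, command
        FROM sys.dm_exec_requests
        WHERE blocking_session_id <> 0;
        KILL 64;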

    Read the article

  • Ubuntu doesn’t recognize Wacom Bamboo Eraser

    - by mudasobwa
    I have just connected a Wacom Bamboo CTL-470K-RUPL to my Ubuntu 13.04 (up to date as of 10/26/2013). Everything works fine except the eraser. More precisely, nothing happens when I try to use it (even in GIMP with Preferences → Input Devices → Eraser configured).

        $ sudo dpkg-reconfigure xserver-xorg-input-wacom

    doesn't help either. How should I reconfigure my Wacom tablet to enable the eraser? Thanks in advance.
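
    For anyone triaging the same symptom, a reasonable first check (assuming the stock xserver-xorg-input-wacom driver) is whether X exposes the eraser as its own input device at all; the device names below are examples and vary by model:

        $ xsetwacom --list devices
        Wacom Bamboo Pen stylus        id: 12   type: STYLUS
        Wacom Bamboo Pen eraser        id: 13   type: ERASER

    If no ERASER line appears, the driver is not reporting one and no GIMP setting will help; if it does appear, GIMP also needs the eraser device switched from Disabled to Screen under Edit → Preferences → Input Devices → Configure Extended Input Devices.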

    Read the article

  • ASP.NET Web API and Simple Value Parameters from POSTed data

    - by Rick Strahl
    In testing out various features of Web API I've found a few oddities in the way that the serialization is handled. These are probably not super common but they may throw you for a loop. Here's what I found.

    Simple Parameters from XML or JSON Content

    Web API makes it very easy to create action methods that accept parameters that are automatically parsed from XML or JSON request bodies. For example, you can send a JavaScript JSON object to the server and Web API happily deserializes it for you. This works just fine:

        public string ReturnAlbumInfo(Album album)
        {
            return album.AlbumName + " (" + album.YearReleased.ToString() + ")";
        }

    However, if you have methods that accept simple parameter types like strings, dates, numbers etc., those methods don't receive their parameters from an XML or JSON body by default and you may end up with failures. Take the following two very simple methods:

        public string ReturnString(string message)
        {
            return message;
        }

        public HttpResponseMessage ReturnDateTime(DateTime time)
        {
            return Request.CreateResponse<DateTime>(HttpStatusCode.OK, time);
        }

    The first one accepts a string, and it can be called with a JSON string from the client like this:

        var client = new HttpClient();
        var result = client.PostAsJsonAsync<string>("http://rasxps/AspNetWebApi/albums/rpc/ReturnString",
                                                    "Hello World").Result;

    which results in a trace like this:

        POST http://rasxps/AspNetWebApi/albums/rpc/ReturnString HTTP/1.1
        Content-Type: application/json; charset=utf-8
        Host: rasxps
        Content-Length: 13
        Expect: 100-continue
        Connection: Keep-Alive

        "Hello World"

    and produces... wait for it: null.

    Sending a date in the same fashion:

        var client = new HttpClient();
        var result = client.PostAsJsonAsync<DateTime>("http://rasxps/AspNetWebApi/albums/rpc/ReturnDateTime",
                                                      new DateTime(2012, 1, 1)).Result;

    results in this trace:

        POST http://rasxps/AspNetWebApi/albums/rpc/ReturnDateTime HTTP/1.1
        Content-Type: application/json; charset=utf-8
        Host: rasxps
        Content-Length: 30
        Expect: 100-continue
        Connection: Keep-Alive

        "\/Date(1325412000000-1000)\/"

    (yes, still the ugly MS AJAX date, yuk! This will supposedly change by RTM with Json.net used for client serialization) and produces an error response:

        The parameters dictionary contains a null entry for parameter 'time' of non-nullable type 'System.DateTime' for method 'System.Net.Http.HttpResponseMessage ReturnDateTime(System.DateTime)' in 'AspNetWebApi.Controllers.AlbumApiController'. An optional parameter must be a reference type, a nullable type, or be declared as an optional parameter.

    Basically any simple parameters are not parsed properly, resulting in null being sent to the method. For the string the call doesn't fail, but for the non-nullable date it produces an error because the method can't handle a null value. This behavior is a bit unexpected to say the least, but there's a simple solution to make this work using an explicit [FromBody] attribute:

        public string ReturnString([FromBody] string message)

    and

        public HttpResponseMessage ReturnDateTime([FromBody] DateTime time)

    which explicitly instructs Web API to read the value from the body.

    UrlEncoded Form Variable Parsing

    Another similar issue I ran into is with POST form variable binding. Web API can retrieve parameters from the query string and route values, but it doesn't explicitly map parameters from POST values either. Taking our same ReturnString function from earlier and posting a message POST variable like this:

        var formVars = new Dictionary<string, string>();
        formVars.Add("message", "Some Value");
        var content = new FormUrlEncodedContent(formVars);
        var client = new HttpClient();
        var result = client.PostAsync("http://rasxps/AspNetWebApi/albums/rpc/ReturnString", content).Result;

    produces this trace:

        POST http://rasxps/AspNetWebApi/albums/rpc/ReturnString HTTP/1.1
        Content-Type: application/x-www-form-urlencoded
        Host: rasxps
        Content-Length: 18
        Expect: 100-continue

        message=Some+Value

    When calling ReturnString:

        public string ReturnString(string message)
        {
            return message;
        }

    it unfortunately does not map the message value to the message parameter. This sort of mapping is simply not available in Web API. Web API does support binding to form variables, but only as part of model binding, which binds object properties to the POST variables. Sending the same message as in the previous example, you can use the following code to pick up POST variable data:

        public string ReturnMessageModel(MessageModel model)
        {
            return model.Message;
        }

        public class MessageModel
        {
            public string Message { get; set; }
        }

    Note that the model is bound and the message form variable is mapped to the Message property, as other variables would be to other properties if there were more. This works, but it's not very dynamic. There's no real easy way to retrieve form variables (or query string values for that matter) from Web API's Request object as far as I can discern. Well, only if you consider this easy:

        public string ReturnString()
        {
            var formData = Request.Content.ReadAsAsync<FormDataCollection>().Result;
            return formData.Get("message");
        }

    Oddly, FormDataCollection does not allow indexers to work, so you have to use the .Get() method, which is rather odd. If you're running under IIS/Cassini you can always resort to the old and trusty HttpContext access for request data:

        public string ReturnString()
        {
            return HttpContext.Current.Request.Form["message"];
        }

    which works fine and is easier. It's kind of a bummer that HttpRequestMessage doesn't expose some sort of raw Request object that has access to dynamic data - given that it's meant to serve as a generic REST/HTTP API, that seems like a crucial missing piece. I don't see any way to read query string values either. To me personally HttpContext works, since I don't see myself using self-hosted code much.

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in Web Api
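
    Since the FormDataCollection workaround above works but is noisy, one small refinement is to hide it behind an extension method. This is just a sketch wrapping the code shown in the article; GetFormValue is a made-up helper name, not a Web API built-in:

        using System.Net.Http;
        using System.Net.Http.Formatting;

        public static class RequestFormExtensions
        {
            // Wraps the FormDataCollection read from the article so action
            // methods can fetch a single POSTed form variable by name.
            public static string GetFormValue(this HttpRequestMessage request, string key)
            {
                var formData = request.Content.ReadAsAsync<FormDataCollection>().Result;
                return formData.Get(key);
            }
        }

    With that, the action body shrinks to return Request.GetFormValue("message");.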

    Read the article

  • Sprite Kit - containsPoint for SKPhysicsBody?

    - by gj15987
    I have a ball bouncing around the screen. I can pick it up and drag it onto a "bucket". When my touches finish, I use the containsPoint function to check whether I have dropped the ball onto the bucket. This works fine; however, I actually want to check whether the ball is dropped onto the bucket node's physics body, because my "bucket" is actually just an oval, and so I've applied a physics body of the same shape as the oval so that the white space around the oval isn't included in the physics simulation. I can't seem to find a "containsPoint" function for physics bodies. Can anyone advise on how I'd check for this? To summarise, I want to drop a node onto a specific part of another node (or its physics body) and trigger an event. Thanks in advance.
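
    For what it's worth, SKPhysicsWorld can hit-test a point directly against physics bodies, which sounds like exactly what's being asked for. A minimal sketch in Swift (the same call exists in Objective-C as bodyAtPoint:); bucket and the handler are assumed names from the question, not API:

        override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
            guard let location = touches.first?.location(in: self) else { return }
            // body(at:) returns a physics body whose shape contains the point, so
            // the oval's actual body, not the node's rectangular frame, decides the hit
            if physicsWorld.body(at: location)?.node == bucket {
                ballDroppedIntoBucket()   // assumed handler
            }
        }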

    Read the article

  • Simplifying the process of compiling and running objective-c apps in GNUstep

    - by Matthew
    I just installed GNUstep (following this post: http://www.jaysonjc.com/programming/objective-c-programming-in-windows-gnustep-projectcenter.html). It says to run this command every time I want to compile:

        gcc -o helloworld helloworld.m -I /GNUstep/System/Library/Headers -L /GNUstep/System/Library/Libraries -lobjc -lgnustep-base -fconstant-string-class=NSConstantString

    It works just fine for me. However, as I'm learning and will be compiling/running apps often (making little changes and trying again), I'd like a simpler way to do this. Is there an easier way to compile and then run the app? Or am I just being lazy?
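
    One common answer is to let GNUstep's own make package generate those flags. A minimal GNUmakefile, assuming GNUSTEP_MAKEFILES is set by the standard GNUstep.sh environment script:

        include $(GNUSTEP_MAKEFILES)/common.make

        TOOL_NAME = helloworld
        helloworld_OBJC_FILES = helloworld.m

        include $(GNUSTEP_MAKEFILES)/tool.make

    After that, rebuilding is just make (the binary lands under obj/), instead of retyping the whole gcc line each time.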

    Read the article

  • How do I install DansGuardian on 12.04?

    - by Matt
    I'm trying to install DansGuardian on a virtual machine. The instructions ask me to run the ./configure script and then execute make install. The configure script runs fine, but make install throws errors:

        Making all in src
        make[2]: Entering directory `/webmin/dansguardian-2.10/src'
        g++ -DHAVE_CONFIG_H -I. -I.. -D__CONFFILE='"/usr/local/etc/dansguardian/dansguardian.conf"' -D__LOGLOCATION='"/usr/local/var/log/dansguardian/"' -D__PIDDIR='"/usr/local/var/run"' -D__PROXYUSER='"nobody"' -D__PROXYGROUP='"nobody"' -D__CONFDIR='"/usr/local/etc/dansguardian"' -g -O2 -MT dansguardian-fancy.o -MD -MP -MF .deps/dansguardian-fancy.Tpo -c -o dansguardian-fancy.o `test -f 'downloadmanagers/fancy.cpp' || echo './'`downloadmanagers/fancy.cpp
        downloadmanagers/fancy.cpp: In member function 'std::string fancydm::timestring(int)':
        downloadmanagers/fancy.cpp:507:72: error: 'snprintf' was not declared in this scope
        make[2]: *** [dansguardian-fancy.o] Error 1
        make[2]: Leaving directory `/webmin/dansguardian-2.10/src'
        make[1]: *** [all-recursive] Error 1
        make[1]: Leaving directory `/webmin/dansguardian-2.10'
        make: *** [all] Error 2

    I'm running 12.04 LTS server x64.
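
    For context, this particular failure is usually just a missing header: newer GCC releases stopped pulling in the stdio declarations transitively, so snprintf is no longer declared where fancy.cpp expects it. A hedged one-line patch before rerunning make:

        /* at the top of src/downloadmanagers/fancy.cpp */
        #include <cstdio>   /* declares snprintf */

    Then make clean && make && sudo make install.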

    Read the article

  • Cannot open software sources after removing a PPA

    - by rkwbcca
    After deleting a PPA entry in the sources.list file, I am no longer able to open the Software Sources application; opening the Software Centre is fine. I tried running gksudo software-properties-gtk and got the following message:

            SoftwareProperties.__init__(self, options=options, datadir=datadir)
          File "/usr/lib/python2.7/dist-packages/softwareproperties/SoftwareProperties.py", line 96, in __init__
            self.reload_sourceslist()
          File "/usr/lib/python2.7/dist-packages/softwareproperties/SoftwareProperties.py", line 580, in reload_sourceslist
            self.distro.get_sources(self.sourceslist)
          File "/usr/lib/python2.7/dist-packages/aptsources/distro.py", line 91, in get_sources
            raise NoDistroTemplateException("Error: could not find a "
        aptsources.distro.NoDistroTemplateException: Error: could not find a distribution template

    Would appreciate it if you can let me know how to solve this problem.
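
    That exception usually means the main Ubuntu entries were deleted from sources.list along with the PPA, so aptsources can no longer match the file against any distribution template. A sketch of the minimal lines to restore, assuming 12.04/precise (substitute your release's codename):

        deb http://archive.ubuntu.com/ubuntu precise main restricted universe multiverse
        deb http://archive.ubuntu.com/ubuntu precise-updates main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu precise-security main restricted universe multiverse

    After saving, run sudo apt-get update; software-properties-gtk should then start again.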

    Read the article

  • Wifi will not connect with an Intel Wifi Link 1000

    - by Wallace Panter
    I have an HP dv7t-6100CTO laptop with Ubuntu 12.04 LTS dual-booted. The specs of my laptop are: i7-2620M dual-core (2.7 GHz) processor; Intel WiFi Link 1000 BGN wireless card. I am having a hard time getting my laptop to connect to the internet through wireless. Ethernet is fine, but at college I need wireless the majority of the time. Can anyone help? Some steps I have tried: I installed WICD while uninstalling Network Manager. When that didn't work, I uninstalled WICD and reinstalled Network Manager. I am not really sure what else to do.
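
    A couple of low-risk diagnostics worth running before swapping network managers again (hedged: on 12.04 this card should be driven by the iwlwifi module):

        rfkill list                 # check for a soft/hard block from the wireless key
        sudo modprobe -r iwlwifi    # reload the wireless driver
        sudo modprobe iwlwifi
        dmesg | grep -i iwl         # look for firmware load errors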

    Read the article

  • JDBC Triggers

    - by Tim Dexter
    Received a question from a customer last week; they were using the new rollup patch on top of 10.1.3.4.1. What are these boxes for? Don't you know? Surely? Well, they are for ... that new functionality, you know, it's in the user docs, that thingmabobby doodah. OK, I don't know either. I can have a guess, but let me check first. Several IM sessions, emails and a dig through the readme for the new patch, and I had my answer. It's not in the official documentation, yet. Leslie is on the case.

    The two fields were designed to allow an admin to set a user's context attributes before a connection is made to a database, and to un-set the attributes after the connection is broken by the extraction engine. We got a sample from the Enterprise Manager team on how they will be using it with their VPD connections:

        FUNCTION bip_to_em_user (user_name_in IN VARCHAR2) RETURN BOOLEAN IS
        BEGIN
          SETEMUSERCONTEXT(user_name_in, MGMT_USER.OP_SET_IDENTIFIER);
          return TRUE;
        END bip_to_em_user;

    And used in the JDBC data source definition like this (pre-process function):

        sysman.mgmt_bip.bip_to_em_user(:xdo_user_name)

    You, of course, can call any function that is going to return a boolean value. Another example might be:

        FUNCTION set_per_process_username (username_in IN VARCHAR2) RETURN BOOLEAN IS
        BEGIN
          SETUSERCONTEXT(username_in);
          return TRUE;
        END set_per_process_username;

    Just use your own function/package to set some user context. Very grateful for the mail from Leslie on the EM team's usage, but I had to try it out. Rather than set up a VPD, I opted for a simpler test: can I log the comings and goings of users and their queries using the same pre-process text box? Reaching back into the depths of my developer brain to remember some PL/SQL (it was not that deep), I came up with:

        CREATE OR REPLACE FUNCTION BIPTEST (user_name_in IN VARCHAR2, smode IN VARCHAR2)
        RETURN BOOLEAN AS
        BEGIN
          INSERT INTO LOGTAB VALUES(user_name_in, sysdate, smode);
          RETURN true;
        END BIPTEST;

    To call it in the pre-fetch trigger:

        BIPTEST(:xdo_user_name)

    Not going to set the PL/SQL world alight, I know, but you get the idea. As a new connection is made to the database it's logged in the LOGTAB table. The SMODE value just records whether it's an entry or an exit; I used the pre- and post- boxes.

        NAME           UPDATE_DATE                       S_FLAG
        oracle         14-MAY-10 09.51.34.000000000 AM   Start
        oracle         14-MAY-10 10.23.57.000000000 AM   Finish
        administrator  14-MAY-10 09.51.38.000000000 AM   Start
        administrator  14-MAY-10 09.51.38.000000000 AM   Finish
        oracle         14-MAY-10 09.51.42.000000000 AM   Start
        oracle         14-MAY-10 09.51.42.000000000 AM   Finish

    It works very well. I had some fun trying to find a nasty query for the extraction engine so that the timestamps from in to out actually had a difference. That engine is fast! The only derived value you can pass from BIP is :xdo_user_name; none of the other server values are available. Connection pools are not currently supported but are planned for a future release. Now you know what those fields are for. Look for some official documentation, rather than my ramblings, coming soon!
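
    For reference, the post never shows the LOGTAB table itself; a minimal definition consistent with the insert statement and the output above would be (column names taken from the result listing):

        CREATE TABLE LOGTAB (
          NAME        VARCHAR2(30),
          UPDATE_DATE TIMESTAMP,
          S_FLAG      VARCHAR2(10)
        );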

    Read the article

  • Keyboard Function Keys Do Not Work

    - by Anthony Burman
    I use the Microsoft Natural MultiMedia Keyboard 1.0A (not a wireless board). The Escape button and the function keys have never worked. I am currently running 10.10; on previous incarnations the keys never worked either. However, a recent journey through all the Microsoft options in System → Preferences → Keyboard → Layouts suggested that the Escape button could be functional. The current setting is Generic 105-key (Intl) PC. Can I find out whether the keys can be made to work or not? Of the top buttons, nothing happens when I press My Documents; a small red cross appears at the top right of the screen when I press My Pictures; and the Media, Mail and Web/Home buttons work just fine. Thanks, Anthony.

    Read the article

  • How do I use LTSP with a Dell FX170 thin client?

    - by v4169sgr
    Just got a reconditioned DELL Optiplex FX170 thin client delivered, without an image. On power-up, I see the message: DISK BOOT FAILURE, INSERT SYSTEM DISK AND PRESS ENTER I am very sure all the wires etc are connected properly. I would like it to PXE boot and find my LTSP installation. I have an HP T5525 with the same connectivity that works fine. On F12 there's a BIOS boot order menu. The only options available though are LS120, the HDD, CD / DVD [via USB presumably], and various USB options. I do not see 'PXE' or any network card. And I don't know how to change the boot order :( Any help appreciated!

    Read the article

  • Any recommended forum software for commercial/business discussion

    - by Nick
    I am looking to create a commercial/business trading discussion board, and I plan to install forum software on my hosting. My main concern is security, as I need to ensure the discussion info is shared safely among the board members only, not leaked to outsiders. Cost-wise, free or paid is fine with me. Features: account management; customizable forum theme. I have phpBB, Simple Machines and bbPress in mind, but I am not sure which will fill my need. Please advise which forum software would be the best choice. Thank you

    Read the article

  • dovecot can't compact mail folder /var/mail/username

    - by G. He
    Ubuntu 11.10, 32-bit. I set up a Dovecot IMAP server and use Thunderbird on a different Ubuntu machine (64-bit) to access it. Everything else is fine, except I cannot compact the deleted email in the inbox, which is stored at /var/mail/username. Checking mail.log, I see this error message:

        Apr 3 00:10:11 autumn dovecot: imap(username): Error: file_dotlock_create(/var/mail/username) failed: Permission denied (euid=1000(username) egid=1000(username) missing +w perm: /var/mail, euid is not dir owner) (set mail_privileged_group=mail)

    What is wrong with the permissions? Here are the permissions for the relevant files:

        $ ls -ld /var/mail
        drwxrwsr-x 2 mail mail 4096 2012-04-02 23:36 /var/mail
        $ ls -l /var/mail/username
        -rw------- 1 username mail 417 2012-04-02 23:36 /var/mail/username

    Anyone knows what's going on here?
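
    The hint at the end of that log line is the actual fix: Dovecot must be allowed to use the privileged mail group when creating dotlocks in /var/mail. With the stock Ubuntu package the setting lives in /etc/dovecot/conf.d/10-mail.conf:

        mail_privileged_group = mail

    followed by sudo service dovecot restart. The setgid mail group on /var/mail (the drwxrwsr-x above) then grants write access for the lock file without loosening permissions on the directory itself.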

    Read the article

  • Internet Explorer 9: cannot open file on download, Ctrl+J doesn't work, can't open list of downloads

    - by simonsabin
    If any of the above symptoms are causing a problem, i.e.:

    1. You download a file and the download dialog disappears.
    2. You select Open when you download a file and nothing happens.
    3. The View Downloads window doesn't work.
    4. Ctrl+J doesn't work (view downloads).

    The solution is to clear your download history. See "IE9 - View downloads / Ctrl+J do not open. I cannot open any file. But SAVE function still work fine. 64 bit version." for details; the answer is provided by Steven S. on June 20th. I hope that...(read more)

    Read the article

  • Ubuntu 12.04 transmission-daemon and zfsonlinux: bad file descriptor and corrupt pieces

    - by Ivailo Karamanolev
    I'm running Ubuntu 12.04 with zfsonlinux and transmission-daemon. The issue: sporadic "Bad File Descriptor" and "Piece #xxx is corrupt" errors. After I recheck the torrent, everything seems fine. This happens only while downloading, never once a torrent is in seeding mode, and only after the torrent client has been running for some time. I installed zfsonlinux from the official stable PPA (https://launchpad.net/~zfs-native/+archive/stable). I previously tried running transmission-daemon from the Ubuntu repository, but I've since switched to building the latest transmission from source with the latest libevent (all stable) - same thing. I've seen bug reports (https://trac.transmissionbt.com/ticket/4147) for this issue, but none of them seem to have a solution. How can I fix these errors, or at least understand where they come from and what I can do to rectify the issue?

    Read the article

  • The incomplete list of impolite WP7 dev requests

    In my previous list of WP7 user requests, I piled all of my user hopes and dreams for my new WP7 phone (delivery date: who the hell knows) onto the universe as a way to make good things happen. And all that's fine, but I'm not just a user; like most of my readers, I'm also a developer and have a need to control my phone with code. I have a long list of applications I want to write and an even longer list of applications I want other developers to write for me. Today at 1:30p...

    Read the article

  • Is it possible to get dragging to work on a MacBook multi-touch touchpad

    - by lhahne
    I have a MacBook 5,1; that is to say, the only 13-inch aluminium MacBook, as the later revisions were renamed MacBook Pro. Two-finger scrolling seems to work fine, but dragging doesn't. In OS X this works so that you point at an object, click and keep your finger pressed on the touchpad while sliding another finger to move the cursor. This causes weird and undefined behavior in Ubuntu, as it seems the driver doesn't recognize it as dragging. Any ideas?

    Read the article

  • I can't add PPA repositories!

    - by Karthik Krishna
    For example, after running this command:

        sudo add-apt-repository ppa:tualatrix/ppa

    I get the following output:

        Traceback (most recent call last):
          File "/usr/bin/add-apt-repository", line 125, in <module>
            ppa_info = get_ppa_info_from_lp(user, ppa_name)
          File "/usr/lib/python2.7/dist-packages/softwareproperties/ppa.py", line 80, in get_ppa_info_from_lp
            curl.perform()
        pycurl.error: (6, "Couldn't resolve host 'launchpad.net'")

    Why is this so? I just installed Ubuntu 12.04 LTS and it works fine. I have updated the system and installed all required packages. But as soon as I want to install more packages, like PPAs, I am not able to. Till now I have not been able to add any PPA. I am working behind a proxy.
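
    Given the proxy mention, the usual culprit is that sudo strips the proxy environment variables, so add-apt-repository's pycurl call never sees them and cannot resolve launchpad.net. A common workaround (the proxy URL is a placeholder):

        export http_proxy=http://proxy.example.com:8080
        export https_proxy=$http_proxy
        sudo -E add-apt-repository ppa:tualatrix/ppa    # -E preserves the environment

    Alternatively, put the proxy variables in /etc/environment so root sessions inherit them.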

    Read the article

  • LTSP: flash plugin crashes after updates

    - by v4169sgr
    I am running an alternate install 64 bit Ubuntu 12.04 with ltsp-server-standalone. I recently ran a few updates, including adding wine from the repos, adding Medibuntu, and adding google earth, all on the server, which runs as a user workstation. Before the updates, the flash plugin ran fine in thin clients. However, it now crashes. No problem with running the flash plugin on the server / workstation. I have since followed the LTSPManual pdf to copy over my sources list, add Medibuntu in the chroot, update and upgrade, and then update the image, but the flash plugin still crashes. I'd like to get the flash plugin working again in my thin clients. How do I do this? Suggestions appreciated! More information in this thread: http://ubuntuforums.org/forumdisplay.php?f=336 Thanks!
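
    One thing worth checking, hedged since the forum thread isn't quoted here: if the clients execute the browser locally rather than on the server, the flash plugin has to be updated inside the LTSP chroot as well, and the image rebuilt afterwards:

        sudo ltsp-chroot -m apt-get update
        sudo ltsp-chroot -m apt-get install --reinstall flashplugin-installer
        sudo ltsp-update-image

    A version mismatch between the freshly updated server and a stale client image would produce exactly this "fine on the server, crashes on the clients" split.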

    Read the article

  • Unity 3D hangs after power blackout

    - by Yusuf
    Unity 3D was working just fine till we had a blackout in the area (I have not yet bought a UPS :(). I tried the solutions here, but it does not help. I tried issuing unity --reset and it gives me this output: http://pastebin.com/UW83w5cG

    What seems to be important (to me) from that log is this part:

        (compiz:6510): GConf-CRITICAL **: gconf_client_add_dir: assertion `gconf_valid_key (dirname, NULL)' failed
        ERROR 2012-05-29 21:50:41 unity.launcher.trashlaunchericon TrashLauncherIcon.cpp:62 Could not create file monitor for trash uri: Operation not supported

    For the time being, I can use Unity 2D, but I would like the 3D version to work as well. Can you please help?

    Edit: Output when issuing unity --replace: http://pastebin.com/sBCPbyrT

        (compiz:2811): GConf-CRITICAL **: gconf_client_add_dir: assertion `gconf_valid_key (dirname, NULL)' failed
        ERROR 2012-06-01 20:21:24 unity.launcher.trashlaunchericon TrashLauncherIcon.cpp:62 Could not create file monitor for trash uri: Operation not supported
        Initializing unityshell options...done
        WARN 2012-06-01 20:21:25 unity.screeneffectframebufferobject ScreenEffectFramebufferObject.cpp:167 unsupported internal format
        Segmentation fault (core dumped)

    Read the article

  • Setting up a WAN miniport (PPPoE) internet connection

    - by Ankit Shah
    Hello, I'm from Ahmedabad, India. I want to set up my GTPL (i.e. the service provider) PPPoE internet connection in Ubuntu 12.04. Please help. I had tried to configure my internet connection in Fedora 16 using the following method, which worked fine: go to Edit Connections → the DSL tab → Add → enter my username and password → Save. But this method didn't help in Ubuntu 12.04. Please help me out. Thank you in advance.
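
    For a bridged PPPoE line like this, the console tool often succeeds where the DSL tab doesn't (assuming the modem is in bridge mode and reachable over the wired interface):

        sudo pppoeconf          # scans for the PPPoE concentrator, asks for username/password
        sudo pon dsl-provider   # bring the link up
        plog                    # watch the connection log
        sudo poff dsl-provider  # bring it down

    pppoeconf writes its settings to /etc/ppp/peers/dsl-provider, so the connection survives reboots.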

    Read the article

  • Storing large data in HTTP Session (Java Application)

    - by Umesh Awasthi
    I am asking this question in continuation of http-session-or-database-approach. I am planning to follow this approach:

    1. When the user adds a product to the cart, create a Cart model, add items to the cart and save it to the DB.
    2. Convert the Cart model to cart data and save that snapshot to the HTTP session.
    3. On any update/edit, update the underlying cart in the DB and the data snapshot in the session.
    4. When the user clicks on the view-cart page, just pick the cart data from the session and display it to the customer.

    I have the following queries regarding the HTTP session. How good is it to store large data (a shopping cart) in the session? How scalable can this approach be, with respect to the session? Won't my application eat and demand a lot of memory? Is my approach fine, or do I need to consider other points while designing this? Though we can control which cart data is stored in the session, we still need to keep certain information in the cart data held in the session.
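
    On the memory questions: the usual compromise is to keep only a slim, serializable snapshot in the session (IDs, quantities, totals) and let the DB hold the full cart. A minimal sketch in Java; the class and field names are illustrative, not from the question:

        import java.io.Serializable;
        import java.util.ArrayList;
        import java.util.List;

        // Slim snapshot: a few dozen bytes per line item, so even many
        // thousands of concurrent sessions stay cheap to hold and replicate.
        public class CartSnapshot implements Serializable {
            public static class Line implements Serializable {
                public final long productId;
                public final int quantity;
                public Line(long productId, int quantity) {
                    this.productId = productId;
                    this.quantity = quantity;
                }
            }
            public long cartId;          // key back to the full cart row in the DB
            public double totalPrice;    // denormalized for quick display
            public final List<Line> lines = new ArrayList<Line>();
        }

        // in the controller, after persisting the cart:
        //   request.getSession().setAttribute("cart", snapshot);

    Keeping the session payload small is what makes the approach scale: replication and serialization cost grow with whatever is stored per session.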

    Read the article

  • How John Got 15x Improvement Without Really Trying

    - by rchrd
    The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving. So I'm republishing it here.

    How I Got 15x Improvement Without Really Trying

    John Feo, Sun Microsystems

    Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple general optimization techniques.

    Introduction

    Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and ensure that monies supporting scientific research are used as effectively as possible.

    Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran.

    Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA.

    Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor-to-student or student-to-student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes.

    Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile.

    Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties. Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize.

    Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliations are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized.

    The study's results show that personal scientific codes are running many times slower than they should and that the problem is pervasive. Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices; but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research.

        #    improvement area(s)    performance improvement
        1    x x                    15.5
        2    x                      2.8
        3    x x                    2.5
        4    x                      2.1
        5    x x                    2.0
        6    x                      5.0
        7    x                      5.8
        8    x                      6.3
        9                           2.2
        10   x x                    3.3

        Table 1 — Area of improvement (among cache performance, redundant operations, and loop structures) and performance gains of 10 codes

    The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiencies in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summarizes the work and suggests a possible solution to the issues raised.

    Optimizing cache performance

    Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do.

    When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse. 6 out of the 10 codes studied here benefited from such high level optimizations.

    Array Accesses

    The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing:

        do I = 0, 1010, delta_x
          IM = I - delta_x
          IP = I + delta_x
          do J = 5, 995, delta_x
            JM = J - delta_x
            JP = J + delta_x
            T1 = CA1(IP, J) + CA1(I, JP)
            T2 = CA1(IM, J) + CA1(I, JM)
            S1 = T1 + T2 - 4 * CA1(I, J)
            CA(I, J) = CA1(I, J) + D * S1
          end do
        end do

    In code 2, the culprit is conditionals:

        do I = 1, N
          do J = 1, N
            If (IFLAG(I,J) .EQ. 0) then
              T1 = Value(I, J-1)
              T2 = Value(I-1, J)
              T3 = Value(I, J)
              T4 = Value(I+1, J)
              T5 = Value(I, J+1)
              Value(I,J) = 0.25 * (T1 + T2 + T5 + T4)
              Delta = ABS(T3 - Value(I,J))
              If (Delta .GT. MaxDelta) MaxDelta = Delta
            endif
          enddo
        enddo

    I fixed both programs by inverting the loops by hand.

    Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10.

    Code 5 has four-dimensional arrays and loops are nested four deep. For all of the reasons cited above the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops is

        L1: for i    L2: for i    L3: for i
              for l        for l        for j
              for k        for j        for k
              for j        for k        for l

    So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now: i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops, aligning the loop with cache. Code 5 is by far the most difficult of the four codes to optimize for array accesses; but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists.

    Array Strides

    When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, then the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes.

    Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into continuous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes

        do j = 1, GZ
          do i = 1, GZ
            T1 = CA(i+0, j-1) + CA(i-1, j+0)
            T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
            S1 = T1 + T4 - 4 * CA1(i+0, j+0)
            CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
          enddo
        enddo

    where CA and CA1 are compressed arrays of size GZ.

    Code 7 traverses a list of objects selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection.

    Data reuse

    In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3).

    In code 1, unrolling the outer and inner loop one iteration increases the number of result values computed by the loop body from 1 to 4:

        do J = 1, GZ-2, 2
          do I = 1, GZ-2, 2
            T1 = CA1(i+0, j-1) + CA1(i-1, j+0)
            T2 = CA1(i+1, j-1) + CA1(i+0, j+0)
            T3 = CA1(i+0, j+0) + CA1(i-1, j+1)
            T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
            T5 = CA1(i+2, j+0) + CA1(i+1, j+1)
            T6 = CA1(i+1, j+1) + CA1(i+0, j+2)
            T7 = CA1(i+2, j+1) + CA1(i+1, j+2)
            S1 = T1 + T4 - 4 * CA1(i+0, j+0)
            S2 = T2 + T5 - 4 * CA1(i+1, j+0)
            S3 = T3 + T6 - 4 * CA1(i+0, j+1)
            S4 = T4 + T7 - 4 * CA1(i+1, j+1)
            CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
            CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2
            CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3
            CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4
          enddo
        enddo

    The loop body executes 12 reads, whereas the rolled loop shown in the previous section executes 20 reads to compute the same four values.

    In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before

        for (k = 0; k < NK[u]; k++) {
          sum = 0.0;
          for (y = 0; y < NY; y++) {
            sum += W[y][u][k] * delta[y];
          }
          backprop[i++] = sum;
        }

    and after code

        for (k = 0; k < KK - 8; k += 8) {
          sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
          sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
          for (y = 0; y < NY; y++) {
            sum0 += W[y][0][k+0] * delta[y];
            sum1 += W[y][0][k+1] * delta[y];
            sum2 += W[y][0][k+2] * delta[y];
            sum3 += W[y][0][k+3] * delta[y];
            sum4 += W[y][0][k+4] * delta[y];
            sum5 += W[y][0][k+5] * delta[y];
            sum6 += W[y][0][k+6] * delta[y];
            sum7 += W[y][0][k+7] * delta[y];
          }
          backprop[k+0] = sum0;
          backprop[k+1] = sum1;
          backprop[k+2] = sum2;
          backprop[k+3] = sum3;
          backprop[k+4] = sum4;
          backprop[k+5] = sum5;
          backprop[k+6] = sum6;
          backprop[k+7] = sum7;
        }

    for one of the loops unrolled 8 times.

    Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends.

    Reducing instruction count

    Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques. The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent.

    Memory operations

    The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory.

    Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3:

        for (y = 0; y < NY; y++) {
          i = 0;
          for (u = 0; u < NU; u++) {
            for (k = 0; k < NK[u]; k++) {
              dW[y][u][k] += delta[y] * I1[i++];
            }
          }
        }

    Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y]. In reality, dW and delta do not overlap in memory, so I rewrote the loop as

        for (y = 0; y < NY; y++) {
          i = 0;
          Dy = delta[y];
          for (u = 0; u < NU; u++) {
            for (k = 0; k < NK[u]; k++) {
              dW[y][u][k] += Dy * I1[i++];
            }
          }
        }

    Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler can not determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays:

        #define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + (i)*(a)->strides[3] + (j)*(a)->strides[2] + (k)*(a)->strides[1])

    The macro is too complex for the compiler to understand, and so it does not identify any subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define

        a0 = MAT4D(a,q,0,j,k)

    before the loop and then replace all instances of

        *MAT4D(a,q,i,j,k)

    in the loop with

        a0[i]

    A similar problem appears in code 6, a Fortran program. The key loop in this program is

        do n1 = 1, nh
          nx1 = (n1 - 1) / nz + 1
          nz1 = n1 - nz * (nx1 - 1)
          do n2 = 1, nh
            nx2 = (n2 - 1) / nz + 1
            nz2 = n2 - nz * (nx2 - 1)
            ndx = nx2 - nx1
            ndy = nz2 - nz1
            gxx = grn(1,ndx,ndy)
            gyy = grn(2,ndx,ndy)
            gxy = grn(3,ndx,ndy)
            balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1
            balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy) * h1
          end do
        end do

    The programmer has written this loop well—there are no loop invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time and the index calculations are independent with respect to time. Trading space for time, I precomputed the index values prior to entering the time loop and stored the values in two arrays. I then replaced the index calculations with reads of the arrays.

    Data operations

    Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 <= i < N, 0 <= j < M, for most values of i and j. Since the program computes most of the NM possible inner products, it is more efficient to compute all the inner products in one triply-nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling.

        for (i = 0; i < N; i += 8) {
          for (j = 0; j < M; j++) {
            sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
            sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
            for (k = 0; k < K; k++) {
              sum0 += A[i+0][k] * B[j][k];
              sum1 += A[i+1][k] * B[j][k];
              sum2 += A[i+2][k] * B[j][k];
              sum3 += A[i+3][k] * B[j][k];
              sum4 += A[i+4][k] * B[j][k];
              sum5 += A[i+5][k] * B[j][k];
              sum6 += A[i+6][k] * B[j][k];
              sum7 += A[i+7][k] * B[j][k];
            }
            C[i+0][j] = sum0;
            C[i+1][j] = sum1;
            C[i+2][j] = sum2;
            C[i+3][j] = sum3;
            C[i+4][j] = sum4;
            C[i+5][j] = sum5;
            C[i+6][j] = sum6;
            C[i+7][j] = sum7;
          }
        }

    This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer.

    In code 5, we have the data version of the index optimization in code 6. Here a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time.

    The main loop in code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index, while others are a function of the outer loop index but not the inner loop index:

        for (j = 0; j < N; j++) {
          for (i = 0; i < M; i++) {
            r = i * hrmax;
            R = A[j];
            temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]);
            high = temp * kcoeff * B[j] * PRM[2] * PRM[4];
            low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0));
            kap = (R > PRM[6]) ? high * R * R / (1.0 + pow(PRM[4]*r, 2.0))
                               : low * pow(R/PRM[6], PRM[5]);
            < rest of loop omitted >
          }
        }

    Note that the value of temp is invariant to j. Thus, we can hoist the computation for temp out of the loop and save its values in an array:

        for (i = 0; i < M; i++) {
          r = i * hrmax;
          TEMP[i] = pow(r, PRM[3]);
        }

    [N.B. – the case for PRM[3] = 0 is omitted and will be reintroduced later.] We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is

        for (j = 0; j < N; j++) {
          R = rig[j] / 1000.;
          tmp1 = kcoeff * par[2] * beta[j] * par[4];
          tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]);
          tmp3 = 1.0 + (par[4] * par[4] * R * R);
          tmp4 = par[6] * par[6] / tmp2;
          tmp5 = R * R / tmp3;
          tmp6 = pow(R / par[6], par[5]);
          if ((par[3] == 0.0) && (R > par[6])) {
            for (i = 1; i <= imax1; i++)
              KAP[i] = tmp1 * tmp5;
          } else if ((par[3] == 0.0) && (R <= par[6])) {
            for (i = 1; i <= imax1; i++)
              KAP[i] = tmp1 * tmp4 * tmp6;
          } else if ((par[3] != 0.0) && (R > par[6])) {
            for (i = 1; i <= imax1; i++)
              KAP[i] = tmp1 * TEMP[i] * tmp5;
          } else if ((par[3] != 0.0) && (R <= par[6])) {
            for (i = 1; i <= imax1; i++)
              KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6;
          }
          for (i = 0; i < M; i++) {
            kap = KAP[i];
            r = i * hrmax;
            < rest of loop omitted >
          }
        }

    Maybe not the prettiest piece of code, but certainly much more efficient than the original loop.

    Copy operations

    Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages.

    Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying; but pointers is not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2). Then store the problem's initial values in (_, _, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays.

    The old–new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset data structures. Where unnecessary copying did occur is in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers.

    Optimizing loop structures

    Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate or isolate conditionals to their own loops as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet:

        MaxDelta = 0.0
        do J = 1, N
          do I = 1, M
            < code omitted >
            Delta = abs(OldValue - NewValue)
            if (Delta > MaxDelta) MaxDelta = Delta
          enddo
        enddo
        if (MaxDelta .gt. 0.001) goto 200

    Since the only use of MaxDelta is to control the jump to 200, and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as

        MaxDelta = .false.
        do J = 1, N
          do I = 1, M
            < code omitted >
            Delta = abs(OldValue - NewValue)
            MaxDelta = MaxDelta .or. (Delta .gt. 0.001)
          enddo
        enddo
        if (MaxDelta) goto 200

    thereby eliminating the conditional expression from the inner loop.

    A microprocessor can execute many instructions per instruction cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefited from loop unrolling, but none benefited from loop fusion. This observation is not too surprising since it is the general tendency of programmers to write thick loops.

    As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops, eliminating the need to write and read temporary arrays. I found such an occasion in code 10 where I split the loop

        do i = 1, n
          do j = 1, m
            A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
            B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
            A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
            B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
            C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
            D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
            C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
            D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
          end do
        end do

    into two disjoint loops

        do i = 1, n
          do j = 1, m
            A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
            B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
            A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
            B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
          end do
        end do

        do i = 1, n
          do j = 1, m
            C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
            D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
            C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
            D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
          end do
        end do

    Conclusions

    Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to improve significantly the single processor performance of all codes. Improvements range from 2x to 15.5x with a simple average of 4.75x. Changes to the source code were at a very high level. I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel despite the availability of parallel systems to all developers.

    Clearly, we have a problem—personal scientific research codes are highly inefficient and not running parallel. The developers are unaware of simple optimization techniques to make programs run faster. They lack education in the art of code optimization and parallel programming. I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in question have not studied the books or manuals available, and are unlikely to do so in the future. Short courses are a possible solution, but I believe they are too concentrated to be of much use. The general concepts can be taught in a three or four day course, but that is not enough time for students to practice what they learn and acquire the experience to apply and extend the concepts to their codes. Practice is the key to becoming proficient at optimization.

    I recommend that graduate students be required to take a semester length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is at most an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or can use the system effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research as well as the development of most personal scientific codes. These agencies should require graduate schools to offer a course in optimization and parallel programming as a requirement for funding.

    About the Author

    John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company where he was project manager for the MTA, and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.

    Read the article
