Search Results

Search found 22104 results on 885 pages for 'programming language'.


  • Career Shifters: How to compete with IT/ComSci graduates

    - by CareerShifter
    I am wondering what the chances are for a career shifter (mid 20s) with maybe 3-6 months of programming experience versus younger, fresh IT/CompSci graduates. Even though I really love programming (Java/J2EE), nobody gives me any feedback when I apply online, maybe because they prefer IT/CompSci graduates over a career shifter like me. Can you advise me on how to improve my chances of being hired? How can I get real-world programming experience if nobody is hiring me? I can build my own projects (a working e-commerce site, and so on), but that is still different from a real job. My code works, but it still needs a lot of improvement, and no one can tell me how to improve it because no one sees it (because I'm working alone?). Do you know any open source projects (Java/J2EE/JEE) or online home-based jobs that accept Java/J2EE/JEE trainees? Thank you very much.

    Read the article

  • Avoid penalties for duplicate (multilanguage) shared hosting

    - by Dave
    My concern is about SEO. Let me explain the scenario. I am building a three-language website. Development is going fine, but I am targeting local customers with one domain and international (English) visitors with another. E.g.: Local http://www.minhalojadesapatos.com.br (this is not the real website, just an example!) Other http://www.myshoesstore.com.br Both domains point to exactly the same hosting and content, but when a user arrives through the local domain the default language is Portuguese; otherwise the default is English. Language handling on the backend uses PHP sessions and cookies, so users can change the content language with a single click. How do I avoid being SEO-penalised for duplicate content in this setup? (Yes, I was greedy in targeting two markets with two domains, but the business really needs it; it is a travel agency.)

    Read the article

  • Why did the team at LMAX use Java and design the architecture to avoid GC at all cost?

    - by kadaj
    Why did the team at LMAX design the LMAX Disruptor in Java when their whole design aims to minimize GC use? If one does not want the GC to run, why use a garbage-collected language at all? Their optimizations, their level of hardware knowledge, and the thought they put into it are just awesome, but why Java? I'm not against Java or anything, but why a GC language? Why not use something like D or another language without GC that still allows efficient code? Is it that the team is most familiar with Java, or does Java possess some unique advantage that I am not seeing? Say they had developed it in D with manual memory management: what would be the difference? They would have to think at a low level (which they already do), but they could squeeze the best performance out of the system since it is native.

    Read the article

  • I'm a beginner Java programmer but I want to be useful

    - by user105418
    Programming has always interested me, but after learning some of the basics of Java (I'm talking high school level), I don't really know what to do from there. I want to be able to apply what I learned in some way, whether on a volunteer project or something similar, but I probably don't know enough programming. Is it possible for a novice Java programmer to be useful in some way whatsoever? I want to do this because I feel like I could learn more about programming by helping people with their projects, but I'm not sure if I'm even able to do this. Does anyone have any suggestions on how I can contribute to other people's projects, or otherwise apply what I know?

    Read the article

  • Job offer in a dead technology

    - by bold
    I have a job offer in a dead technology (a specific programming language) that I don't want to work with, nor do I believe it will offer many jobs in the future. It requires travelling abroad twice a year, which is not a plus in my eyes. On the other hand, the money on the table is high. What would you do? Edit: as it wasn't clear, the offer is in a programming language different from the one I worked with academically. Now I see it as a mistake to head in that direction.

    Read the article

  • Becoming an expert vs boredom [closed]

    - by QAH
    I am a college student, and I love to program, period. I code all kinds of things in different kinds of languages. Although I enjoy programming, I have an extremely hard time sticking to one project for a long time. I attribute this shortcoming to my high level of curiosity, exploring different technologies, languages, libraries, etc. What would be best? Should I settle down more and spend time on becoming an expert in one or two programming fields, or should I be more of a jack of all trades, trying out all kinds of new technologies, languages, programming methods, etc.? I'm guessing that somewhere in the middle would be best. I'm always amazed at how many developers are able to create one or two projects, and develop on them for years. What techniques do you guys employ to help you stay focused on a project?

    Read the article

  • Climbing the hacker ladder

    - by cobie
    This is not a question in which I am asking for opinions; rather, I am asking for first-hand experience. I have been programming in Python for quite a while and I feel solid enough in Python programming. I can come up with algorithms for problems and implement them, but I somehow feel I am stuck at remaining an apprentice. What are some first-hand experiences of how to climb up the ladder and become better at programming, for example by learning about browser security, compilers, etc.? Personal experiences would be valued in responses.

    Read the article

  • Should I continue to learn and program using ANSI C or non-standard C? [closed]

    - by Erik
    I am a CS student, enthusiastic about programming. C is the first programming language I learned (I had never been exposed to programming, data structures, or algorithms before). I failed the exam because I studied on my own and didn't know how to use non-standard Windows libraries like conio.h (I had to draw circles, etc., using getx and gety). I told them, but they just don't care. What do I do? For a beginner this is very confusing. Do I continue to study ANSI C on my own, or should I study non-standard C? Should I do them in parallel? *I did use the search bar but found nothing useful that could help me.

    Read the article

  • Useful skills from a computer science degree

    - by Tom Squires
    I did my degree in physics and later moved into programming. I have two and a half years of experience under my belt and like to think I write good code. I am, however, concerned that not doing a CompSci degree has left holes in my knowledge, and I would like to fill those in now, since I know I want to be programming for the rest of my career. What skills/techniques did you learn in your CompSci degree that one wouldn't pick up from on-the-job programming?

    Read the article

  • Why do we need private variables?

    - by rak
    Why do we need private variables in classes, in the context of programming? Every book on programming I've read says "this is a private variable, this is how you define it" but stops there. The wording of these explanations always seemed to me as if we really have a crisis of trust in our profession; the explanations always sounded as if other programmers were out to mess up our code. Yet there are many programming languages that do not have private variables. What do private variables help prevent? How do you decide whether a particular property should be private or not? If by default every field SHOULD be private, then why are there public data members in a class at all? Under what circumstances should a variable be made public?
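    For a concrete picture of what a private field buys you, here is a small C# sketch; the BankAccount class is an illustrative example, not taken from any particular book. Because the field is private, the only way callers can change the balance is through methods that enforce the class's rules.

    using System;

    public class BankAccount
    {
        private decimal balance;            // private: only BankAccount's own code can touch it

        public decimal Balance
        {
            get { return balance; }         // reading is public...
        }

        public void Deposit(decimal amount)
        {
            if (amount <= 0) throw new ArgumentOutOfRangeException("amount");
            balance += amount;              // ...but every write goes through validation
        }

        public void Withdraw(decimal amount)
        {
            if (amount <= 0 || amount > balance)
                throw new InvalidOperationException("Invalid or unaffordable withdrawal");
            balance -= amount;              // the invariant "balance never goes negative" holds here
        }
    }

    If balance were public, any code anywhere could set it to -500 and the invariant would silently break. Making it private is less about distrusting other programmers and more about keeping every rule concerning the data in one place, so the compiler points you back to the class whenever that rule needs to change.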

    Read the article

  • What are the steps in making an operating system in C? [duplicate]

    - by ps06756
    This question already has an answer here: Compiler/OS Design - Where to start [closed] 3 answers. I am trying to make my own OS. This is for educational purposes only, so that I get to understand the internals and get a good feel for low-level programming. I have some prior application development experience in C#/Python/C++/C. I am a noob at assembly language (very little experience and knowledge). I understand that in writing an operating system we can't avoid assembly language. Currently, I have just printed a string in assembly language in the boot sector, using qemu and BIOS interrupts. What I want is for someone to specifically point out the steps I need to follow to make my operating system run C programs, so that I can start writing my OS in C. Any other advice to help a newbie with the same goal is also welcome. Although I have looked into many OS development tutorials and websites, I can't seem to find this information anywhere.

    Read the article

  • 27 technical videos from Qt DevDays 2005, 2006 and 2008 have now been made public by Qt eLearning

    The Qt eLearning team has for some time been working to recover the technical videos from past Qt DevDays conferences so that they could be shared with everyone. That has now been done, with 27 technical presentations published online, amounting to 22 hours and 30 minutes of video. The topics covered are still relevant today, even though the framework has evolved over the years. 2005: All About Qt Widgets, Effective Graphics Programming, Practical Model/View Programming, Threaded Programming with Qt - Good Practise, Writing Custom Styles with QStyle, Writing plugin applications with Qt. 2006: Advanced Item Views...

    Read the article

  • The new Facebook Pages removed publication localization (translation)? [closed]

    - by Myka Eyl
    On the previous version of Facebook Pages I was able to specify in which language (or even which countries) I wanted to publish a post. I've spent a lot of time searching everywhere and I cannot find anything like that since the timeline landed on Facebook Pages. Do you know how I could publish something on a Facebook Page for only ONE language? Publishing things in several languages could make the page fans angry (too many new items that they cannot read) and lead them to unlike the page. I would really appreciate knowing how to find language-specific options for new publications on a Facebook Page (I am an admin of that page, of course). Any help will be greatly appreciated! Thank you.

    Read the article

  • Windows Phone App with 4SQ

    - by Nuttanon Pornpipak
    I want to create my own coffee shop app as a semester project. It's a Windows Phone app. The app should, for example, show who is checked in right now, show the menu, and show photos, using the 4SQ (Foursquare) endpoint APIs. My problem is that I don't know how to start: which book should I read about C#, and which keywords should I google, e.g. GET/POST methods, JSON? I have used the 4SQ endpoint APIs once before with JavaScript (jQuery's $.ajax({...})) to get data from them. So I googled and found the JSON.NET library, but I don't know how to use it because I have never programmed in C#; I'm just beginning programming and can only program in C. Thank you, and sorry for my bad grammar.
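    To get oriented, here is a minimal C# sketch of the two things the question circles around: an HTTP GET against a Foursquare (4SQ) v2 endpoint and parsing the JSON response with JSON.NET. It is a plain console-style sketch rather than Windows Phone code, and CLIENT_ID, CLIENT_SECRET and VENUE_ID are placeholders for your own registered app's values.

    using System;
    using System.Net;
    using Newtonsoft.Json.Linq;   // JSON.NET

    class FoursquareDemo
    {
        static void Main()
        {
            // Foursquare v2 venue endpoint; client_id/client_secret identify your app,
            // and v= is the API version date the request was written against.
            string url = "https://api.foursquare.com/v2/venues/VENUE_ID"
                       + "?client_id=CLIENT_ID&client_secret=CLIENT_SECRET&v=20130101";

            using (var client = new WebClient())
            {
                string json = client.DownloadString(url);   // plain HTTP GET

                // Parse the JSON text with JSON.NET instead of hand-rolling string parsing.
                JObject parsed = JObject.Parse(json);
                string name = (string)parsed["response"]["venue"]["name"];
                Console.WriteLine("Venue: " + name);
            }
        }
    }

    On Windows Phone the request has to be made asynchronously (for example with WebClient.DownloadStringAsync and its completed event), but the shape of the URL and the JSON.NET parsing stay the same.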

    Read the article

  • For ASP.NET MVC, is this a three-tier solution?

    - by bbb
    I am an ASP.NET MVC programmer, and when I start a project I do this: I make a class library named Model for my models; I make a class library named Infrastructure.Repository for database access; I make a class library named Application for the business logic layer; and finally I make an MVC project for the UI. But now some things are confusing me. Am I using 3-tier programming? If so, what is n-tier programming, and which one is better? If not, what is 3-tier programming? Somewhere I have seen the tiers named DAL and BIZ. Which naming is correct according to convention?
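    To make the layering concrete, here is a rough C# sketch of how the projects described above map onto the classic layers; the Product and ProductService names are made up for illustration, not a convention. The repository is the data access layer (often abbreviated DAL), the Application library is the business logic layer (often BLL or BIZ), and the MVC controller is the presentation layer.

    using System.Collections.Generic;
    using System.Linq;
    using System.Web.Mvc;

    // Model class library: plain entities shared by all layers.
    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public decimal Price { get; set; }
    }

    // Infrastructure.Repository class library: data access layer (DAL).
    public interface IProductRepository
    {
        IEnumerable<Product> GetAll();
    }

    // Application class library: business logic layer (BLL / BIZ).
    public class ProductService
    {
        private readonly IProductRepository repository;
        public ProductService(IProductRepository repository) { this.repository = repository; }

        public IEnumerable<Product> GetFeaturedProducts()
        {
            // Business rules live here, not in the controller or the repository.
            return repository.GetAll().Where(p => p.Price > 100m);
        }
    }

    // MVC project: presentation layer (UI).
    public class ProductsController : Controller
    {
        private readonly ProductService service;
        public ProductsController(ProductService service) { this.service = service; }

        public ActionResult Index()
        {
            return View(service.GetFeaturedProducts());
        }
    }

    Whether this counts as "three-tier" or just "three-layer" usually comes down to deployment: class libraries inside one web application are layers on a single tier, and "n-tier" simply generalizes the idea to any number of physically separated tiers.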

    Read the article

  • Which C# Book to take?

    - by Fischkopf
    I was searching for a book to learn C#, but now I'm kind of stuck. I found many people asking the same question, and many people gave answers, but there are so many books about C# that it is really hard to decide which one to pick. I have now narrowed my choice down to two books, but I just can't decide between them. Namely: Programming C# 4.0 and C# 4.0 in a Nutshell. The first thing I want to know is, are these good choices? I'm not completely new to programming, but I just hadn't found the right language until now, and I think C# is the one I was searching for. I know all the basic stuff from Delphi/Java/Python, so I think I'm not a complete beginner at programming. Is there anyone out there who has read both books and can clearly explain the difference between them? I haven't found many reviews and the like, so I just don't know which one to choose. Or is there another book that would suit me better?

    Read the article

  • PASS: Bylaw Change 2013

    - by Bill Graziano
    PASS launched a Global Growth Initiative in the Summer of 2011 with the appointment of three international Board advisors.  Since then we’ve thought and talked extensively about how we make PASS more relevant to our members outside the US and Canada.  We’ve collected much of that discussion in our Global Growth site.  You can find vision documents, plans, governance proposals, feedback sites, and transcripts of Twitter chats and town hall meetings.  We also address these plans at the Board Q&A during the 2012 Summit. One of the biggest changes coming out of this process is around how we elect Board members.  And that requires a change to the bylaws.  We published the proposed bylaw changes as a red-lined document so you can clearly see the changes.  Our goal in these bylaw changes was to address the changes required by the global growth initiatives, conduct a legal review of the document and address other minor issues in the document.  There are numerous small wording changes throughout the document.  For example, we replaced every reference of “The Corporation” with the word “PASS” so it now reads “PASS is organized…”. Board Composition The biggest change in these bylaw changes is how the Board is composed and elected.  This discussion starts in section VI.2.  This section now says that some elected directors will come from geographic regions.  I think this is the best way to make sure we give all of our members a voice in the leadership of the organization.  The key parts of this section are: The remaining Directors (i.e. the non-Officer Directors and non-Vendor Appointed Directors) shall be elected by the voting membership (“Elected Directors”). Elected Directors shall include representatives of defined PASS regions (“Regions”) as set forth below (“Regional Directors”) and at minimum one (1) additional Director-at-Large whose selection is not limited by region. Regional Directors shall include, but are not limited to, two (2) seats for the Region covering Canada and the United States of America. Additional Regions for the purpose of electing additional Regional Directors and additional Director-at-Large seats for the purpose of expanding the Board shall be defined by a majority vote of the current Board of Directors and must be established prior to the public call for nominations in the general election. Previously defined Regions and seats approved by the Board of Directors shall remain in effect and can only be modified by a 2/3 majority vote by the then current Board of Directors. Currently PASS has six At-Large Directors elected by the members.  These changes allow for a Regional Director position that is elected by the members but must come from a particular region.  It also stipulates that there must always be at least one Director-at-Large who can come from any region. We also understand that PASS is currently a very US-centric organization.  Our Summit is held in America, roughly half our chapters are in the US and Canada and most of the Board members over the last ten years have come from America.  We wanted to reflect that by making sure that our US and Canadian volunteers would continue to play a significant role by ensuring that two Regional seats are reserved specifically for Canada and the US. Other than that, the bylaws don’t create any specific regional seats.  These rules allow us to create Regional Director seats but don’t require it.  
We haven’t fully discussed what the criteria will be in order for a region to have a seat designated for it or how many regions there will be.  In our discussions we’ve broadly discussed regions for United States and Canada Europe, Middle East, and Africa (EMEA) Australia, New Zealand and Asia (also known as Asia Pacific or APAC) Mexico, South America, and Central America (LATAM) As you can see, our thinking is that there will be a few large regions.  I’ve also considered a non-North America region that we can gradually split into the regions above as our membership grows in those areas.  The regions will be defined by a policy document that will be published prior to the elections. I’m hoping that over the next year we can begin to publish more of what we do as Board-approved policy documents. While the bylaws only require a single non-region specific At-large Director, I would expect we would always have two.  That way we can have one in each election.  I think it’s important that we always have one seat open that anyone who is eligible to run for the Board can contest.  The Board is required to have any regions defined prior to the start of the election process. Board Elections – Regional Seats We spent a lot of time discussing how the elections would work for these Regional Director seats.  Ultimately we decided that the simplest solution is that every PASS member should vote for every open seat.  Section VIII.3 reads: Candidates who are eligible (i.e. eligible to serve in such capacity subject to the criteria set forth herein or adopted by the Board of Directors) shall be designated to fill open Board seats in the following order of priority on the basis of total votes received: (i) full term Regional Director seats, (ii) full term Director-at-Large seats, (iii) not full term (vacated) Regional Director seats, (iv) not full term (vacated) Director-at-Large seats. For the purposes of clarity, because of eligibility requirements, it is contemplated that the candidates designated to the open Board seats may not receive more votes than certain other candidates who are not selected to the Board. We debated whether to have multiple ballots or one single ballot.  Multiple ballot elections get complicated quickly.  Let’s say we have a ballot for US/Canada and one for Region 2.  After that we’d need a mechanism to merge those two together and come up with the winner of the at-large seat or have another election for the at-large position.  We think the best way to do this is a single ballot and putting the highest vote getters into the most restrictive seats.  Let’s look at an example: There are seats open for Region 1, Region 2 and at-large.  The election results are as follows: Candidate A (eligible for Region 1) – 550 votes Candidate B (eligible for Region 1) – 525 votes Candidate C (eligible for Region 1) – 475 votes Candidate D (eligible for Region 2) – 125 votes Candidate E (eligible for Region 2) – 75 votes In this case, Candidate A is the winner for Region 1 and is assigned that seat.  Candidate D is the winner for Region 2 and is assigned that seat.  The at-large seat is filled by the high remaining vote getter which is Candidate B. The key point to understand is that we may have a situation where a person with a lower vote total is elected to a regional seat and a person with a higher vote total is excluded.  This will be true whether we had multiple ballots or a single ballot.  Board Elections – Vacant Seats The other change to the election process is for vacant Board seats.  
The actual changes are sprinkled throughout the document. Previously we didn’t have a mechanism that allowed for an election of a Board seat that we knew would be vacant in the future.  The most common case is when a Board members moves to an Officer role in the middle of their term.  One of the key changes is to allow the number of votes members have to match the number of open seats.  This allows each voter to express their preference on all open seats.  This only applies when we know about the opening prior to the call for nominations.  This all means that if there’s a seat will be open at the start of the next Board term, and we know about it prior to the call for nominations, we can include that seat in the elections.  Ultimately, the aim is to have PASS members decide who sits on the Board in as many situations as possible. We discussed the option of changing the bylaws to just take next highest vote-getter in all other cases.  I think that’s wrong for the following reasons: All voters aren’t able to express an opinion on all candidates.  If there are five people running for three seats, you can only vote for three.  You have no way to express your preference between #4 and #5. Different candidates may have different information about the number of seats available.  A person may learn that a Board member plans to resign at the end of the year prior to that information being made public. They may understand that the top four vote getters will end up on the Board while the rest of the members believe there are only three openings.  This may affect someone’s decision to run.  I don’t think this creates a transparent, fair election. Board members may use their knowledge of the election results to decide whether to remain on the Board or not.  Admittedly this one is unlikely but I don’t want to create a situation where this accusation can be leveled. I think the majority of vacancies in the future will be handled through elections.  The bylaw section quoted above also indicates that partial term vacancies will be filled after the full term seats are filled. Removing Directors Section VI.7 on removing directors has always had a clause that allowed members to remove an elected director.  We also had a clause that allowed appointed directors to be removed.  We added a clause that allows the Board to remove for cause any director with a 2/3 majority vote.  The updated text reads: Any Director may be removed for cause by a 2/3 majority vote of the Board of Directors whenever in its judgment the best interests of PASS would be served thereby. Notwithstanding the foregoing, the authority of any Director to act as in an official capacity as a Director or Officer of PASS may be suspended by the Board of Directors for cause. Cause for suspension or removal of a Director shall include but not be limited to failure to meet any Board-approved performance expectations or the presence of a reason for suspension or dismissal as listed in Addendum B of these Bylaws. The first paragraph is updated and the second and third are unchanged (except cleaning up language).  If you scroll down and look at Addendum B of these bylaws you find the following: Cause for suspension or dismissal of a member of the Board of Directors may include: Inability to attend Board meetings on a regular basis. Inability or unwillingness to act in a capacity designated by the Board of Directors. Failure to fulfill the responsibilities of the office. 
Inability to represent the Region elected to represent Failure to act in a manner consistent with PASS's Bylaws and/or policies. Misrepresentation of responsibility and/or authority. Misrepresentation of PASS. Unresolved conflict of interests with Board responsibilities. Breach of confidentiality. The bold line about your inability to represent your region is what we added to the bylaws in this revision.  We also added a clause to section VII.3 allowing the Board to remove an officer.  That clause is much less restrictive.  It doesn’t require cause and only requires a simple majority. The Board of Directors may remove any Officer whenever in their judgment the best interests of PASS shall be served by such removal. Other There are numerous other small changes throughout the document. Proxy voting.  The laws around how members and Board members proxy votes are specific in Illinois law.  PASS is an Illinois corporation and is subject to Illinois laws.  We changed section IV.5 to come into compliance with those laws.  Specifically this says you can only vote through a proxy if you have a written proxy through your authorized attorney.  English language proficiency.  As we increase our global footprint we come across more members that aren’t native English speakers.  The business of PASS is conducted in English and it’s important that our Board members speak English.  If we get big enough to afford translators, we may be able to relax this but right now we need English language skills for effective Board members. Committees.  The language around committees in section IX is old and dated.  Our lawyers advised us to clean it up.  This section specifically applies to any committees that the Board may form outside of portfolios.  We removed the term limits, quorum and vacancies clause.  We don’t currently have any committees that this would apply to.  The Nominating Committee is covered elsewhere in the bylaws. Electronic Votes.  The change allows the Board to vote via email but the results must be unanimous.  This is to conform with Illinois state law. Immediate Past President.  There was no mechanism to fill the IPP role if an outgoing President chose not to participate.  We changed section VII.8 to allow the Board to invite any previous President to fill the role by majority vote. Nominations Committee.  We’ve opened the language to allow for the transparent election of the Nominations Committee as outlined by the 2011 Election Review Committee. Revocation of Charters. The language surrounding the revocation of charters for local groups was flagged by the lawyers. We have allowed for the local user group to make all necessary payment before considering returning of items to PASS if required. Bylaw notification. We’ve spent countless meetings working on these bylaws with the intent to not open them again any time in the near future. Should the bylaws be opened again, we have included a clause ensuring that the PASS membership is involved. I’m proud that the Board has remained committed to transparency and accountability to members. This clause will require that same level of commitment in the future even when all the current Board members have rolled off. I think that covers everything.  I’d encourage you to look through the red-line document and see the changes.  It’s helpful to look at the language that’s being removed and the language that’s being added.  I’m happy to answer any questions here or you can email them to [email protected].
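As a purely illustrative aside (not part of PASS's process or tooling), the single-ballot rule described above under "Board Elections – Regional Seats" can be written out as a short C# sketch, using the same candidate names and vote totals as the worked example: regional seats are filled first by the top eligible vote-getters, and the at-large seat goes to the highest remaining candidate.

using System;
using System.Collections.Generic;
using System.Linq;

class Candidate
{
    public string Name;
    public string Region;   // the region this candidate is eligible to represent
    public int Votes;
}

class SeatAssignment
{
    static void Main()
    {
        var candidates = new List<Candidate>
        {
            new Candidate { Name = "A", Region = "Region 1", Votes = 550 },
            new Candidate { Name = "B", Region = "Region 1", Votes = 525 },
            new Candidate { Name = "C", Region = "Region 1", Votes = 475 },
            new Candidate { Name = "D", Region = "Region 2", Votes = 125 },
            new Candidate { Name = "E", Region = "Region 2", Votes = 75 }
        };

        var elected = new List<Candidate>();

        // Most restrictive seats first: each regional seat goes to the top vote-getter
        // eligible for that region who has not already been seated.
        foreach (var region in new[] { "Region 1", "Region 2" })
        {
            var winner = candidates
                .Where(c => c.Region == region && !elected.Contains(c))
                .OrderByDescending(c => c.Votes)
                .First();
            elected.Add(winner);
            Console.WriteLine("{0}: Candidate {1} ({2} votes)", region, winner.Name, winner.Votes);
        }

        // Director-at-Large: highest remaining vote-getter, regardless of region.
        var atLarge = candidates
            .Where(c => !elected.Contains(c))
            .OrderByDescending(c => c.Votes)
            .First();
        Console.WriteLine("At-large: Candidate {0} ({1} votes)", atLarge.Name, atLarge.Votes);

        // Prints: Region 1 -> A, Region 2 -> D, At-large -> B, matching the example above.
    }
}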

    Read the article

  • Start Learning Ruby with IronRuby – Setting up the Environment

    - by kazimanzurrashid
    Recently I decided to learn Ruby, and for the last few days I have been playing with IronRuby. Learning something new is always fun, and when it comes to an adorable language like Ruby it becomes even more entertaining. Like any other language, first we have to set up the development environment. In order to run IronRuby we have to download the binaries from the IronRuby CodePlex project. IronRuby supports both .NET 2.0 and .NET 4, but .NET 4 is the recommended version; you can download either the installer or the zip file. If you download the zip file, make sure you add the bin directory to the PATH environment variable. Once you are done, open up the command prompt and type:

    ir -v

    It should print a message like: IronRuby 1.0.0.0 on .NET 4.0.30319.1. ir is the 32-bit version of IronRuby; if you want to use 64-bit you can try ir64.

    Next, we have to find an editor where we can write our Ruby code, as there is currently no integration story for IronRuby with Visual Studio like its twin IronPython. Among the free IDEs only SharpDevelop has IronRuby support, but it has no auto-complete or debugging built in; the only thing it supports is syntax highlighting, so a text editor with the same features is no different. To play with IronRuby we will be using Notepad++, which can be downloaded from its sourceforge download page. Notepad++ has nice syntax highlighting support: I am using Vibrant Ink with some small modifications.

    The next thing we have to do is configure Notepad++ so that we can run a Ruby script in IronRuby from inside Notepad++. Let's create a batch (.bat) file, e.g. riir.bat, in the IronRuby bin directory with the following content:

    @echo off
    cls
    call ir %1
    pause

    This makes sure the console pauses once the script has run. Now click Run->Run in Notepad++; it will bring up the run dialog. Put the following command in the textbox:

    riir.bat "$(FULL_CURRENT_PATH)"

    Click Save, which will bring up another dialog. Type Iron Ruby, assign the shortcut ctrl + f5 (the same as Visual Studio's Start Without Debugging) and click OK. Once you are done you will find IronRuby in the Run menu. Now press ctrl + f5 and the Ruby script runs in IronRuby.

    There is one last thing we would like to add, which is a poor man's context-sensitive help. First, download the Ruby language help file (.chm) from the Ruby Installer site and extract it into a directory. Next we have to install the Language Help plug-in for Notepad++: click Plugins->Plugin Manager->Show Plugin Manager, scroll down until you find the plug-in in the list, check it and click Install. Once it is installed it will prompt you to restart Notepad++; click yes. When Notepad++ restarts, click Plugins->Language Help->Options->Add, enter the path to the chm file (its location will differ depending on where you extracted it) and click OK. Now when you put your caret in any Ruby keyword and press ctrl + f1 it will take you to the help topic for that keyword. For example, when my caret is on the each in the following code and I press ctrl + f1, it takes me to the each API doc of Array.

    def loop_demo
      (1..10).each { |n| puts n }
    end

    loop_demo

    That's it for today. Happy Ruby coding.

    Read the article

  • How John Got 15x Improvement Without Really Trying

    - by rchrd
    The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving. So I'm republishing it here.  How I Got 15x Improvement Without Really Trying John Feo, Sun Microsystems Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple general optimization techniques. Introduction Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and insure that monies supporting scientific research are used as effectively as possible. Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran. Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA. Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor-to-student or student-to-student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes. Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile. Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties. 
Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize. Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliation are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized. The study’s results show that personal scientific codes are running many times slower than they should and that the problem is pervasive. Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices; but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research. # cacheperformance redundantoperations loopstructures performanceimprovement 1 x x 15.5 2 x 2.8 3 x x 2.5 4 x 2.1 5 x x 2.0 6 x 5.0 7 x 5.8 8 x 6.3 9 2.2 10 x x 3.3 Table 1 — Area of improvement and performance gains of 10 codes The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiencies in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summaries the work and suggests a possible solution to the issues raised. Optimizing cache performance Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do. When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse. 
6 out of the 10 codes studied here benefited from such high level optimizations. Array Accesses The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing do I = 0, 1010, delta_x IM = I - delta_x IP = I + delta_x do J = 5, 995, delta_x JM = J - delta_x JP = J + delta_x T1 = CA1(IP, J) + CA1(I, JP) T2 = CA1(IM, J) + CA1(I, JM) S1 = T1 + T2 - 4 * CA1(I, J) CA(I, J) = CA1(I, J) + D * S1 end do end do In code 2, the culprit is conditionals do I = 1, N do J = 1, N If (IFLAG(I,J) .EQ. 0) then T1 = Value(I, J-1) T2 = Value(I-1, J) T3 = Value(I, J) T4 = Value(I+1, J) T5 = Value(I, J+1) Value(I,J) = 0.25 * (T1 + T2 + T5 + T4) Delta = ABS(T3 - Value(I,J)) If (Delta .GT. MaxDelta) MaxDelta = Delta endif enddo enddo I fixed both programs by inverting the loops by hand. Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10. Code 5 has four-dimensional arrays and loops are nested four deep. For all of the reasons cited above the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops is L1: for i L2: for i L3: for i for l for l for j for k for j for k for j for k for l So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now: i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops aligning the loop with cache. Code 5 is by far the most difficult of the four codes to optimize for array accesses; but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists. Array Strides When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, than the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes. 
Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into continuous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes do j = 1, GZ do i = 1, GZ T1 = CA(i+0, j-1) + CA(i-1, j+0) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) S1 = T1 + T4 - 4 * CA1(i+0, j+0) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 enddo enddo where CA and CA1 are compressed arrays of size GZ. Code 7 traverses a list of objects selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection. Data reuse In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3). In code 1, unrolling the outer and inner loop one iteration increases the number of result values computed by the loop body from 1 to 4, do J = 1, GZ-2, 2 do I = 1, GZ-2, 2 T1 = CA1(i+0, j-1) + CA1(i-1, j+0) T2 = CA1(i+1, j-1) + CA1(i+0, j+0) T3 = CA1(i+0, j+0) + CA1(i-1, j+1) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) T5 = CA1(i+2, j+0) + CA1(i+1, j+1) T6 = CA1(i+1, j+1) + CA1(i+0, j+2) T7 = CA1(i+2, j+1) + CA1(i+1, j+2) S1 = T1 + T4 - 4 * CA1(i+0, j+0) S2 = T2 + T5 - 4 * CA1(i+1, j+0) S3 = T3 + T6 - 4 * CA1(i+0, j+1) S4 = T4 + T7 - 4 * CA1(i+1, j+1) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2 CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3 CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4 enddo enddo The loop body executes 12 reads, whereas as the rolled loop shown in the previous section executes 20 reads to compute the same four values. 
In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before for (k = 0; k < NK[u]; k++) { sum = 0.0; for (y = 0; y < NY; y++) { sum += W[y][u][k] * delta[y]; } backprop[i++]=sum; } and after code for (k = 0; k < KK - 8; k+=8) { sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0; sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0; for (y = 0; y < NY; y++) { sum0 += W[y][0][k+0] * delta[y]; sum1 += W[y][0][k+1] * delta[y]; sum2 += W[y][0][k+2] * delta[y]; sum3 += W[y][0][k+3] * delta[y]; sum4 += W[y][0][k+4] * delta[y]; sum5 += W[y][0][k+5] * delta[y]; sum6 += W[y][0][k+6] * delta[y]; sum7 += W[y][0][k+7] * delta[y]; } backprop[k+0] = sum0; backprop[k+1] = sum1; backprop[k+2] = sum2; backprop[k+3] = sum3; backprop[k+4] = sum4; backprop[k+5] = sum5; backprop[k+6] = sum6; backprop[k+7] = sum7; } for one of the loops unrolled 8 times. Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends. Reducing instruction count Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques. The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent. Memory operations The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory. Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3 for (y = 0; y < NY; y++) { i = 0; for (u = 0; u < NU; u++) { for (k = 0; k < NK[u]; k++) { dW[y][u][k] += delta[y] * I1[i++]; } } } Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y]. 
In reality, dW and delta do not overlap in memory, so I rewrote the loop as for (y = 0; y < NY; y++) { i = 0; Dy = delta[y]; for (u = 0; u < NU; u++) { for (k = 0; k < NK[u]; k++) { dW[y][u][k] += Dy * I1[i++]; } } } Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler can not determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays #define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + (i)*(a)->strides[3] + (j)*(a)->strides[2] + (k)*(a)->strides[1]) The macro is too complex for the compiler to understand and so, it does not identify any subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define a0 = MAT4D(a,q,0,j,k) before the loop and then replace all instances of *MAT4D(a,q,i,j,k) in the loop with a0[i] A similar problem appears in code 6, a Fortran program. The key loop in this program is do n1 = 1, nh nx1 = (n1 - 1) / nz + 1 nz1 = n1 - nz * (nx1 - 1) do n2 = 1, nh nx2 = (n2 - 1) / nz + 1 nz2 = n2 - nz * (nx2 - 1) ndx = nx2 - nx1 ndy = nz2 - nz1 gxx = grn(1,ndx,ndy) gyy = grn(2,ndx,ndy) gxy = grn(3,ndx,ndy) balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1 balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy)*h1 end do end do The programmer has written this loop well—there are no loop invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time and the index calculations are independent with respect to time. Trading space for time, I precomputed the index values prior to the entering the time loop and stored the values in two arrays. I then replaced the index calculations with reads of the arrays. Data operations Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 = i < N, 0 = j < M, for most values of i and j. Since the program computes most of the NM possible inner products, it is more efficient to compute all the inner products in one triply-nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling. for (i = 0; i < N; i+=8) { for (j = 0; j < M; j++) { sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0; sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0; for (k = 0; k < K; k++) { sum0 += A[i+0][k] * B[j][k]; sum1 += A[i+1][k] * B[j][k]; sum2 += A[i+2][k] * B[j][k]; sum3 += A[i+3][k] * B[j][k]; sum4 += A[i+4][k] * B[j][k]; sum5 += A[i+5][k] * B[j][k]; sum6 += A[i+6][k] * B[j][k]; sum7 += A[i+7][k] * B[j][k]; } C[i+0][j] = sum0; C[i+1][j] = sum1; C[i+2][j] = sum2; C[i+3][j] = sum3; C[i+4][j] = sum4; C[i+5][j] = sum5; C[i+6][j] = sum6; C[i+7][j] = sum7; }} This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer. In code 5, we have the data version of the index optimization in code 6. 
Here a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time. The main loop in Code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index while others are a function of the outer loop index but not the inner loop index for (j = 0; j < N; j++) { for (i = 0; i < M; i++) { r = i * hrmax; R = A[j]; temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]); high = temp * kcoeff * B[j] * PRM[2] * PRM[4]; low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0)); kap = (R > PRM[6]) ? high * R * R / (1.0 + pow(PRM[4]*r, 2.0) : low * pow(R/PRM[6], PRM[5]); < rest of loop omitted > }} Note that the value of temp is invariant to j. Thus, we can hoist the computation for temp out of the loop and save its values in an array. for (i = 0; i < M; i++) { r = i * hrmax; TEMP[i] = pow(r, PRM[3]); } [N.B. – the case for PRM[3] = 0 is omitted and will be reintroduced later.] We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is for (j = 0; j < N; j++) { R = rig[j] / 1000.; tmp1 = kcoeff * par[2] * beta[j] * par[4]; tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]); tmp3 = 1.0 + (par[4] * par[4] * R * R); tmp4 = par[6] * par[6] / tmp2; tmp5 = R * R / tmp3; tmp6 = pow(R / par[6], par[5]); if ((par[3] == 0.0) && (R > par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp5; } else if ((par[3] == 0.0) && (R <= par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp4 * tmp6; } else if ((par[3] != 0.0) && (R > par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp5; } else if ((par[3] != 0.0) && (R <= par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6; } for (i = 0; i < M; i++) { kap = KAP[i]; r = i * hrmax; < rest of loop omitted > } } Maybe not the prettiest piece of code, but certainly much more efficient than the original loop, Copy operations Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages. Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying; but pointers is not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2). 
Then store the problem’s initial values in (_, _, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays. The old–new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset data structures. Where unnecessary coping did occur is in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers. Optimizing loop structures Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate or isolate conditionls to their own loops as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet MaxDelta = 0.0 do J = 1, N do I = 1, M < code omitted > Delta = abs(OldValue ? NewValue) if (Delta > MaxDelta) MaxDelta = Delta enddo enddo if (MaxDelta .gt. 0.001) goto 200 Since the only use of MaxDelta is to control the jump to 200 and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as MaxDelta = .false. do J = 1, N do I = 1, M < code omitted > Delta = abs(OldValue ? NewValue) MaxDelta = MaxDelta .or. (Delta .gt. 0.001) enddo enddo if (MaxDelta) goto 200 thereby, eliminating the conditional expression from the inner loop. A microprocessor can execute many instructions per instruction cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction level parallelism than thin loops. Moreover, they reduce memory traffice by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefitted from loop unrolling, but none benefitted from loop fusion. This observation is not too surpising since it is the general tendency of programmers to write thick loops. As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops eliminating the need to write and read temporary arrays. 
Optimizing loop structures

Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate conditionals or isolate them in their own loops as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet:

MaxDelta = 0.0
do J = 1, N
  do I = 1, M
    < code omitted >
    Delta = abs(OldValue - NewValue)
    if (Delta > MaxDelta) MaxDelta = Delta
  enddo
enddo
if (MaxDelta .gt. 0.001) goto 200

Since the only use of MaxDelta is to control the jump to 200, and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as

MaxDelta = .false.
do J = 1, N
  do I = 1, M
    < code omitted >
    Delta = abs(OldValue - NewValue)
    MaxDelta = MaxDelta .or. (Delta .gt. 0.001)
  enddo
enddo
if (MaxDelta) goto 200

thereby eliminating the conditional expression from the inner loop.

A microprocessor can execute many instructions per instruction cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefitted from loop unrolling, but none benefitted from loop fusion. This observation is not too surprising since it is the general tendency of programmers to write thick loops.
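As a reminder of what unrolling looks like, here is a minimal sketch, a simple dot product invented for illustration rather than taken from any of the studied codes, unrolled by four with a cleanup loop for the leftover iterations. The four partial sums are independent, which gives the processor more instruction level parallelism to exploit.

#include <stdio.h>

#define N 1003

int main(void) {
    static double x[N], y[N];
    double sum0 = 0.0, sum1 = 0.0, sum2 = 0.0, sum3 = 0.0;
    int i;

    for (i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }   /* arbitrary data */

    /* Main loop unrolled by four; the four accumulators are independent. */
    for (i = 0; i + 3 < N; i += 4) {
        sum0 += x[i]     * y[i];
        sum1 += x[i + 1] * y[i + 1];
        sum2 += x[i + 2] * y[i + 2];
        sum3 += x[i + 3] * y[i + 3];
    }
    /* Cleanup loop handles the iterations left over when N is not a multiple of 4. */
    for (; i < N; i++) sum0 += x[i] * y[i];

    printf("%f\n", sum0 + sum1 + sum2 + sum3);
    return 0;
}

Because floating point addition is not associative, the unrolled sum may differ in the last bits from the original; whether that matters depends on the application.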
As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops, eliminating the need to write and read temporary arrays. I found such an occasion in code 10 where I split the loop

do i = 1, n
  do j = 1, m
    A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
    B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
    A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
    B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
    C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
    D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
    C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
    D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
  end do
end do

into two disjoint loops

do i = 1, n
  do j = 1, m
    A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
    B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
    A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
    B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
  end do
end do

do i = 1, n
  do j = 1, m
    C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
    D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
    C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
    D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
  end do
end do

Conclusions

Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to improve significantly the single processor performance of all codes. Improvements range from 2x to 15.5x with a simple average of 4.75x. Changes to the source code were at a very high level. I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel despite the availability of parallel systems to all developers.

Clearly, we have a problem—personal scientific research codes are highly inefficient and are not running in parallel. The developers are unaware of simple optimization techniques to make programs run faster. They lack education in the art of code optimization and parallel programming. I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in question have not studied the books or manuals available, and are unlikely to do so in the future. Short courses are a possible solution, but I believe they are too concentrated to be of much use. The general concepts can be taught in a three or four day course, but that is not enough time for students to practice what they learn and acquire the experience to apply and extend the concepts to their codes. Practice is the key to becoming proficient at optimization.

I recommend that graduate students be required to take a semester length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is at most an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or that they can use it effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research as well as the development of most personal scientific codes. These agencies should require graduate schools to offer a course in optimization and parallel programming as a requirement for funding.

About the Author

John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company where he was project manager for the MTA, and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.

    Read the article

  • 24 hours to pass until 24 Hours of PASS

    - by Rob Farley
    There’s a bunch of stuff going on at the moment in the SQL world, so if you’ve missed this particular piece of news, let me tell you a bit about it. Twice a year, the SQL community puts on its biggest virtual event – 24 Hours of PASS. And the next one is tomorrow – March 21st, 2012. Twenty-four sessions, back-to-back, featuring a selection of some of the best presenters in the SQL world, speakers from all over the world, coming together in an online collaboration that so far has well over thirty thousand registrations across the presentations. Some people are signed up for all 24 sessions, some only one. Traditionally, LiveMeeting has been used as the platform for this event, but this year we’re going with a new platform – IBTalk. It promises big, and we’re hoping it won’t let us down. LiveMeeting has been great, and we thank Microsoft for providing it as a platform for the past few years. However, as the event has grown, we’ve found that a new idea is necessary. Last year a search was done for a new platform, and IBTalk ticked the right boxes. The feedback from the presenters and moderators so far has been overwhelmingly positive, and we’re hoping that this is going to really enhance the user experience. One of my favourite features of the platform is the language side. It provides a pretty good translation service. Users who join a session will see a flag on the left of the screen. If they click it, they can change the language to one of 15 on offer. Picking this changes all the labels on everything. It even translates the text in the Q&A window. What this means is that someone from Brazil can ask their question in Portuguese, and the presenter will see it in English. Then if the answer is typed in English, the questioner will be able to see the answer, also in Portuguese. Or they can switch to English to see it as the answerer typed it. I know there’s always the risk of bad translations going on, but I’ve heard good things about this translation service. But there’s more – IBTalk are providing staff to type up closed captioning live during the event. So if English isn’t your first language, don’t worry! Picking your language will also let you see subtitles in your chosen language. I’m hoping that this event is the start of PASS being able to reach people from all corners of the world. Wouldn’t it be great to find that this event is successful, and that the next 24HOP (later in the year, our Summit Preview event) has just as many non-English speakers tuning in as English speakers? If you haven’t been planning which sessions you’re going to attend, you really should get over to sqlpass.org/24hours and have a look through what’s on offer. There’s some amazing material from some of the industry’s brightest, covering a wide range of topics, from classic SQL areas to the brand new SQL 2012 features. There really should be something for every SQL professional. Check the time zones though – if you’re in the US you might be on Summer time, and an hour closer to GMT than normal. Massive thanks must go to Microsoft, SQL Sentry and Idera for sponsoring this event. Without sponsors we wouldn’t be able to put any of this on. These companies are helping 24HOP continue to grow into an event for the whole world. See you tomorrow! @rob_farley | #24hop | #sqlpass

    Read the article

  • Windows Azure Use Case: New Development

    - by BuckWoody
This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx Description: Computing platforms evolve over time. Originally computers were directed by hardware wiring - that is, the “code” was the path of the wiring that directed an electrical signal from one component to another, or in some cases a physical switch controlled the path. From there software was developed, first in a very low machine language, then when compilers were created, computer languages could more closely mimic written statements. These language statements can be compiled into the lower-level machine language still used by computers today. Microprocessors replaced logic circuits, sometimes with fewer instructions (Reduced Instruction Set Computing, RISC) and sometimes with more instructions (Complex Instruction Set Computing, CISC). The reason this history is important is that along each technology advancement, computer code has adapted. Writing software for a RISC architecture is significantly different than developing for a CISC architecture. And moving to a Distributed Architecture like Windows Azure also has specific implementation details that our code must follow. But why make a change? As I’ve described, we need to make the change to our code to follow advances in technology. There’s no point in change for its own sake, but as a new paradigm offers benefits to our users, it’s important for us to leverage those benefits where it makes sense. That’s most often done in new development projects. It’s a far simpler task to take a new project and adapt it to Windows Azure than to try and retrofit older code designed in a previous computing environment. We can still use the same coding languages (.NET, Java, C++) to write code for Windows Azure, but we need to think about the architecture of that code on a new project so that it runs in the most efficient, cost-effective way in a Distributed Architecture. As we receive new requests from the organization for new projects, a distributed architecture paradigm belongs in the decision matrix for the platform target. Implementation: When you are designing new applications for Windows Azure (or any distributed architecture) there are many important details to consider. But at the risk of over-simplification, there are three main concepts to learn and architect within the new code: Stateless Programming - Stateless programming is a prime concept within distributed architectures. Rather than each server owning the complete processing cycle, the information from an operation that needs to be retained (the “state”) should be persisted to another location (like storage) common to all machines involved in the process.  An interesting learning process for Stateless Programming (although not unique to this language type) is to learn Functional Programming. Server-Side Processing - Along with developing using a Stateless Design, the closer you can locate the code processing to the data, the less expensive and faster the code will run. When you control the network layer, this is less important, since you can send vast amounts of data between the server and client, allowing the client to perform processing. In a distributed architecture, you don’t always own the network, so its performance is unpredictable. 
Also, you may not be able to control the platform the user is on (such as a smartphone, PC or tablet), so it’s imperative to deliver only results and graphical elements where possible.  Token-Based Authentication - Also called “Claims-Based Authorization”, this code practice means instead of allowing a user to log on once and then running code in that context, a more granular level of security is used. A “token” or “claim”, often represented as a Certificate, is sent along with a series of requests, or even a single request. In other words, every call to the code is authenticated against the token, rather than allowing a user free rein within the code call. While this is more work initially, it can bring a greater level of security, and it is far more resilient to disconnections. Resources: See the references of “Nondistributed Deployment” and “Distributed Deployment” at the top of this article for more information with graphics:  http://msdn.microsoft.com/en-us/library/ee658120.aspx  Stack Overflow has a good thread on functional programming: http://stackoverflow.com/questions/844536/advantages-of-stateless-programming  Another good discussion on Stack Overflow on server-side processing is here: http://stackoverflow.com/questions/3064018/client-side-or-server-side-processing Claims Based Authorization is described here: http://msdn.microsoft.com/en-us/magazine/ee335707.aspx

    Read the article

  • regarding the Windows Phone 7 series, XNA and Visual Basic

    - by Chris Williams
as long as we're talking about VB... I figured I would share this as well. Hi everyone, I'm about to express a sentiment that might ruffle a few feathers, but I think most of you know me well enough to know I love like accept VB for what it is and that what I'm about to say is with good intentions. (The rest of you, who don't know me, please take my word for it.) The world is full of VB developers, I was one of them for a long time. I think it's safe to assume that none of us are ignorant people who require handholding. We're working professionals, making a living by using our skills as developers. I'm also willing to bet that quite a few of us are fluent in C# as well as VB. It may not be your preferred language, but many of you can do it and you prove that nearly every day. Honestly, I don't know ANY developers or consultants that have only known ONE language ever. So it pains me greatly when I see the word "CAN'T" being tossed around like a crutch... as in "we CAN'T develop for the windows phone or we CAN'T develop XNA games." At MIX, Microsoft hath decreed that C# is the language of choice for developing for the Windows Phone 7. I think it's a safe bet that you won't see VB support if it isn't there already. (Just like XNA... which is up to version 4.0 by now.)  So what? (Yeah... I said it.) I think everyone here can agree that actual coding is only one part of software design and development. There is nothing stopping ANY of you from beginning the process of designing your killer phone app, writing up specs, requirements, doing UI design, workflow, mockups, storyboards, art, etc.... None of these things are language dependent. IF by the time you've got that stuff out of the way, and there's still no VB support, then start doing some rapid prototyping of your app in C# (I know, I know... heresy!)  You still have to spend time learning how the phone does things, what UI tricks do what, what paradigms make sense, how to use the accelerometer and the tilt and the multitouch functionality. I can guarantee you that time spent doing this is a great investment, no matter WHAT extension your code files have. Eventually, you may have a working prototype. IF by this time, there's STILL no VB support... fret not, you've made significant progress on your app. You've designed it, prototyped it, figured out how to use the phone specific features... so you might as well finish it and pat yourself on the back for learning something new... and possibly being first to market with your new app. I'll be happy to argue any and all of these points online or off with anyone who cares to do so, but there is one undeniable point that you simply can't argue:  Your potential customers do not care AT ALL what programming language you used to write the app they are about to purchase. They care that it works. If your biggest concern is being first to market, then stop complaining and get busy because you're running out of time and the 3000+ people who were at MIX certainly aren't waiting for you. They've already started working on their apps.

    Read the article

  • Announcing Oracle Knowledge 8.5: Even Superheroes Need Upgrades

    - by Richard Lefebvre
It’s no secret that we like Iron Man here at Oracle. We've certainly got stuff in common: one of the world’s largest technology companies and one of the world’s strongest technology-driven superheroes. If you've seen the recent Iron Man movies, you might have even noticed some of our servers sitting in Tony Stark’s lab. Heck, our CEO made a cameo appearance in one of the movies. Yeah, we’re fans. Especially as Iron Man is a regular guy with some amazing technology – like us. But like all great things, even superheroes need upgrades, whether it’s their suit, their car or their space station. Oracle certainly has its share of advanced technology.  For example, Oracle acquired InQuira in 2011 after years of watching the company advance the science of Knowledge Management.  And it was some extremely super technology.  At that time, Forrester’s Kate Leggett wrote about it in ‘Standalone Knowledge Management Is Dead With Oracle's Announcement To Acquire InQuira’ saying ‘Knowledge, accessible via web self-service or agent UIs, is a critical customer service component for industries fielding repetitive questions about policies, procedures, products, and solutions.’  One short sentence that amounts to a very tall order.  Since the acquisition our KM scientists have been hard at work in their labs. Today Oracle announced its first major knowledge management release since its acquisition of InQuira: Oracle Knowledge 8.5. We’ve put a massively-upgraded supersuit on our KM solution because we still have bad guys to fight. And we are very proud to say that we went way beyond our original plans. So what, exactly, did we do in Oracle Knowledge 8.5? We did what any high-tech super-scientist would do. We made Oracle Knowledge smarter, stronger and faster. First, we gave Oracle Knowledge a stronger heart: Certified on Oracle technologies, including Oracle WebLogic Server, Oracle Business Intelligence, Oracle Exadata Database Machine and Oracle Exalogic Elastic Cloud. Huge scaling and performance improvements. Then we gave it a better reach: Improved iConnect functionality that delivers contextualized knowledge directly into CRM applications. Better content acquisition support across disparate sources. Enhanced Language Support including Natural Language search support for 16 Languages. Enhanced Keyword Search for 23 authoring languages, as well as enhanced out-of-the-box industry ontologies covering 14 languages. And finally we made Oracle Knowledge ridiculously smarter: Improved Natural Language Search and a new Contextual Answer Delivery that understands the true intent of each inquiry to deliver the best possible answers. AnswerFlow for Guided Navigation & Answer Delivery, a new application for guided troubleshooting and answer delivery. Knowledge Analytics standardized on Oracle’s Business Intelligence Enterprise Edition. Knowledge Analytics Dashboards optimized search and content creation through targeted, actionable insights. A new three-level language model "Global - Language - Locale" that provides an improved search experience for organizations with a global footprint. We believe that Oracle Knowledge 8.5 is the most sophisticated KM solution in existence today and we’ve worked very hard to help it fulfill the promise of KM: empowering customers and employees with deep insights wherever they need them. We hope you agree it’s a suit worth wearing. 
We are continuing to invest in Knowledge Management as it continues to be especially relevant today with the enterprise push for peer collaboration, crowd-sourced wisdom, agile innovation, social interaction channels, applied real-time analytics, and personalization. In fact, we believe that Knowledge Management is a critical part of the Customer Experience portfolio for success. From empowering employees, to empowering customers, to gaining the insights from interactions across all channels, businesses today cannot efficiently scale their efforts, strengthen their customer relationships or achieve their growth goals without a solid Knowledge Management foundation to build from. And like every good superhero saga, we’re not even close to being finished. Next we are taking Oracle Knowledge into the Cloud. Yes, we’re thinking what you’re thinking: ROCKET BOOTS! Stay tuned for the next adventure… By Nav Chakravarti, Vice-President, Product Management, CRM Knowledge and previously the CTO of InQuira, a knowledge management company acquired by Oracle in 2011

    Read the article
