Search Results

Search found 5015 results on 201 pages for 'compiler construction'.


  • AsyncBridge? Async on .NET 4.0 using VS11

    - by Alex.Davies
    I've just found something quite cool. It's a code snippet that lets you use the real VS 11 C# 5 compiler to write code that uses the async and await keywords, but to target .NET 4.0. It was published by Daniel Grunwald (from SharpDevelop). That means I can stop using the Async CTP for VS2010, which is not supported at all anymore, and a pain to install if you have Windows updates turned on. Obviously I couldn't ask all my users to install the .NET 4.5 beta, but .NET Demon is a VS 2010 extension, so we already have .NET 4.0. At the time of writing, VS 11 is still in beta, but hopefully it's stable enough for my team to use!
    I would have written the code myself, but I had the wrong impression that the C# 5 beta compiler only looked in mscorlib for the helper classes it needs to implement async methods. It turns out you can provide them yourself. You can get the code at https://gist.github.com/1961087 and just add it to your project; the compiler will apparently pick it up and use it to implement async/await. I'm at my parents' place for Easter without access to a machine with VS 11 to try it out. Let me know whether you get it to work!
    This reminds me of LINQBridge, which let us use C# 3 LINQ while requiring only .NET 2. We should stick up a webpage to explain it, with a nice easy DLL, put it in NuGet, and call it AsyncBridge. If you were really enthusiastic, you could re-implement the skeleton of the Task Parallel Library against .NET 2 to use async/await without even requiring .NET 4. Our usage stats suggest that practically everyone who uses Red Gate tools already has .NET 4 installed though, so I don't think I'll go to the effort.
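    As a rough illustration of what this unlocks, here is a minimal sketch (mine, not from the post; the type and method names are made up) of the kind of code that compiles against .NET 4.0 once the gist's helper classes are in the project and the VS 11 (C# 5) compiler is used. Task<T> already exists in .NET 4.0; only the async state-machine helpers come from the snippet.

        // Hedged sketch: assumes the AsyncBridge helper classes from the gist are in the project
        // and the code is compiled with the C# 5 compiler while targeting .NET 4.0.
        using System;
        using System.Threading.Tasks;

        static class AsyncBridgeDemo
        {
            static async Task<int> ComputeAsync()
            {
                // Task.Factory.StartNew exists in .NET 4.0; the await is rewritten by the
                // C# 5 compiler into calls on the helper types supplied by the snippet.
                int value = await Task.Factory.StartNew(() => 6 * 7);
                return value;
            }

            static void Main()
            {
                Console.WriteLine(ComputeAsync().Result); // prints 42
            }
        }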

    Read the article

  • On developing deep programming knowledge

    - by Robert Harvey
    Occasionally I see questions about edge cases and other weirdness on Stack Overflow that are easily answered by the likes of Jon Skeet and Eric Lippert, demonstrating a deep knowledge of the language and its many intricacies, like this one: You might think that in order to use a foreach loop, the collection you are iterating over must implement IEnumerable or IEnumerable<T>. But as it turns out, that is not actually a requirement. What is required is that the type of the collection must have a public method called GetEnumerator, and that must return some type that has a public property getter called Current and a public method MoveNext that returns a bool. If the compiler can determine that all of those requirements are met then the code is generated to use those methods. Only if those requirements are not met do we check to see if the object implements IEnumerable or IEnumerable<T>. That's cool stuff to know. I can understand why Eric knows this; he's on the compiler team, so he has to know. But what about those who demonstrate such deep knowledge who are not insiders? How do mere mortals (who are not on the C# compiler team) find out about stuff like this? Specifically, are there methods these folks use to systematically root out such knowledge, explore it and internalize it (make it their own)?
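    To make the quoted pattern concrete, here is a small hypothetical example (mine, not from the question): the Countdown type below implements neither IEnumerable nor IEnumerable<T>, yet foreach accepts it because the compiler finds the required members by name.

        // Countdown satisfies the foreach pattern without implementing any enumerable interface.
        using System;

        class Countdown
        {
            public CountdownEnumerator GetEnumerator() { return new CountdownEnumerator(3); }
        }

        class CountdownEnumerator
        {
            private int _current;
            public CountdownEnumerator(int start) { _current = start + 1; }
            public int Current { get { return _current; } }               // public property getter
            public bool MoveNext() { _current--; return _current >= 0; }  // public bool MoveNext
        }

        class Program
        {
            static void Main()
            {
                // Compiles because the pattern is satisfied; prints 3, 2, 1, 0.
                foreach (int i in new Countdown())
                    Console.WriteLine(i);
            }
        }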

    Read the article

  • How to fix legacy code that uses <string.h> unsafely?

    - by Snowbody
    We've got a bunch of legacy code, written in straight C (some of which is K&R!), which for many, many years has been compiled using Visual C 6.0 (circa 1998) on an XP machine. We realize this is unsustainable, and we're trying to move it to a modern compiler. Political constraints dictate that the most recent compiler allowed is VC++ 2005. When compiling the project, there are many warnings about the unsafe string manipulation functions used (sprintf(), strcpy(), etc.). Reviewing some of these places shows that the code is indeed unsafe; it does not check for buffer overflows. The compiler warning recommends that we move to using sprintf_s(), strcpy_s(), etc. However, these are Microsoft-created (and proprietary) functions and aren't available in (say) gcc (although we're primarily a Windows shop, we do have some clients on various flavors of *NIX). How ought we to proceed? I don't want to roll our own string libraries. I only want to go over the code once. I'd rather not switch to C++ if we can help it.

    Read the article

  • Including configuration files while compiling a Flex application with MXMLC

    - by Daniel
    Hello there. I'm using Flex SDK 3.5.0, Parsley 2.2.2, and Flash Builder 4. Down in my src folder (which is configured as part of the source path in Flash Builder), I have a logging.xml which I configure via Parsley: FlexLoggingXmlSupport.initialize(); XmlContextBuilder.build("com/company/product/util/log/logging.xml"); When I run my application through Flash Builder, the XmlContextBuilder seems to locate the logging.xml (the implementation is a regular URLLoader one). When I compile my application using MXMLC (whether in Ant or command-line), and then run the SWF, I get the following error: Cause(0): Error loading com/company/product/util/log/logging.xml: Error in URLLoader - cause: Error #2032: Stream Error. URL: file:///C|/workspace/folder01/product/target/com/company/product/util/log/logging.xml - cause: Error #2032: Stream Error. URL: file:///C|/workspace/folder01/product/target/com/company/product/util/log/logging.xml Here is the MXMLC tag in Ant: <mxmlc file="${product.src.dir}/com/company/product/view/Main.mxml" output="${product.target.dir}/${product.release.filename}" keep-generated-actionscript="false"> <load-config filename="${FLEX_HOME}/frameworks/flex-config.xml" /> <!-- source paths --> <source-path path-element="${FLEX_HOME}/frameworks" /> <compiler.source-path path-element="${product.src.dir}" /> <compiler.source-path path-element="${product.locale.dir}/{locale}" /> <compiler.library-path dir="${product.basedir}" append="true"> <include name="libs" /> </compiler.library-path> <warnings>false</warnings> <debug>false</debug> </mxmlc> And here is the command line: \mxmlc.exe -output "C:\temp\Rap.swf" -load-config "C:\Program Files\Adobe\Adobe Flash Builder 4 Plug-in\sdks\3.5.0\frameworks\flex-config.xml" -source-path "C:\Program Files\Adobe\Adobe Flash Builder 4 Plug-in\sdks\3.5.0\frameworks" C:\workspace\folder01\product\src C:\workspace\folder01\product\locale\en_US -library-path+=C:\workspace\folder01\product\libs -file-specs C:\workspace\folder01\product\src\com\company\product\view\main.mxml Now perhaps I don't get this correctly, but as far as I understand it, the SWF should be compiled with all of the resources in the paths I give MXMLC as source paths. For some reason it seems that the XML file is not compiled into the SWF, hence the relative path used by XmlContextBuilder isn't resolved successfully. I could not find any MXMLC argument that might solve this. I tried using the -dump-config option with the Flash Builder compiler, then giving that configuration to MXMLC, but it didn't work either. I tried providing XmlContextBuilder with an absolute path. That worked fine when I compiled with MXMLC via Ant, but still didn't work when I used MXMLC on the command line... I'd be happy to be enlightened here regarding all subjects - using MXMLC, accessing resources with relative paths, configuring logging in Parsley, etc. Many thanks in advance, Daniel

    Read the article

  • Compile AMR-NB codec with RVCT for WinCE/Windows Mobile

    - by pps
    Hello everybody. I'm working on the AMR speech codec (porting/optimization). I have an ARM-optimized (for WinCE) version from VoiceAge and I use it as a reference in performance testing. So far, the binary produced with my lib beats the other one by around 20-30%! I use VS2008, and I have limited control over the ARM instruction set I can generate with the Microsoft compiler. So I looked for an alternative compiler to see what the performance difference would be. I have the RVCT compiler, but it produces ELF binaries/object files. However, I run my test on a WinCE mobile phone (TyTn 2), so I need to find a way to run code compiled with RVCT on WinCE. Some of the options are: 1) produce an assembly listing (the -S option of armcc) and try to assemble it with some other assembler that can create COFF (the MS assembler for ARM); 2) compile and convert the generated ELF object file to a COFF object (it seems like objcopy from GNU binutils could help me with that); 3) use the fromelf utility supplied by RVCT to create a BIN file and somehow try to mangle the bits so I can execute them ;) My first attempt is to create a simple C++ file with one exported function, compile it with RVCT and then try to run that function on the smartphone. The emitted assembly cannot be assembled by the MS assembler (not only are they not compatible, but the MS assembler also rejects some of the instructions generated by the RVCT compiler; the ASR opcode in my case). Then I tried to convert the ELF object to COFF format, and I can't find any information on that. There is a GCC port for CE, and objcopy from that toolset is supposed to be able to do the task. However, I can't get it working. I tried different switches, but I have no idea what exactly I need to specify as bfdname for the input and output format. So, I couldn't get that working either. Dumping with fromelf and using the generated BIN file seems to be overkill, so I decided to ask you guys if there is anything I should try, or maybe someone has already done a similar task and could help me. Basically, all I want to do is compile my code with the RVCT compiler and see what the performance difference is. My code has zero dependencies on any C runtime functions. Thanks!

    Read the article

  • Another C datatypes question

    - by b-gen-jack-o-neill
    Hello. Well, I completely get the most basic datatypes of C, like short, int, long and float; to be exact, all the numerical types. These types need to be known to perform the right operations with the right numbers, for example to use the FPU to add two float numbers. So the compiler must know what the type is. But when it comes to characters, I am a little bit off. I know that the basic C datatype char is there for coding ASCII characters. But what I don't know is why you even need another datatype for characters. Why couldn't you just use a 1-byte integer value to store an ASCII character? If you call printf, you specify the datatype in the call, so you could tell printf that the integer represents an ASCII character. I don't know how cout resolves the datatype, but I guess you could just specify it somehow. Another thing is, when you want to use Unicode, you must use the datatype wchar_t. But what if I would like to use some other coding, for example ISO or a Windows code page, instead of UTF? Because wchar_t codes characters as UTF-16 or UTF-32 (I read it's compiler specific). And what if I wanted to use, for example, some imaginary new 8-byte text coding? What datatype should I use for it? I am actually pretty confused by this, because I always expected that if I want to use UTF-32 instead of ASCII, I just tell the compiler "get the UTF-32 value of the character I typed and save it into a 4-char field." I thought that text coding is to be dealt with at the end, by the print function for example; that I just need to specify the coding for the compiler to use, since Windows doesn't use ASCII in Win32 apps, so I guess the C compiler must convert the char I typed to ASCII from whatever the type is that Windows sends to the C editor. And the last thing is, what if I want to use, for example, a 25-byte integer for some high-precision math operations? C has no define-it-yourself datatype. Yes, I know that this would be difficult since all the math operations would need to be changed, because the CPU cannot add 25-byte numbers together. But is there a way to do it? Or is there some math library for it? What if I want to compute Pi to 1000000000000000 digits? :) I know my question is pretty long, but I just wanted to explain my thoughts the best I can in English; since it's not my native language it is difficult. And I believe there is a simple answer to my question(s), something I missed that explains everything. I read a lot about text coding and C tutorials, but nothing about this. Thank you for your time.

    Read the article

  • Envista: Coordinating Utilities with Oracle Spatial 11g

    - by stephen.garth
    It's annoying when the same streets seem to be perpetually dug up for utility construction or maintenance by your water or sewer department, electric utility, gas company or telephone company. Can't they do a better job of coordinating these activities? In this podcast, Marc Fagan, Executive VP of Product Management from Envista describes a Software-as-a-Service solution that Envista provides for utilities and public works departments to coordinate upcoming construction work, using Oracle Database 11g with Oracle Spatial. Each participating utility enters key data into the Web-based application, including when and where their work is to take place, and who to contact for more information. The data is then available on a common base map, enabling all participants to coordinate their activities, save money, and minimize inconvenience to their customers. Listen to the podcast. Find out more about Oracle Spatial 11g.

    Read the article

  • 'Make' command compiling errors

    - by G_T
    I'm trying to locally install a program which is written in C++. I have downloaded the program and am attempting to use the "make" command to compile the program as the program's instructions dictate. However when I do I get this error: /usr/include/stdc-predef.h:30:26: fatal error: bits/predefs.h: No such file or directory compilation terminated. Looking around on the internet, some people seem to address this problem with sudo apt-get install libc6-dev-i386 I checked to see if this package was installed and it was not. When I try to install it I get E: Unable to locate package libc6-dev-i386 I have already run sudo apt-get update. I'm sure this is a rookie question but any help is appreciated; I'm running 13.10 32-bit. UPDATE: I've tried other suggestions I've found on similar errors. All I have managed is a different but similar error. Here is what I get. Geoffrey@Geoffrey-Latitude-E6400:/usr/local/src/trinityrnaseq_r2013_08_14$ make Using gnu compiler for Inchworm and Chrysalis cd Inchworm && (test -e configure || autoreconf) \ && ./configure --prefix=`pwd` && make install checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for gawk... no checking for mawk... mawk checking whether make sets $(MAKE)... yes checking for g++... g++ checking for C++ compiler default output file name... a.out checking whether the C++ compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C++ compiler... yes checking whether g++ accepts -g... yes checking for style of include used by make... GNU checking dependency style of g++... gcc3 checking for library containing cos... none required configure: creating ./config.status config.status: creating Makefile config.status: creating src/Makefile config.status: creating config.h config.status: config.h is unchanged config.status: executing depfiles commands make[1]: Entering directory `/usr/local/src/trinityrnaseq_r2013_08_14/Inchworm' Making install in src make[2]: Entering directory `/usr/local/src/trinityrnaseq_r2013_08_14/Inchworm/src' if g++ -DHAVE_CONFIG_H -I. -I. -I.. -pedantic -fopenmp -Wall -Wextra -Wno-long-long -Wno-deprecated -m64 -g -O2 -MT Fasta_entry.o -MD -MP -MF ".deps/Fasta_entry.Tpo" -c -o Fasta_entry.o Fasta_entry.cpp; \ then mv -f ".deps/Fasta_entry.Tpo" ".deps/Fasta_entry.Po"; else rm -f ".deps/Fasta_entry.Tpo"; exit 1; fi In file included from Fasta_entry.hpp:4:0, from Fasta_entry.cpp:1: /usr/include/c++/4.8/string:38:28: fatal error: bits/c++config.h: No such file or directory #include <bits/c++config.h> ^ compilation terminated. make[2]: *** [Fasta_entry.o] Error 1 make[2]: Leaving directory `/usr/local/src/trinityrnaseq_r2013_08_14/Inchworm/src' make[1]: *** [install-recursive] Error 1 make[1]: Leaving directory `/usr/local/src/trinityrnaseq_r2013_08_14/Inchworm' make: *** [inchworm] Error 2

    Read the article

  • Error on installing SVN extension with pecl

    - by thedp
    Hello, I'm trying to install the following PHP extension: http://php.net/manual/en/book.svn.php But when I do pecl install svn-beta I receive an error message that it can't locate the svn_client.h file. I searched the net but couldn't find any useful reference to this error. Thank you for your help. Installation result: root@myUbuntu:/home/thedp# pecl install svn-beta downloading svn-0.5.1.tgz ... Starting to download svn-0.5.1.tgz (23,563 bytes) .....done: 23,563 bytes 4 source files, building running: phpize Configuring for: PHP Api Version: 20041225 Zend Module Api No: 20060613 Zend Extension Api No: 220060519 1. Please provide the prefix of Subversion installation : autodetect 1-1, 'all', 'abort', or Enter to continue: 1. Please provide the prefix of the APR installation used with Subversion : autodetect 1-1, 'all', 'abort', or Enter to continue: building in /var/tmp/pear-build-root/svn-0.5.1 running: /tmp/pear/temp/svn/configure --with-svn --with-svn-apr checking for grep that handles long lines and -e... /bin/grep checking for egrep... /bin/grep -E checking for a sed that does not truncate output... /bin/sed checking for gcc... gcc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking whether gcc and cc understand -c and -o together... yes checking for system library directory... lib checking if compiler supports -R... no checking if compiler supports -Wl,-rpath,... yes checking build system type... i686-pc-linux-gnu checking host system type... i686-pc-linux-gnu checking target system type... i686-pc-linux-gnu checking for PHP prefix... /usr checking for PHP includes... -I/usr/include/php5 -I/usr/include/php5/main -I/usr/include/php5/TSRM -I/usr/include/php5/Zend -I/usr/include/php5/ext -I/usr/include/php5/ext/date/lib -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 checking for PHP extension directory... /usr/lib/php5/20060613+lfs checking for PHP installed headers prefix... /usr/include/php5 checking for re2c... no configure: WARNING: You will need re2c 0.12.0 or later if you want to regenerate PHP parsers. checking for gawk... no checking for nawk... nawk checking if nawk is broken... no checking for svn support... yes, shared checking for specifying the location of apr for svn... yes, shared checking for svn includes... configure: error: failed to find svn_client.h ERROR: `/tmp/pear/temp/svn/configure --with-svn --with-svn-apr' failed

    Read the article

  • SQL SERVER – Three Methods to Insert Multiple Rows into Single Table – SQL in Sixty Seconds #024 – Video

    - by pinaldave
    One of the biggest asks I have always received from developers is whether there is any way to insert multiple rows into a single table in a single statement. Currently, when developers have to insert values into a table, they have to write multiple insert statements. First of all, this is not only boring, it is also very time consuming. Additionally, one has to repeat the same syntax so many times that the word boring becomes an understatement. In the following quick video we have demonstrated three different methods to insert multiple values into a single table. -- Insert Multiple Values into SQL Server CREATE TABLE #SQLAuthority (ID INT, Value VARCHAR(100)); Method 1: Traditional Method of INSERT…VALUES -- Method 1 - Traditional Insert INSERT INTO #SQLAuthority (ID, Value) VALUES (1, 'First'); INSERT INTO #SQLAuthority (ID, Value) VALUES (2, 'Second'); INSERT INTO #SQLAuthority (ID, Value) VALUES (3, 'Third'); Clean up -- Clean up TRUNCATE TABLE #SQLAuthority; Method 2: INSERT…SELECT -- Method 2 - Select Union Insert INSERT INTO #SQLAuthority (ID, Value) SELECT 1, 'First' UNION ALL SELECT 2, 'Second' UNION ALL SELECT 3, 'Third'; Clean up -- Clean up TRUNCATE TABLE #SQLAuthority; Method 3: SQL Server 2008+ Row Construction -- Method 3 - SQL Server 2008+ Row Construction INSERT INTO #SQLAuthority (ID, Value) VALUES (1, 'First'), (2, 'Second'), (3, 'Third'); Clean up -- Clean up DROP TABLE #SQLAuthority; Related Tips in SQL in Sixty Seconds: SQL SERVER – Insert Multiple Records Using One Insert Statement – Use of UNION ALL SQL SERVER – 2008 – Insert Multiple Records Using One Insert Statement – Use of Row Constructor I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. If we like your idea we promise to share educational material with you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video
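    For completeness, here is a hedged sketch (not from the original post) of issuing Method 3's row-constructor INSERT from application code via ADO.NET; the connection string and temp table are placeholders.

        // Sends the CREATE TABLE plus the multi-row INSERT in a single batch.
        using System.Data.SqlClient;

        class MultiRowInsertDemo
        {
            static void Main()
            {
                using (var connection = new SqlConnection(@"Server=.;Database=tempdb;Integrated Security=true"))
                using (var command = new SqlCommand(
                    "CREATE TABLE #SQLAuthority (ID INT, Value VARCHAR(100));" +
                    "INSERT INTO #SQLAuthority (ID, Value) VALUES (1, 'First'), (2, 'Second'), (3, 'Third');",
                    connection))
                {
                    connection.Open();
                    command.ExecuteNonQuery(); // one round trip inserts all three rows
                }
            }
        }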

    Read the article

  • How John Got 15x Improvement Without Really Trying

    - by rchrd
    The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving. So I'm republishing it here.  How I Got 15x Improvement Without Really Trying John Feo, Sun Microsystems Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple general optimization techniques. Introduction Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and insure that monies supporting scientific research are used as effectively as possible. Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran. Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA. Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor-to-student or student-to-student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes. Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile. Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties. 
Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize. Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliation are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized. The study’s results show that personal scientific codes are running many times slower than they should and that the problem is pervasive. Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices; but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research. # cacheperformance redundantoperations loopstructures performanceimprovement 1 x x 15.5 2 x 2.8 3 x x 2.5 4 x 2.1 5 x x 2.0 6 x 5.0 7 x 5.8 8 x 6.3 9 2.2 10 x x 3.3 Table 1 — Area of improvement and performance gains of 10 codes The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiencies in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summaries the work and suggests a possible solution to the issues raised. Optimizing cache performance Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do. When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse. 
6 out of the 10 codes studied here benefited from such high level optimizations. Array Accesses The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing do I = 0, 1010, delta_x IM = I - delta_x IP = I + delta_x do J = 5, 995, delta_x JM = J - delta_x JP = J + delta_x T1 = CA1(IP, J) + CA1(I, JP) T2 = CA1(IM, J) + CA1(I, JM) S1 = T1 + T2 - 4 * CA1(I, J) CA(I, J) = CA1(I, J) + D * S1 end do end do In code 2, the culprit is conditionals do I = 1, N do J = 1, N If (IFLAG(I,J) .EQ. 0) then T1 = Value(I, J-1) T2 = Value(I-1, J) T3 = Value(I, J) T4 = Value(I+1, J) T5 = Value(I, J+1) Value(I,J) = 0.25 * (T1 + T2 + T5 + T4) Delta = ABS(T3 - Value(I,J)) If (Delta .GT. MaxDelta) MaxDelta = Delta endif enddo enddo I fixed both programs by inverting the loops by hand. Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10. Code 5 has four-dimensional arrays and loops are nested four deep. For all of the reasons cited above the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops (outermost to innermost) is L1: for i, for l, for k, for j; L2: for i, for l, for j, for k; L3: for i, for j, for k, for l. So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now: i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops aligning the loop with cache. Code 5 is by far the most difficult of the four codes to optimize for array accesses; but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists. Array Strides When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, then the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes. 
Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into continuous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes do j = 1, GZ do i = 1, GZ T1 = CA(i+0, j-1) + CA(i-1, j+0) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) S1 = T1 + T4 - 4 * CA1(i+0, j+0) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 enddo enddo where CA and CA1 are compressed arrays of size GZ. Code 7 traverses a list of objects selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection. Data reuse In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3). In code 1, unrolling the outer and inner loop one iteration increases the number of result values computed by the loop body from 1 to 4, do J = 1, GZ-2, 2 do I = 1, GZ-2, 2 T1 = CA1(i+0, j-1) + CA1(i-1, j+0) T2 = CA1(i+1, j-1) + CA1(i+0, j+0) T3 = CA1(i+0, j+0) + CA1(i-1, j+1) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) T5 = CA1(i+2, j+0) + CA1(i+1, j+1) T6 = CA1(i+1, j+1) + CA1(i+0, j+2) T7 = CA1(i+2, j+1) + CA1(i+1, j+2) S1 = T1 + T4 - 4 * CA1(i+0, j+0) S2 = T2 + T5 - 4 * CA1(i+1, j+0) S3 = T3 + T6 - 4 * CA1(i+0, j+1) S4 = T4 + T7 - 4 * CA1(i+1, j+1) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2 CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3 CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4 enddo enddo The loop body executes 12 reads, whereas as the rolled loop shown in the previous section executes 20 reads to compute the same four values. 
In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before for (k = 0; k < NK[u]; k++) { sum = 0.0; for (y = 0; y < NY; y++) { sum += W[y][u][k] * delta[y]; } backprop[i++]=sum; } and after code for (k = 0; k < KK - 8; k+=8) { sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0; sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0; for (y = 0; y < NY; y++) { sum0 += W[y][0][k+0] * delta[y]; sum1 += W[y][0][k+1] * delta[y]; sum2 += W[y][0][k+2] * delta[y]; sum3 += W[y][0][k+3] * delta[y]; sum4 += W[y][0][k+4] * delta[y]; sum5 += W[y][0][k+5] * delta[y]; sum6 += W[y][0][k+6] * delta[y]; sum7 += W[y][0][k+7] * delta[y]; } backprop[k+0] = sum0; backprop[k+1] = sum1; backprop[k+2] = sum2; backprop[k+3] = sum3; backprop[k+4] = sum4; backprop[k+5] = sum5; backprop[k+6] = sum6; backprop[k+7] = sum7; } for one of the loops unrolled 8 times. Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends. Reducing instruction count Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques. The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent. Memory operations The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory. Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3 for (y = 0; y < NY; y++) { i = 0; for (u = 0; u < NU; u++) { for (k = 0; k < NK[u]; k++) { dW[y][u][k] += delta[y] * I1[i++]; } } } Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y]. 
In reality, dW and delta do not overlap in memory, so I rewrote the loop as for (y = 0; y < NY; y++) { i = 0; Dy = delta[y]; for (u = 0; u < NU; u++) { for (k = 0; k < NK[u]; k++) { dW[y][u][k] += Dy * I1[i++]; } } } Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler can not determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays #define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + (i)*(a)->strides[3] + (j)*(a)->strides[2] + (k)*(a)->strides[1]) The macro is too complex for the compiler to understand and so, it does not identify any subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define a0 = MAT4D(a,q,0,j,k) before the loop and then replace all instances of *MAT4D(a,q,i,j,k) in the loop with a0[i] A similar problem appears in code 6, a Fortran program. The key loop in this program is do n1 = 1, nh nx1 = (n1 - 1) / nz + 1 nz1 = n1 - nz * (nx1 - 1) do n2 = 1, nh nx2 = (n2 - 1) / nz + 1 nz2 = n2 - nz * (nx2 - 1) ndx = nx2 - nx1 ndy = nz2 - nz1 gxx = grn(1,ndx,ndy) gyy = grn(2,ndx,ndy) gxy = grn(3,ndx,ndy) balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1 balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy)*h1 end do end do The programmer has written this loop well—there are no loop invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time and the index calculations are independent with respect to time. Trading space for time, I precomputed the index values prior to the entering the time loop and stored the values in two arrays. I then replaced the index calculations with reads of the arrays. Data operations Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 = i < N, 0 = j < M, for most values of i and j. Since the program computes most of the NM possible inner products, it is more efficient to compute all the inner products in one triply-nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling. for (i = 0; i < N; i+=8) { for (j = 0; j < M; j++) { sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0; sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0; for (k = 0; k < K; k++) { sum0 += A[i+0][k] * B[j][k]; sum1 += A[i+1][k] * B[j][k]; sum2 += A[i+2][k] * B[j][k]; sum3 += A[i+3][k] * B[j][k]; sum4 += A[i+4][k] * B[j][k]; sum5 += A[i+5][k] * B[j][k]; sum6 += A[i+6][k] * B[j][k]; sum7 += A[i+7][k] * B[j][k]; } C[i+0][j] = sum0; C[i+1][j] = sum1; C[i+2][j] = sum2; C[i+3][j] = sum3; C[i+4][j] = sum4; C[i+5][j] = sum5; C[i+6][j] = sum6; C[i+7][j] = sum7; }} This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer. In code 5, we have the data version of the index optimization in code 6. 
Here a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time. The main loop in Code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index while others are a function of the outer loop index but not the inner loop index for (j = 0; j < N; j++) { for (i = 0; i < M; i++) { r = i * hrmax; R = A[j]; temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]); high = temp * kcoeff * B[j] * PRM[2] * PRM[4]; low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0)); kap = (R > PRM[6]) ? high * R * R / (1.0 + pow(PRM[4]*r, 2.0) : low * pow(R/PRM[6], PRM[5]); < rest of loop omitted > }} Note that the value of temp is invariant to j. Thus, we can hoist the computation for temp out of the loop and save its values in an array. for (i = 0; i < M; i++) { r = i * hrmax; TEMP[i] = pow(r, PRM[3]); } [N.B. – the case for PRM[3] = 0 is omitted and will be reintroduced later.] We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is for (j = 0; j < N; j++) { R = rig[j] / 1000.; tmp1 = kcoeff * par[2] * beta[j] * par[4]; tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]); tmp3 = 1.0 + (par[4] * par[4] * R * R); tmp4 = par[6] * par[6] / tmp2; tmp5 = R * R / tmp3; tmp6 = pow(R / par[6], par[5]); if ((par[3] == 0.0) && (R > par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp5; } else if ((par[3] == 0.0) && (R <= par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * tmp4 * tmp6; } else if ((par[3] != 0.0) && (R > par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp5; } else if ((par[3] != 0.0) && (R <= par[6])) { for (i = 1; i <= imax1; i++) KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6; } for (i = 0; i < M; i++) { kap = KAP[i]; r = i * hrmax; < rest of loop omitted > } } Maybe not the prettiest piece of code, but certainly much more efficient than the original loop, Copy operations Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages. Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying; but pointers is not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2). 
Then store the problem’s initial values in (_, _, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays. The old–new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset data structures. Where unnecessary copying did occur is in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers. Optimizing loop structures Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate or isolate conditionals to their own loops as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet MaxDelta = 0.0 do J = 1, N do I = 1, M < code omitted > Delta = abs(OldValue - NewValue) if (Delta > MaxDelta) MaxDelta = Delta enddo enddo if (MaxDelta .gt. 0.001) goto 200 Since the only use of MaxDelta is to control the jump to 200 and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as MaxDelta = .false. do J = 1, N do I = 1, M < code omitted > Delta = abs(OldValue - NewValue) MaxDelta = MaxDelta .or. (Delta .gt. 0.001) enddo enddo if (MaxDelta) goto 200 thereby eliminating the conditional expression from the inner loop. A microprocessor can execute many instructions per instruction cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefitted from loop unrolling, but none benefitted from loop fusion. This observation is not too surprising since it is the general tendency of programmers to write thick loops. As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops eliminating the need to write and read temporary arrays. 
I found such an occasion in code 10 where I split the loop do i = 1, n do j = 1, m A24(j,i)= S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i) B24(j,i)= S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i) A25(j,i)= S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i) B25(j,i)= S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i) C24(j,i)= S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i) D24(j,i)= S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i) C25(j,i)= S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i) D25(j,i)= S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i) end do end do into two disjoint loops do i = 1, n do j = 1, m A24(j,i)= S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i) B24(j,i)= S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i) A25(j,i)= S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i) B25(j,i)= S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i) end do end do do i = 1, n do j = 1, m C24(j,i)= S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i) D24(j,i)= S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i) C25(j,i)= S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i) D25(j,i)= S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i) end do end do Conclusions Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to improve significantly the single processor performance of all codes. Improvements range from 2x to 15.5x with a simple average of 4.75x. Changes to the source code were at a very high level. I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel despite the availability of parallel systems to all developers. Clearly, we have a problem—personal scientific research codes are highly inefficient and not running parallel. The developers are unaware of simple optimization techniques to make programs run faster. They lack education in the art of code optimization and parallel programming. I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in questions have not studied the books or manual available, and are unlikely to do so in the future. Short courses are a possible solution, but I believe they are too concentrated to be much use. The general concepts can be taught in a three or four day course, but that is not enough time for students to practice what they learn and acquire the experience to apply and extend the concepts to their codes. Practice is the key to becoming proficient at optimization. I recommend that graduate students be required to take a semester length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is at most an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or can use the system effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research as well as the development of most personal scientific codes. 
These agencies should require graduate schools to offer a course in optimization and parallel programming as a requirement for funding. About the Author John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company where he was project manager for the MTA, and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.

    Read the article

  • Creating my first F# program in my new “Expert F# Book”

    - by MarkPearl
    So I have a brief hour or so that I can dedicate today to reading my F# book. It’s a public holiday and my wife’s birthday and I have a ton of assignments for UNISA that I need to complete – but I just had to try something in F#. So I read chapter 1 – pretty much an introduction to the rest of the book – it looks good so far. Then I get to chapter 2, called “Getting Started with F# and .NET”. Great, there is a code sample on the first page of the chapter. So I open up VS2010 and create a new F# console project and type in the code which was meant to analyze a string for duplicate words… #light let wordCount text = let words = Split [' '] text let wordset = Set.ofList words let nWords = words.Length let nDups = words.Length - wordSet.Count (nWords, nDups) let showWordCount text = let nWords,nDups = wordCount text printfn "--> %d words in text" nWords printfn "--> %d duplicate words" nDups   So… bad start - VS does not like the “Split” method. It gives me an error message “The value constructor ‘Split’ is not defined”. It also doesn’t like wordSet.Count, telling me that the “namespace or module ‘wordSet’ is not defined”. ??? So a bit of googling and it turns out that there was a bit of shuffling of libraries between the CTP of F# and the Beta 2 of F#. To have access to the Split function you need to download the F# PowerPack and then reference it in your code… I download and install the PowerPack and then add the references to FSharp.Core and FSharp.PowerPack in my project. Still no luck! Some more googling and the suggestions I got were something like this… #r "FSharp.PowerPack.dll";; #r "FSharp.PowerPack.Compatibility.dll";; So I add the code above to the top of my Program.fs file and still no joy… I now get an error message saying… Error    1    #r directives may only occur in F# script files (extensions .fsx or .fsscript). Either move this code to a script file, add a '-r' compiler option for this reference or delimit the directive with '#if INTERACTIVE'/'#endif'. So what does that mean? If I put the code straight into the F# interactive it works – but I want to be able to use it in a project. The C# equivalent, I would think, would be the “using” keyword. The #r doesn’t seem like it should be in the F# code. So I try what the compiler suggests by doing the following… #if INTERACTIVE #r "FSharp.PowerPack.dll";; #r "FSharp.PowerPack.Compatibility.dll";; #endif No luck, the Split method is still not recognized. So wait a second, it mentioned something about FSharp.PowerPack.Compatibility.dll – I haven’t added this as a reference to my project, so I add it and remove the two lines of #r code. Partial success – the Split method is now recognized and not underlined, but wordSet.Count is still not working. I look at my code again and it was a case error – the original wordset was mistyped compared to the wordSet. Some case correction and the compiler is no longer complaining. So the code now seems to work… listed below… #light let wordCount text = let words = String.split [' '] text let wordSet = Set.ofList words let nWords = words.Length let nDups = words.Length - wordSet.Count (nWords, nDups) let showWordCount text = let nWords,nDups = wordCount text printfn "--> %d words in text" nWords printfn "--> %d duplicate words" nDups  So to recap – if I wanted to use the interactive compiler then I need to put in the #r code. In my mind this is the equivalent of me adding the references to my project. 
If however I want to use the powerpack in a project – I just need to make sure that the correct references are there. I feel like a noob once again!

    Read the article

  • Default Parameters vs Method Overloading

    - by João Angelo
    With default parameters introduced in C# 4.0 one might be tempted to abandon the old approach of providing method overloads to simulate default parameters. However, you must take into consideration that the two techniques are not interchangeable, since they show different behaviors in certain scenarios. For me the most relevant difference is that default parameters are a compile-time feature while method overloading is a runtime feature. To illustrate these concepts let's take a look at a complete, although a bit long, example. What you need to retain from the example is that the static method Foo uses method overloading while the static method Bar uses C# 4.0 default parameters.

        static void CreateCallerAssembly(string name)
        {
            // Caller class - Invokes Example.Foo() and Example.Bar()
            string callerCode = String.Concat(
                "using System;",
                "public class Caller",
                "{",
                " public void Print()",
                " {",
                " Console.WriteLine(Example.Foo());",
                " Console.WriteLine(Example.Bar());",
                " }",
                "}");

            var parameters = new CompilerParameters(new[] { "system.dll", "Common.dll" }, name);
            new CSharpCodeProvider().CompileAssemblyFromSource(parameters, callerCode);
        }

        static void Main()
        {
            // Example class - Foo uses overloading while Bar uses C# 4.0 default parameters
            string exampleCode = String.Concat(
                "using System;",
                "public class Example",
                "{{",
                " public static string Foo() {{ return Foo(\"{0}\"); }}",
                " public static string Foo(string key) {{ return \"FOO-\" + key; }}",
                " public static string Bar(string key = \"{0}\") {{ return \"BAR-\" + key; }}",
                "}}");

            var compiler = new CSharpCodeProvider();
            var parameters = new CompilerParameters(new[] { "system.dll" }, "Common.dll");

            // Build Common.dll with default value of "V1"
            compiler.CompileAssemblyFromSource(parameters, String.Format(exampleCode, "V1"));

            // Caller1 built against Common.dll that uses a default of "V1"
            CreateCallerAssembly("Caller1.dll");

            // Rebuild Common.dll with default value of "V2"
            compiler.CompileAssemblyFromSource(parameters, String.Format(exampleCode, "V2"));

            // Caller2 built against Common.dll that uses a default of "V2"
            CreateCallerAssembly("Caller2.dll");

            dynamic caller1 = Assembly.LoadFrom("Caller1.dll").CreateInstance("Caller");
            dynamic caller2 = Assembly.LoadFrom("Caller2.dll").CreateInstance("Caller");

            Console.WriteLine("Caller1.dll:");
            caller1.Print();
            Console.WriteLine("Caller2.dll:");
            caller2.Print();
        }

    And if you run this code you will get the following output:

        // Caller1.dll:
        // FOO-V2
        // BAR-V1
        // Caller2.dll:
        // FOO-V2
        // BAR-V2

    You see that even though Caller1.dll runs against the current Common.dll assembly, where method Bar defines a default value of "V2", the output shows us the default value defined at the time Caller1.dll was compiled against the first version of Common.dll. This happens because the compiler copies the current default value into each call site, much in the same way a constant value (const keyword) is copied into a calling assembly, and changes to its value will only be reflected if you rebuild the calling assembly again. The use of default parameters in public APIs is also discouraged by Microsoft, as stated in code analysis rule CA1026 (Default parameters should not be used).

    Read the article

  • Error on installing SVN extension with pecl

    - by thedp
    Hello, I'm trying to install the following PHP extension: http://php.net/manual/en/book.svn.php But when I do pecl install svn-beta I receive an error message that it can't locate the svn_client.h file. I searched the net but couldn't find any useful reference to this error. Thank you for your help. Installation result:

        root@myUbuntu:/home/thedp# pecl install svn-beta
        downloading svn-0.5.1.tgz ...
        Starting to download svn-0.5.1.tgz (23,563 bytes)
        .....done: 23,563 bytes
        4 source files, building
        running: phpize
        Configuring for:
        PHP Api Version:        20041225
        Zend Module Api No:     20060613
        Zend Extension Api No:  220060519
        1. Please provide the prefix of Subversion installation : autodetect
        1-1, 'all', 'abort', or Enter to continue:
        1. Please provide the prefix of the APR installation used with Subversion : autodetect
        1-1, 'all', 'abort', or Enter to continue:
        building in /var/tmp/pear-build-root/svn-0.5.1
        running: /tmp/pear/temp/svn/configure --with-svn --with-svn-apr
        checking for grep that handles long lines and -e... /bin/grep
        checking for egrep... /bin/grep -E
        checking for a sed that does not truncate output... /bin/sed
        checking for gcc... gcc
        checking for C compiler default output file name... a.out
        checking whether the C compiler works... yes
        checking whether we are cross compiling... no
        checking for suffix of executables...
        checking for suffix of object files... o
        checking whether we are using the GNU C compiler... yes
        checking whether gcc accepts -g... yes
        checking for gcc option to accept ISO C89... none needed
        checking whether gcc and cc understand -c and -o together... yes
        checking for system library directory... lib
        checking if compiler supports -R... no
        checking if compiler supports -Wl,-rpath,... yes
        checking build system type... i686-pc-linux-gnu
        checking host system type... i686-pc-linux-gnu
        checking target system type... i686-pc-linux-gnu
        checking for PHP prefix... /usr
        checking for PHP includes... -I/usr/include/php5 -I/usr/include/php5/main -I/usr/include/php5/TSRM -I/usr/include/php5/Zend -I/usr/include/php5/ext -I/usr/include/php5/ext/date/lib -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
        checking for PHP extension directory... /usr/lib/php5/20060613+lfs
        checking for PHP installed headers prefix... /usr/include/php5
        checking for re2c... no
        configure: WARNING: You will need re2c 0.12.0 or later if you want to regenerate PHP parsers.
        checking for gawk... no
        checking for nawk... nawk
        checking if nawk is broken... no
        checking for svn support... yes, shared
        checking for specifying the location of apr for svn... yes, shared
        checking for svn includes...
        configure: error: failed to find svn_client.h
        ERROR: `/tmp/pear/temp/svn/configure --with-svn --with-svn-apr' failed

    Read the article

  • Most efficient Implementation of a Tree in C++

    - by Topo
    I need to write a tree where each element may have any number of child elements, and because of this each branch of the tree may have any length. The tree is only going to receive elements at first, and then it is going to be used exclusively for iterating through its branches in no specific order. The tree will have several million elements and must be fast but also memory efficient. My plan is to make a node class to store the elements and the pointers to its children. When the tree is fully constructed, it would be transformed into an array or something faster and, if possible, loaded into the processor's cache. Construction and the search on the tree are two different problems. Can I focus on how to solve each problem in the best way individually? The construction has to be as fast as possible but it can use memory as it pleases. Then the transformation into a format that gives us speed when iterating the tree's branches; this should preferably be an array, to avoid going back and forth from RAM to cache for each element of the tree. So the real question is: which structure should I use to implement the tree to maximize insert speed, and how can I transform it into a structure that gives me the best speed and memory?
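
    Not from the original question – just a minimal sketch of one layout that fits the description above, assuming the element payload is a simple struct: build the tree with per-node child-index vectors, then flatten it in breadth-first order so every node's children end up contiguous in a single cache-friendly array.

        #include <cstddef>
        #include <vector>

        struct Element { int value; };                // hypothetical payload

        // Construction phase: cheap to append, children stored as indices.
        struct BuildNode {
            Element data;
            std::vector<std::size_t> children;        // indices into the build vector
        };

        // Iteration phase: one contiguous array, children of a node are adjacent.
        struct FlatNode {
            Element data;
            std::size_t firstChild;                   // index of first child in the flat array
            std::size_t childCount;
        };

        // Flatten in breadth-first order starting at 'root'.
        std::vector<FlatNode> flatten(const std::vector<BuildNode>& in, std::size_t root) {
            std::vector<FlatNode> out;
            std::vector<std::size_t> order;           // build-node index for each flat slot
            order.push_back(root);
            out.push_back(FlatNode{in[root].data, 0, 0});
            for (std::size_t i = 0; i < order.size(); ++i) {
                const BuildNode& b = in[order[i]];
                out[i].firstChild = out.size();       // children will start here
                out[i].childCount = b.children.size();
                for (std::size_t c : b.children) {    // children pushed back-to-back
                    order.push_back(c);
                    out.push_back(FlatNode{in[c].data, 0, 0});
                }
            }
            return out;
        }

    Iterating a node's children is then a tight loop from flat[i].firstChild to flat[i].firstChild + flat[i].childCount, with no pointer chasing.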

    Read the article

  • Maven error: Unable to get resource / Server redirected too many times

    - by tewe
    Our proxy went down and I tried to update dependencies with Maven while it was off. Since then I can't download anything with Maven; I get this error for everything. I tried the -U option, deleted my local repository and tried different Maven versions (2.0.9, 2.2.1), but it doesn't work. Any idea how to solve this? Earlier it also said 'repository will be blacklisted' for all of them.

        Downloading: http://repo1.maven.org/maven2/org/apache/maven/plugins/maven-compiler-plugin/2.1/maven-compiler-plugin-2.1.pom
        [WARNING] Unable to get resource 'org.apache.maven.plugins:maven-compiler-plugin:pom:2.1' from repository central (http://repo1.maven.org/maven2): Error transferring file: Server redirected too many times (20)
        org.apache.maven.plugins:maven-compiler-plugin:pom:2.1 from the specified remote repositories: jboss-snapshot (http://snapshots.jboss.org/maven2), central (http://repo1.maven.org/maven2), JBoss Repo (http://repository.jboss.com/maven2), spring-maven-snapshot (http://maven.springframework.org/snapshot), com.springsource.repository.bundles.external (http://repository.springsource.com/maven/bundles/external), com.springsource.repository.bundles.snapshot (http://repository.springsource.com/maven/bundles/snapshot), jboss (http://repository.jboss.com/maven2), com.springsource.repository.bundles.release (http://repository.springsource.com/maven/bundles/release), jboss-snapshot-plugins (http://snapshots.jboss.org/maven2), com.springsource.repository.bundles.milestone (http://repository.springsource.com/maven/bundles/milestone), jboss-plugins (http://repository.jboss.com/maven2)
            at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:228)
            at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:90)
            at org.apache.maven.project.DefaultMavenProjectBuilder.findModelFromRepository(DefaultMavenProjectBuilder.java:558)
            ... 25 more
        Caused by: org.apache.maven.wagon.ResourceDoesNotExistException: Unable to download the artifact from any repository
            at org.apache.maven.artifact.manager.DefaultWagonManager.getArtifact(DefaultWagonManager.java:404)
            at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:216)
            ... 27 more

    Read the article

  • cpptask ordering of static libraries in gcc command line

    - by AC
    How do I force cpptasks to move the static libraries to the end of the argument list issued to the compiler? Here is the clause I am using:

        <cpptasks:cc description="appname" subsystem="console" objdir="obj" outfile="dist/app_test">
            <compiler refid="testsslcc" />
            <linkerarg value="-L${libdir}" />
            <linkerarg value="-L/usr/local/devl/lib" />
            <linkerarg value="-Wl,-rpath,../lib" />
            <libset libs="unittest ${libs} dsg readline ncurses gcov" />
            <fileset dir="test/obj" includes="main.o" />
            <fileset dir="." includes="${TCFILES}" />
            <fileset dir="../lib" includes="libboost_thread.a libboost_date_time.a" />
        </cpptasks:cc>

    When this executes, libboost_thread.a and libboost_date_time.a are the first files in the argument list passed to the compiler:

        gcc -ggdb -Wl,-export-dynamic -Wshadow -Wno-format-y2k ../../lib/libboost_date_time.a ../../lib/libboost_thread.a x.cpp ...

    which causes a compiler error. By manually moving them to the end of the argument list, the application compiles without error:

        gcc -ggdb -Wl,-export-dynamic -Wshadow -Wno-format-y2k x.cpp ... ../../lib/libboost_date_time.a ../../lib/libboost_thread.a

    And yes, I have tried changing the order in the XML, and that of course didn't work. For now I am using an exec task to call gcc with the files in the correct order, but this of course is a hack.

    Read the article

  • Titanium won't run iPhone/Android Emulator

    - by BeOliveira
    I just installed Titanium SDK (1.5.1) and all the Android SDKs. Also, I already have iPhone SDK 4.2 installed. I downloaded KitchenSink and imported it into Titanium but whenever I try to run it on iPhone Emulator, I get this error:

        [INFO] One moment, building ...
        [INFO] Titanium SDK version: 1.5.1
        [INFO] iPhone Device family: iphone
        [INFO] iPhone SDK version: 4.0
        [INFO] Detected compiler plugin: ti.log/0.1
        [INFO] Compiler plugin loaded and working for ios
        [INFO] Performing clean build
        [INFO] Compiling localization files
        [INFO] Detected custom font: comic_zine_ot.otf
        [ERROR] Error: Traceback (most recent call last):
          File "/Library/Application Support/Titanium/mobilesdk/osx/1.5.1/iphone/builder.py", line 1003, in main
            execute_xcode("iphonesimulator%s" % iphone_version,["GCC_PREPROCESSOR_DEFINITIONS=LOG__ID=%s DEPLOYTYPE=development TI_DEVELOPMENT=1 DEBUG=1 TI_VERSION=%s" % (log_id,sdk_version)],False)
          File "/Library/Application Support/Titanium/mobilesdk/osx/1.5.1/iphone/builder.py", line 925, in execute_xcode
            output = run.run(args,False,False,o)
          File "/Library/Application Support/Titanium/mobilesdk/osx/1.5.1/iphone/run.py", line 31, in run
            sys.exit(rc)
        SystemExit: 1

    And for Android, it runs the OS but not the KitchenSink app, here's the log:

        [INFO] Launching Android emulator...one moment
        [INFO] Building KitchenSink for Android ... one moment
        [INFO] plugin=/Library/Application Support/Titanium/plugins/ti.log/0.1/plugin.py
        [INFO] Detected compiler plugin: ti.log/0.1
        [INFO] Compiler plugin loaded and working for android
        [INFO] Titanium SDK version: 1.5.1 (12/16/10 16:25 16bbb92)
        [INFO] Waiting for the Android Emulator to become available
        [ERROR] Timed out waiting for android.process.acore
        [INFO] Copying project resources..
        [INFO] Detected tiapp.xml change, forcing full re-build...
        [INFO] Compiling Javascript Resources ...
        [INFO] Copying platform-specific files ...
        [INFO] Compiling localization files
        [INFO] Compiling Android Resources... This could take some time

    Any ideas on how to get Titanium to work?

    Read the article

  • Delegates in .NET: how are they constructed?

    - by Saulius
    While inspecting delegates in C# and .NET in general, I noticed some interesting facts. Creating a delegate in C# creates a class derived from MulticastDelegate with a constructor:

        .method public hidebysig specialname rtspecialname
                instance void .ctor(object 'object', native int 'method') runtime managed
        {
        }

    Meaning that it expects the instance and a pointer to the method. Yet the syntax of constructing a delegate in C# suggests that it has a constructor new MyDelegate(int () target), where I can recognise int () as a function instance (int *target() would be a function pointer in C++). So obviously the C# compiler picks out the correct method from the method group defined by the function name and constructs the delegate. So the first question would be: where does the C# compiler (or Visual Studio, to be precise) pick this constructor signature from? I did not notice any special attributes or anything else that would make a distinction. Is this some sort of compiler/Visual Studio magic? If not, is the T (args) target construction valid in C#? I did not manage to get anything with it to compile, e.g.:

        int () target = MyMethod;

    is invalid, and so is doing anything with MyMethod, e.g. calling .ToString() on it (well, this does make some sense, since that is technically a method group, but I imagine it should be possible to explicitly pick out a method by casting, e.g. (int())MyFunction). So is all of this purely compiler magic? Looking at the construction through Reflector reveals yet another syntax:

        Func CS$1$0000 = new Func(null, (IntPtr) Foo);

    This is consistent with the disassembled constructor signature, yet it does not compile! One final interesting note is that the classes Delegate and MulticastDelegate have yet another set of constructors:

        .method family hidebysig specialname rtspecialname
                instance void .ctor(class System.Type target, string 'method') cil managed

    Where does the transition from an instance and method pointer to a type and a string method name occur? Can this be explained by the runtime managed keywords in the custom delegate constructor signature, i.e. does the runtime do its job here?

    Read the article

  • Covariance and Contravariance type inference in C# 4.0

    - by devoured elysium
    When we define our interfaces in C# 4.0, we are allowed to mark each of the generic parameters as in or out. If we try to set a generic parameter as out and that'd lead to a problem, the compiler raises an error, not allowing us to do that.

    Question: If the compiler has ways of inferring what are valid uses for both covariance (out) and contravariance (in), why do we have to mark interfaces as such? Wouldn't it be enough to just let us define the interfaces as we always did, and when we tried to use them in our client code, raise an error if we tried to use them in an unsafe way?

    Example:

        interface MyInterface<out T> {
            T abracadabra();
        } // works OK

        interface MyInterface2<in T> {
            T abracadabra();
        } // compiler raises an error.
          // This makes me think that the compiler is capable
          // of understanding what situations might generate
          // run-time problems and then prohibits them.

    Also, isn't this what Java does in the same situation? From what I recall, you just do something like

        IMyInterface<? extends whatever> myInterface;  // covariance
        IMyInterface<? super whatever> myInterface2;   // contravariance

    Or am I mixing things up? Thanks

    Read the article

  • Constructors and Destructors in C++ [Not a question] [closed]

    - by Jack
    I am using gcc. Please tell me if I am wrong. Let's say I have two classes, A and B:

        class A
        {
        public:
            A()  { cout << "A constructor" << endl; }
            ~A() { cout << "A destructor"  << endl; }
        };

        class B : public A
        {
        public:
            B()  { cout << "B constructor" << endl; }
            ~B() { cout << "B destructor"  << endl; }
        };

    1) The first line in B's constructor should be a call to A's constructor (I assume the compiler automatically inserts it). Also, the last line in B's destructor will be a call to A's destructor (the compiler inserts that as well). Why was it built this way?

    2) When I say A * a = new B(); the compiler creates a new B object and checks to see if A is a base class of B, and if it is, it allows 'a' to point to the newly created object. I guess that is why we don't need any virtual constructors. (With help from @Tyler McHenry and @Konrad Rudolph.)

    3) When I write delete a, the compiler sees that a is of type A*, so it calls A's destructor, leading to a problem which is solved by making A's destructor virtual. As user Little Bobby Tables pointed out to me, all destructors have the same name destroy() in memory, so we can implement virtual destructors, and now the call is made to B's destructor and all is well in C++ land.

    Please comment.
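
    Not part of the original post – just a small compilable sketch of point 3, showing the destruction order once A's destructor is made virtual (without the virtual, deleting a B through the A* would only run A's destructor, and deleting a derived object that way is undefined behavior):

        #include <iostream>

        struct A {
            A()          { std::cout << "A constructor\n"; }
            virtual ~A() { std::cout << "A destructor\n";  }   // virtual, as discussed in point 3
        };

        struct B : A {
            B()  { std::cout << "B constructor\n"; }
            ~B() { std::cout << "B destructor\n";  }
        };

        int main() {
            A* a = new B();   // prints "A constructor" then "B constructor"
            delete a;         // prints "B destructor" then "A destructor", because ~A is virtual
            return 0;
        }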

    Read the article

  • Nested Class member function can't access function of enclosing class. Why?

    - by Rahul
    Please see the example code below:

        class A
        {
        private:
            class B
            {
            public:
                void foobar();
            };

        public:
            void foo();
            void bar();
        };

    Within the implementation of classes A and B:

        void A::foo()
        {
            // do something
        }

        void A::bar()
        {
            // some code
            foo();
            // more code
        }

        void A::B::foobar()
        {
            // some code
            foo();   // << compiler doesn't like this
        }

    The compiler flags the call to foo() within the method foobar(). Earlier, I had foo() as a private member function of class A but changed it to public, assuming that B's function can't see it. Of course, it didn't help. I am trying to re-use the functionality provided by A's method. Why doesn't the compiler allow this function call? As I see it, they are part of the same enclosing class (A). I thought the accessibility issue for nested class members of the enclosing class in the C++ standard was resolved. How can I achieve what I am trying to do without re-writing the same method (foo()) for B, while keeping B nested within A? I am using the VC++ compiler ver-9 (Visual Studio 2008). Thank you for your help.
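
    Not from the original post – a sketch of one common fix, under the assumption that each B instance is meant to operate on a particular A instance: a nested class has access rights to its enclosing class, but it has no implicit pointer to an enclosing object, and foo() is a non-static member, so it needs an actual A to be called on. Handing B a reference to its owning A makes the call legal (B is public here only so the demo compiles on its own):

        #include <iostream>

        class A
        {
        public:
            void foo() { std::cout << "A::foo" << std::endl; }

            class B
            {
            public:
                explicit B(A& owner) : owner_(owner) {}
                void foobar() { owner_.foo(); }   // OK: call through a real A object
            private:
                A& owner_;                        // the enclosing instance this B works on
            };

            B makeB() { return B(*this); }        // hypothetical helper to tie a B to this A
        };

        int main()
        {
            A a;
            A::B b = a.makeB();
            b.foobar();                           // prints "A::foo"
            return 0;
        }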

    Read the article

  • How to use VC++ intrinsic functions w/o run-time library

    - by Adrian McCarthy
    I'm involved in one of those challenges where you try to produce the smallest possible binary, so I'm building my program without the C or C++ run-time libraries (RTL). I don't link to the DLL version or the static version. I don't even #include the header files. I have this working fine. For some code constructs, the compiler generates calls to memset(). For example:

        struct MyStruct
        {
            int foo;
            int bar;
        };

        MyStruct blah = {}; // calls memset()

    Since I don't include the RTL, this results in a missing symbol at link time. I've been getting around this by avoiding those constructs. For the given example, I'll explicitly initialize the struct:

        MyStruct blah;
        blah.foo = 0;
        blah.bar = 0;

    But memset() can be useful, so I tried adding my own implementation. It works fine in Debug builds, even for those places where the compiler generates an implicit call to memset(). But in Release builds, I get an error saying that I cannot define an intrinsic function. You see, in Release builds, intrinsic functions are enabled, and memset() is an intrinsic. I would love to use the intrinsic for memset() in my release builds, since it's probably inlined and smaller and faster than my implementation. But I seem to be in a catch-22. If I don't define memset(), the linker complains that it's undefined. If I do define it, the compiler complains that I cannot define an intrinsic function. I've tried adding #pragma intrinsic(memset), with and without declarations of memset, but no luck. Does anyone know the right combination of definition, declaration, #pragma, and compiler and linker flags to get an intrinsic function without pulling in RTL overhead? Visual Studio 2008, x86, Windows XP+.
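
    Not an answer from the post, and it gives up the inlining the author wants – but as a hedged sketch, the one combination I believe does at least link in this situation is #pragma function(memset), which (if I remember the MSVC behavior correctly) tells the compiler to emit real calls to memset instead of expanding the intrinsic, so a local definition no longer clashes:

        #include <cstddef>            // header-only, no run-time library needed, just for size_t

        #pragma function(memset)      // assumption: MSVC stops treating memset as an intrinsic here

        extern "C" void* memset(void* dest, int value, std::size_t count)
        {
            unsigned char* p = static_cast<unsigned char*>(dest);
            while (count--)
            {
                *p++ = static_cast<unsigned char>(value);
            }
            return dest;
        }

    The trade-off is that every zero-initialization goes through this out-of-line routine, so it answers the link error but not the "keep the intrinsic" half of the question.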

    Read the article

  • C++ Switch won't compile with externally defined variable used as case

    - by C Nielsen
    I'm writing C++ using the MinGW GNU compiler, and the problem occurs when I try to use an externally defined integer variable as a case in a switch statement. I get the following compiler error: "case label does not reduce to an integer constant". Because I've defined the integer variable as extern, I believe that it should compile. Does anyone know what the problem may be? Below is an example:

    test.cpp

        #include <iostream>
        #include "x_def.h"

        int main()
        {
            std::cout << "Main Entered" << std::endl;
            switch(0)
            {
                case test_int:
                    std::cout << "Case X" << std::endl;
                    break;
                default:
                    std::cout << "Case Default" << std::endl;
                    break;
            }
            return 0;
        }

    x_def.h

        extern const int test_int;

    x_def.cpp

        const int test_int = 0;

    This code will compile correctly on Visual C++ 2008. Furthermore, a Montanan friend of mine checked the ISO C++ standard and it appears that any const-integer expression should work. Is this possibly a compiler bug or have I missed something obvious? Here's my compiler version information:

        Reading specs from C:/MinGW/bin/../lib/gcc/mingw32/3.4.5/specs
        Configured with: ../gcc-3.4.5-20060117-3/configure --with-gcc --with-gnu-ld --with-gnu-as --host=mingw32 --target=mingw32 --prefix=/mingw --enable-threads --disable-nls --enable-languages=c,c++,f77,ada,objc,java --disable-win32-registry --disable-shared --enable-sjlj-exceptions --enable-libgcj --disable-java-awt --without-x --enable-java-gc=boehm --disable-libgcj-debug --enable-interpreter --enable-hash-synchronization --enable-libstdcxx-debug
        Thread model: win32
        gcc version 3.4.5 (mingw-vista special r3)
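
    Not part of the original question – a sketch of the usual explanation and workaround, assuming the three files above: a case label needs a constant expression, and a const int whose initializer lives in a different translation unit is not a constant expression in test.cpp, because the compiler cannot see the value there. Putting the value where every translation unit can see it restores the constant expression:

        // x_def.h – option 1: define the constant in the header.
        // A namespace-scope const int has internal linkage in C++,
        // so each translation unit simply gets its own copy.
        const int test_int = 0;

        // x_def.h – option 2: use an enumerator, which is always a constant expression.
        // enum { test_int = 0 };

        // test.cpp stays exactly as in the post:
        #include <iostream>
        #include "x_def.h"

        int main()
        {
            switch(0)
            {
                case test_int:                 // now a compile-time constant
                    std::cout << "Case X" << std::endl;
                    break;
                default:
                    std::cout << "Case Default" << std::endl;
                    break;
            }
            return 0;
        }

    Whether Visual C++ 2008 should have accepted the extern version is a separate question; the sketch only shows a form that both compilers treat as a constant expression.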

    Read the article

< Previous Page | 48 49 50 51 52 53 54 55 56 57 58 59  | Next Page >