Search Results

Search found 7672 results on 307 pages for 'compiler optimization'.


  • scala 2.8.0.RC2 compiler problem on pattern matching statement?

    - by gruenewa
    Why does the following module not compile on Scala 2.8.RC[1,2]?

        object Test {
          import util.matching.Regex._

          val pVoid      = """\s*void\s*""".r
          val pVoidPtr   = """\s*(const\s+)?void\s*\*\s*""".r
          val pCharPtr   = """\s*(const\s+)GLchar\s*\*\s*""".r
          val pIntPtr    = """\s*(const\s+)?GLint\s*\*\s*""".r
          val pUintPtr   = """\s*(const\s+)?GLuint\s*\*\s*""".r
          val pFloatPtr  = """\s*(const\s+)?GLfloat\s*\*\s*""".r
          val pDoublePtr = """\s*(const\s+)?GLdouble\s*\*\s*""".r
          val pShortPtr  = """\s*(const\s+)?GLshort\s*\*\s*""".r
          val pUshortPtr = """\s*(const\s+)?GLushort\s*\*\s*""".r
          val pInt64Ptr  = """\s*(const\s+)?GLint64\s*\*\s*""".r
          val pUint64Ptr = """\s*(const\s+)?GLuint64\s*\*\s*""".r

          def mapType(t: String): String = t.trim match {
            case pVoid() => "Unit"
            case pVoidPtr() => "ByteBuffer"
            case pCharPtr() => "CharBuffer"
            case pIntPtr() | pUintPtr() => "IntBuffer"
            case pFloatPtr() => "FloatBuffer"
            case pShortPtr() | pUshortPtr() => "ShortBuffer"
            case pDoublePtr() => "DoubleBuffer"
            case pInt64Ptr() | pUint64Ptr() => "LongBuffer"
            case x => x
          }
        }

    Read the article

  • Why does the compiler complain "while expected" when I try to add more code?

    - by user1893578
    Write a program that takes a word containing the @ character as input. If the word doesn't contain @, it should prompt the user for a word with @. Once a word with @ is read, it should output the word and then terminate. This is what I have done so far:

        public class find {
            public static void main(String[] args) {
                System.out.println(" Please enter a word with @ ");
                Scanner scan = new Scanner(System.in);
                String bad = "@";
                String word = scan.next();
                do
                    if (!word.contains(bad))
                        System.out.println(" Please try again ");
                    else
                        System.out.println(" " + word);
                while (!word.contains(bad));
            }
        }

    I can get it to terminate after a word containing "@" is given as input, but if I try to add a Scanner read to the line after "please try again", the compiler says "while expected".
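    Why this happens: without braces, the body of a do statement is exactly one statement (here, the single if/else). Inserting a second statement after the else means the compiler no longer finds the required while immediately after the loop body, hence "while expected". A minimal sketch of one fix, assuming the intent is to keep re-prompting until the word contains "@" (the braced loop body and the re-read inside it are the only changes):

        import java.util.Scanner;

        public class find {
            public static void main(String[] args) {
                Scanner scan = new Scanner(System.in);
                System.out.println(" Please enter a word with @ ");
                String bad = "@";
                String word = scan.next();
                do {
                    if (!word.contains(bad)) {
                        System.out.println(" Please try again ");
                        word = scan.next();        // re-read inside the braced loop body
                    }
                } while (!word.contains(bad));
                System.out.println(" " + word);    // print the accepted word once
            }
        }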

    Read the article

  • overriding enumeration base type using pragma or code change

    - by vprajan
    Problem: I am using a big C/C++ code base that works with the gcc and Visual Studio compilers, where the enum base type is by default 32 bits (integer type). This code also has lots of inline and embedded assembly that treats enums as integers, and enum data is used as 32-bit flags in many cases. When we compiled this code with the RealView ARM RVCT 2.2 compiler, we started getting many issues, since the RealView compiler chooses the enum base type automatically based on the values the enum is set to: http://www.keil.com/support/man/docs/armccref/armccref_Babjddhe.htm For example, consider the enum below:

        enum Scale {
            TimesOne,   // 0
            TimesTwo,   // 1
            TimesFour,  // 2
            TimesEight, // 3
        };

    This enum is used as a 32-bit flag, but the compiler optimizes its base type down to unsigned char. Using the --enum_is_int compiler option is not a good solution in our case, since it converts all enums to 32-bit, which will break interaction with any external code compiled without --enum_is_int. This is the warning I found in the RVCT compilers and libraries guide: "The --enum_is_int option is not recommended for general use and is not required for ISO-compatible source. Code compiled with this option is not compliant with the ABI for the ARM Architecture (base standard) [BSABI], and incorrect use might result in a failure at runtime. This option is not supported by the C++ libraries."

    Question: how can we convert every enum's base type (by hand-coded changes) to 32-bit without affecting the value ordering?

        enum Scale {
            TimesOne = 0x00000000,
            TimesTwo,   // 0x00000001
            TimesFour,  // 0x00000002
            TimesEight, // 0x00000003
        };

    I tried the change above, but unfortunately the compiler optimizes this one as well. There is some syntax in .NET like "enum Scale : int". Is that part of the ISO C++ standard, and does the ARM compiler simply lack it? There is no #pragma to control this enum in the ARM RVCT 2.2 compiler. Is there any hidden pragma available?

    Read the article

  • csc.exe not found error

    - by pokrate
    I have installed a fresh copy of Windows XP (2002) with SP2, and then VS.NET 2008 Enterprise Edition. I am trying to build the simplest possible web application, and it does not compile, giving a "csc.exe not found" error. I googled a lot and traced the problem to the following section in web.config:

        <system.codedom>
          <compilers>
            <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4"
                      type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
              <providerOption name="CompilerVersion" value="v3.5"/>
              <providerOption name="WarnAsError" value="false"/>
            </compiler>
            <compiler language="vb;vbs;visualbasic;vbscript" extension=".vb" warningLevel="4"
                      type="Microsoft.VisualBasic.VBCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
              <providerOption name="CompilerVersion" value="v3.5"/>
              <providerOption name="OptionInfer" value="true"/>
              <providerOption name="WarnAsError" value="false"/>
            </compiler>
          </compilers>
        </system.codedom>

    If I remove the C# compiler section and then compile, it builds fine with just the VB compiler section. And if I change the CompilerVersion value from v3.5 to v2.0 in the C# section, it also compiles fine, but then none of my LINQ queries are recognized by the compiler, even though System.Linq and all the classes in it are accessible in the code. Please help with this weird behavior.

    Read the article

  • Flash builder 4 - change output filename using external build-config.xml (not Ant)

    - by Casey
    I'm trying to change the output filename with a config file loaded via the compiler option -load-config. It looks like this in my compiler arguments: -load-config+=build-config.xml. I've tried the following:

        <flex-config>
          <o>absolute/path/to/filename</o>
        </flex-config>

    and

        <flex-config>
          <output>absolute/path/to/filename</output>
        </flex-config>

    and

        <flex-config>
          <compiler>
            <o>absolute/path/to/filename</o>
          </compiler>
        </flex-config>

    and

        <flex-config>
          <compiler>
            <output>absolute/path/to/filename</output>
          </compiler>
        </flex-config>

    but none have worked. I'm on a PC using Flash Builder 4. Has anyone else done this? Also, ideally, I want to use a relative path instead of absolute. I can't get this to work either, even if I do so in the "additional compiler arguments" field of the Project configuration. Thanks in advance!

    Read the article

  • A brief note for customers running SOA Suite on AIX platforms

    - by christian
    When running Oracle SOA Suite with IBM JVMs on the AIX platform, we have seen performance slowdowns and/or memory leaks. On occasion, we have even encountered some OutOfMemoryError conditions and the concomitant Java coredump. If you are experiencing this issue, the resolution may be to configure -Dsun.reflect.inflationThreshold=0 in your JVM startup parameters.

    https://www.ibm.com/developerworks/java/library/j-nativememory-aix/ contains a detailed discussion of the IBM AIX JVM memory model, but I will summarize my interpretation and understanding of it in the context of SOA Suite, below.

    Java ClassLoaders on IBM JVMs are allocated a native memory area into which they are expected to map such things as jars loaded from the filesystem. This is an excellent memory optimization, as a file can be loaded into memory once and then shared amongst many JVMs on the same host, allowing for excellent horizontal scalability on AIX hosts.

    However, Java ClassLoaders are not used exclusively for loading files from disk. A performance optimization by the Oracle Java language developers allows reflectively accessed data to be optimized from a JNI call into Java bytecode, which is then amenable to HotSpot optimizations, amongst other things. This performance optimization is called inflation, and it works by generating a sun.reflect.DelegatingClassLoader instance dynamically to inject the Java bytecode into the virtual machine. It is generally considered an excellent optimization. However, it interacts very negatively with the native memory area allocated by the IBM JVM, effectively locking out memory that could otherwise be used by the Java process.

    SOA Suite and WebLogic are both very large users of reflection. They reflectively exercise many code paths in their operation, generating lots of DelegatingClassLoaders in normal operation. The IBM JVM slowdown and subsequent OutOfMemoryError are a direct result of the Java memory consumed by the DelegatingClassLoader instances generated by SOA Suite and WebLogic. Java garbage collection runs more frequently to try to keep memory available, until it can no longer do so and throws OutOfMemoryError. The setting sun.reflect.inflationThreshold=0 disables this optimization entirely, never allowing the JVM to generate the optimized reflection code.

    IBM JVMs are susceptible to this issue primarily because all Java ClassLoaders have this native memory allocation, which is shared with the regular Java heap. Oracle JVMs don't automatically give all ClassLoaders a native memory area, and my understanding is that jar files are never mapped completely from shared memory in the same way IBM does it. This results in different behaviour characteristics on IBM vs. Oracle JVMs.
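    To see inflation in action, the small, hypothetical program below makes the same reflective call repeatedly; it is only an illustration, not part of SOA Suite. Running it with -verbose:class on a JVM that performs inflation should show generated reflection accessor classes (loaded by sun.reflect.DelegatingClassLoader) appearing after the first several calls. The exact threshold and generated class names are implementation details of the particular JVM, and the inflationThreshold=0 behaviour described above is specific to the IBM JVM as discussed in this note.

        import java.lang.reflect.Method;

        public class InflationDemo {
            public static void main(String[] args) throws Exception {
                Method trim = String.class.getMethod("trim");
                for (int i = 0; i < 10000; i++) {
                    // Each reflective call goes through a MethodAccessor; after enough
                    // calls the JVM may "inflate" it into generated bytecode, creating
                    // a new class under a DelegatingClassLoader.
                    trim.invoke("  hello  ");
                }
                System.out.println("done");
            }
        }

    Comparing a run with default settings against one started with -Dsun.reflect.inflationThreshold=0 makes the difference in generated classes (and the associated ClassLoader native memory on AIX) visible.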

    Read the article

  • Regular expressions in c++ STL

    - by Radek Šimko
    Is there any native library in STL which is tested and works without any extra compiler options? I tried to use <regex>, but the compiler outputs this:

        In file included from /usr/include/c++/4.3/regex:40,
                         from main.cpp:5:
        /usr/include/c++/4.3/c++0x_warning.h:36:2: error: #error This file requires compiler and library
        support for the upcoming ISO C++ standard, C++0x. This support is currently experimental, and must
        be enabled with the -std=c++0x or -std=gnu++0x compiler options.

    Read the article

  • Optimizing Solaris 11 SHA-1 on Intel Processors

    - by danx
    SHA-1 is a "hash" or "digest" operation that produces a 160-bit (20-byte) checksum value on arbitrary data, such as a file. It is intended to uniquely identify text and to verify it hasn't been modified. Max Locktyukhin and others at Intel have improved the performance of the SHA-1 digest algorithm using multiple techniques. This code has been incorporated into Solaris 11 and is available in the Solaris Crypto Framework via libmd(3LIB), the industry-standard libpkcs11(3LIB) library, and the Solaris kernel module sha1. The optimized code is used automatically on systems with an x86 CPU supporting SSSE3 (Intel Supplemental SSSE3). Intel microprocessor architectures that support SSSE3 include the Nehalem, Westmere, and Sandy Bridge microprocessor families. Further optimizations are available for microprocessors that support AVX (such as Sandy Bridge).

    Although SHA-1 is considered obsolete because of weaknesses found in the algorithm (NIST recommends using at least SHA-256), SHA-1 is still widely used and will be with us for a while longer. Collisions (the same SHA-1 result for two different inputs) can be found with moderate effort. SHA-1 is still used heavily in SSL/TLS, for example. And SHA-1 is stronger than the older MD5 digest algorithm, another digest option defined in SSL/TLS.

    Optimizations Review

    SHA-1 operates by reading an arbitrary amount of data. The data is read in 512-bit (64-byte) blocks (the last block is padded in a specific way to ensure it's a full 64 bytes). Each 64-byte block has 80 "rounds" of calculations (consisting of a mixture of "ROTATE-LEFT", "AND", and "XOR") applied to the block. Each round produces a 32-bit intermediate result, called W[i]. Here's what each round does:

    1. The first 16 rounds, rounds 0 to 15, read the 512-bit block 32 bits at a time. These 32 bits are used as input to the round.
    2. The remaining rounds, rounds 16 to 79, use the results from the previous rounds as input. Specifically, round i XORs the results of rounds i-3, i-8, i-14, and i-16 and rotates the result left 1 bit.
    3. The remaining calculations for the round are a series of AND, XOR, and ROTATE-LEFT operations on the 32-bit input and some constants. The 32-bit result is saved as W[i] for round i.

    After the final round, round 79, the accumulated 160-bit state is the SHA-1 checksum.

    Optimization: Vectorization

    The first 16 rounds can be vectorized (computed in parallel) because they don't depend on the output of a previous round. As for the remaining rounds: because of step 2 above, computing round i depends on the result of round i-3, W[i-3], so one can vectorize only 3 rounds at a time. Max Locktyukhin found, through simple factoring explained in detail in his article referenced below, that the dependencies of round i on the results of rounds i-3, i-8, i-14, and i-16 can be replaced with dependencies on the results of rounds i-6, i-16, i-28, and i-32. That is, instead of initializing the intermediate result W[i] with:

        W[i] = (W[i-3] XOR W[i-8] XOR W[i-14] XOR W[i-16]) ROTATE-LEFT 1

    initialize W[i] as follows:

        W[i] = (W[i-6] XOR W[i-16] XOR W[i-28] XOR W[i-32]) ROTATE-LEFT 2

    That means 6 rounds can be vectorized at once, with no additional calculations, instead of just 3! This optimization is independent of Intel or any other microprocessor architecture (although the microprocessor has to support vectorization to use it) and exploits one of the weaknesses of SHA-1. (A small program verifying that the two recurrences agree appears at the end of this entry.)

    Optimization: SSSE3

    Intel SSSE3 makes use of 16 %xmm registers, each 128 bits wide. The 4 32-bit inputs to a round, W[i-6], W[i-16], W[i-28], and W[i-32], all fit in one %xmm register. The following code snippet, from Max Locktyukhin's article, converted to AT&T assembly syntax, computes 4 rounds in parallel with just a dozen or so SSSE3 instructions:

        movdqa  W_minus_04, W_TMP
        pxor    W_minus_28, W          // W equals W[i-32:i-29] before XOR
                                       // W = W[i-32:i-29] ^ W[i-28:i-25]
        palignr $8, W_minus_08, W_TMP  // W_TMP = W[i-6:i-3], combined from
                                       // W[i-4:i-1] and W[i-8:i-5] vectors
        pxor    W_minus_16, W          // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13]
        pxor    W_TMP, W               // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]
        movdqa  W, W_TMP               // 4 dwords in W are rotated left by 2
        psrld   $30, W                 // rotate left by 2: W = (W >> 30) | (W << 2)
        pslld   $2, W_TMP
        por     W, W_TMP
        movdqa  W_TMP, W               // four new W values W[i:i+3] are now calculated
        paddd   (K_XMM), W_TMP         // adding the current round's 4 values of K
        movdqa  W_TMP, (WK(i))         // storing for downstream GPR instructions to read

    A window of the 32 previous results, W[i-1] to W[i-32], is saved in memory on the stack. This is best illustrated with a chart. Without vectorization, computing the rounds looks like this (each "R" represents 1 round of SHA-1 computation):

        RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR

    With vectorization, 4 rounds can be computed in parallel:

        RRRRRRRRRRRRRRRRRRRR
        RRRRRRRRRRRRRRRRRRRR
        RRRRRRRRRRRRRRRRRRRR
        RRRRRRRRRRRRRRRRRRRR

    Optimization: AVX

    The new "Sandy Bridge" microprocessor architecture, which supports AVX, allows another interesting optimization. SSSE3 instructions have two operands, an input and an output. AVX allows three operands: two inputs and an output. In many cases two SSSE3 instructions can be combined into one AVX instruction. The difference is best illustrated with an example. Consider these two instructions from the snippet above:

        pxor W_minus_16, W  // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13]
        pxor W_TMP, W       // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]

    With AVX they can be combined into one instruction:

        vpxor W_minus_16, W, W_TMP  // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]

    This optimization is also in Solaris, although Sandy Bridge-based systems aren't widely available yet. As an exercise for the reader: AVX also has 256-bit media registers, %ymm0 - %ymm15 (a superset of the 128-bit %xmm0 - %xmm15). Can the %ymm registers be used to parallelize the code even more?

    Optimization: Solaris-specific

    In addition to using the Intel code described above, I performed other minor optimizations to the Solaris SHA-1 code:

    Increased the digest(1) and mac(1) commands' buffer size from 4K to 64K, as previously done for decrypt(1) and encrypt(1). This size is well suited for ZFS file systems, but helps for other file systems as well.
    Optimized the encode functions, which byte-swap the input and output data, to copy/byte-swap 4 or 8 bytes at a time instead of 1 byte at a time.
    Enhanced the Solaris mdb(1) and kmdb(1) debuggers to display all 16 %xmm and %ymm registers (mdb "$x" command). Previously they only displayed the first 8 that are available in 32-bit mode. Can't optimize if you can't debug :-).
    Changed the SHA-1 code to allow processing in "chunks" greater than 2 Gigabytes (64-bit lengths).

    Performance

    I measured performance on a Sun Ultra 27 (which has a Nehalem-class Xeon 5500 Intel W3570 microprocessor @ 3.2 GHz). Turbo mode is disabled for consistent performance measurement. Graphs are better than words and numbers, so here they are:

    The first graph shows the Solaris digest(1) command before and after the optimizations discussed here, contained in libmd(3LIB). I ran the digest command on a half-GByte file in swapfs (/tmp), and execution time decreased from 1.35 seconds to 0.98 seconds.

    The second graph shows the results of an internal microbenchmark that uses the Solaris libpkcs11(3LIB) library. The operations are on a 128-byte buffer with 10,000 iterations. The results show operations increased from 320,000 to 416,000 operations per second.

    Finally, the third graph shows the results of an internal kernel microbenchmark that uses the Solaris /kernel/crypto/amd64/sha1 module. The operations are on a 64-Kbyte buffer with 100 iterations. The results show that for 1 kernel thread, operations increased from 410 to 600 MBytes/second. For 8 kernel threads, operations increased from 1540 to 1940 MBytes/second.

    Availability

    This code is in Solaris 11 FCS. It is available in the 64-bit libmd(3LIB) library for 64-bit programs and is in the Solaris kernel. You must be running hardware that supports Intel's SSSE3 instructions (for example, the Intel Nehalem, Westmere, or Sandy Bridge microprocessor architectures). The easiest way to determine if SSSE3 is available is with the isainfo(1) command. For example:

        nehalem $ isainfo -v
        64-bit amd64 applications
            sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
        32-bit i386 applications
            sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu

    If the output also shows "avx", Solaris executes the even more optimized 3-operand AVX instructions for SHA-1 mentioned above:

        sandybridge $ isainfo -v
        64-bit amd64 applications
            avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
        32-bit i386 applications
            avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu

    No special configuration or setup is needed to take advantage of this code. The Solaris libraries and kernel automatically determine whether they are running on an SSSE3- or AVX-capable machine and execute the correctly-tuned code for that microprocessor.

    Summary

    The Solaris 11 Crypto Framework, via the sha1 kernel module and the libmd(3LIB) and libpkcs11(3LIB) libraries, incorporates a useful SHA-1 optimization from Intel for SSSE3-capable microprocessors. As with other Solaris optimizations, it comes automatically "under the hood" with the current Solaris release.

    References

    "Improving the Performance of the Secure Hash Algorithm (SHA-1)" by Max Locktyukhin (Intel, March 2010). The source for these SHA-1 optimizations used in Solaris.
    "SHA-1", Wikipedia. Good overview of SHA-1.
    FIPS 180-1, SHA-1 standard (FIPS, 1995).
    NIST Comments on Cryptanalytic Attacks on SHA-1 (2005, revised 2006).
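    The two message-schedule recurrences quoted in the Vectorization section are easy to check against each other. The short Java program below is purely an illustration (it is not the Solaris or Intel code, and the input words are random test data): it fills W[0..15] with arbitrary values, expands the schedule with the standard i-3/i-8/i-14/i-16 recurrence, then recomputes rounds 32 to 79 with the rewritten i-6/i-16/i-28/i-32 form and verifies that both produce identical results.

        import java.util.Random;

        public class Sha1ScheduleCheck {
            static int rol(int x, int n) { return Integer.rotateLeft(x, n); }

            public static void main(String[] args) {
                int[] w1 = new int[80];
                int[] w2 = new int[80];
                Random rnd = new Random(1);
                for (int i = 0; i < 16; i++) {
                    w1[i] = rnd.nextInt();          // the 512-bit input block, 32 bits at a time
                    w2[i] = w1[i];
                }
                for (int i = 16; i < 80; i++) {     // standard SHA-1 schedule
                    w1[i] = rol(w1[i - 3] ^ w1[i - 8] ^ w1[i - 14] ^ w1[i - 16], 1);
                }
                for (int i = 16; i < 32; i++) {     // the rewritten schedule still needs the
                                                    // standard form until i >= 32
                    w2[i] = rol(w2[i - 3] ^ w2[i - 8] ^ w2[i - 14] ^ w2[i - 16], 1);
                }
                for (int i = 32; i < 80; i++) {     // rewritten form: 6 consecutive rounds are independent
                    w2[i] = rol(w2[i - 6] ^ w2[i - 16] ^ w2[i - 28] ^ w2[i - 32], 2);
                }
                for (int i = 0; i < 80; i++) {
                    if (w1[i] != w2[i]) {
                        System.out.println("mismatch at round " + i);
                        return;
                    }
                }
                System.out.println("both recurrences agree for W[0..79]");
            }
        }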

    Read the article

  • VS 2008 Service Pack 1 problem

    - by Compiler
    Hi, my OS is Windows XP with Service Pack 3 installed. I can't install VS 2008 Service Pack 1; in the log file I see that 'Visual C++ 2008 SP1 Design-Time Components for x86 - KB947888' can't be installed. The error code is 1603. The last part of the installation log is here:

        Returning IDOK. INSTALLMESSAGE_ERROR [Error 1335. The cabinet file 'patch.cab' required for this
        installation is corrupt and cannot be used. This could indicate a network error, an error reading
        from the CD-ROM, or a problem with this package.]
        [1/12/2009, 10:14:50] (IronSpigot::MsiExternalUiHandler::UiHandler) Returning IDOK.
        INSTALLMESSAGE_ACTIONSTART [Action 10:14:50: Rollback. Rolling back action:]
        [1/12/2009, 10:17:29] (IronSpigot::MspInstallerT<class ATL::CStringT<unsigned short,class
        ATL::StrTraitATL<unsigned short,class ATL::ChTraitsCRT<unsigned short ::PerformMsiOperation)
        Patch (C:\DOCUME~1\Cem\LOCALS~1\Temp\Microsoft Visual Studio 2008 SP1\VS90sp1-KB945140-X86-ENU.msp;
        C:\DOCUME~1\Cem\LOCALS~1\Temp\Microsoft Visual Studio 2008 SP1\VC90sp1-KB947888-x86-enu.msp)
        install failed on product (Microsoft Visual Studio 2008 Professional Edition - ENU).
        Msi Log: Microsoft Visual Studio 2008 SP1_20090112_100005671-Microsoft Visual Studio 2008
        Professional Edition - ENU-MSP0.txt
        [1/12/2009, 10:17:29] (IronSpigot::MspInstallerT<class ATL::CStringT<unsigned short,class
        ATL::StrTraitATL<unsigned short,class ATL::ChTraitsCRT<unsigned short ::PerformMsiOperation)
        MsiApplyMultiplePatches returned 0x643

    Read the article

  • Is it possible to perform Google Website Optimization on URL Rewritten pages?

    - by digiguru
    I have a format of pages that I want to perform an A/B comparison on using Google Website Optimizer. The URLs look as follows. The first page I want to compare:

        <mywebsite.com>/request1/([a-zA-Z0-9\-]*)_([0-9]+).htm

    vs

        <mywebsite.com>/request2/([a-zA-Z0-9\-]*)_([0-9]+).htm

    The goal page is:

        <mywebsite.com>/request-sent.htm

    How can I set this up in Google Website Optimizer? If it's not possible, are there alternative solutions available for doing such comparison reports online?

    Read the article

  • Checking if any of a list of values falls within a table of ranges

    - by Conspicuous Compiler
    I'm looking to check whether any of a list of integers falls in a list of ranges. The ranges are defined in a table that looks something like this:

        #  Extra  Type     Field    Default  Null  Key
        0         int(11)  rangeid  0        NO    PRI
        1         int(11)  max      0        NO    MUL
        2         int(11)  min      0        NO    MUL

    I'm using MySQL 5.1 and Perl 5.10. I can check whether a single value, say 7, is in any of the ranges with a statement like:

        SELECT 1 FROM range WHERE 7 BETWEEN min AND max

    If 7 is in any of those ranges, I get a single row back. If it isn't, no rows are returned. Now I have a list of, say, 50 of these values, not stored in a table at present. I assemble them using map:

        my $value_list = '(' . ( join ', ', map { int $_ } @values ) . ')' ;

    I want to see if any of the items in the list falls inside any of the ranges, but I am not particularly concerned with which number or which range. I'd like to use a syntax such as:

        SELECT 1 FROM range WHERE (1, 2, 3, 4, 5, 6, 7, 42, 309, 10000) BETWEEN min AND max

    MySQL kindly chastises me for such syntax: "Operand should contain 1 column(s)". I pinged #mysql, who were quite helpful. However, having already written this up by the time they responded, and thinking it'd be helpful to fix the answer in a more permanent medium, I figured I'd post the question anyhow. Maybe SO will provide a different solution?
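    One standard way to express this is to turn the list of values into a derived table and join it against the ranges, returning at most one row. The question uses Perl and DBI, but the SQL is the portable part; the JDBC wrapper below is only there to make the sketch self-contained and runnable, and the table/column names (range, min, max) are taken from the question:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.List;

        public class RangeCheck {
            // True if any of the given values falls inside any [min, max] range.
            static boolean anyValueInAnyRange(Connection conn, List<Integer> values) throws SQLException {
                if (values.isEmpty()) {
                    return false;                          // nothing to check
                }
                StringBuilder derived = new StringBuilder();
                for (int i = 0; i < values.size(); i++) {
                    derived.append(i == 0 ? "SELECT ? AS v" : " UNION ALL SELECT ?");
                }
                String sql = "SELECT 1 FROM `range` r JOIN (" + derived
                           + ") vals ON vals.v BETWEEN r.min AND r.max LIMIT 1";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    for (int i = 0; i < values.size(); i++) {
                        ps.setInt(i + 1, values.get(i));   // bind each value as a placeholder
                    }
                    try (ResultSet rs = ps.executeQuery()) {
                        return rs.next();                  // any row back means "yes"
                    }
                }
            }
        }

    The same derived-table SQL can be built just as easily from the Perl map shown above.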

    Read the article

  • DB Interface Design Optimization: Is it better to optimise for Fewer requests of smaller data size?

    - by Overflow
    The prevailing wisdom in web services (and web requests in general) is to design your API so that you use as few requests as possible, and that each request therefore returns as much data as is needed. In database design, the accepted wisdom is to design your queries to minimise size over the network, as opposed to minimizing the number of queries. They are both remote calls, so what gives?

    Read the article

  • JQuery Optimization: Is there any way to speed up the rendering of the FlexSelect control?

    - by Sephrial
    Greetings, I am new to jQuery, and I have a performance problem with the FlexSelect control where it takes about 5 seconds to render the dropdown control (in the renderDropdown() function). The dropdown list contains about 5000 elements. I believe all the runtime is attributable to the following block of code:

        var list = this.dropdownList.html("");
        $.each(this.results, function() {
            list.append($("<li/>").html(this.name));
        });

    Question: Are there any alternatives that would build this list of elements in a more efficient manner?

    Read the article

  • db optimization - have a total field or query table?

    - by Dorian Fife
    I have an app where users get points for actions they perform: either 1 point for an easy action or 2 for a difficult one. I wish to display to the user the total number of points he has earned in my app and the points obtained this week (since Monday at midnight). I have a table that records all actions, along with their time and number of points. I have two alternatives and I'm not sure which is better:

    1. Every time the user views the report, perform a query and sum the points the user got.
    2. Add two fields to each user that record the number of points obtained so far (total and weekly). The weekly points value will be reset to 0 every Monday at midnight.

    The first option is easier, but I'm afraid that as I get many users and actions, queries will take a long time. The second option bears the risk of inconsistency between the table of actions and the summary values. I'm very interested in what you think is the best alternative here. Thanks, Dorian
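    If the actions table has an index on (user_id, action_time), the first alternative is usually a reasonable starting point, and both numbers can come back from a single aggregate query. A rough sketch, wrapped in JDBC purely for illustration; the schema actions(user_id, points, action_time) is hypothetical, not taken from the question:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Timestamp;

        public class PointsReport {
            // Returns { total points, points since weekStart } for one user.
            static int[] totalAndWeekly(Connection conn, long userId, Timestamp weekStart) throws SQLException {
                String sql = "SELECT COALESCE(SUM(points), 0), "
                           + "       COALESCE(SUM(CASE WHEN action_time >= ? THEN points ELSE 0 END), 0) "
                           + "FROM actions WHERE user_id = ?";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setTimestamp(1, weekStart);   // Monday at midnight
                    ps.setLong(2, userId);
                    try (ResultSet rs = ps.executeQuery()) {
                        rs.next();
                        return new int[] { rs.getInt(1), rs.getInt(2) };
                    }
                }
            }
        }

    If that query ever becomes too slow, the summary-field approach (or a cached value) can be layered on top, accepting the consistency risk mentioned above.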

    Read the article

  • Oracle Query Optimization: Why is My Second Query Faster?

    - by Patrick Cuff
    I was having some performance issues with an Oracle query, so I downloaded a trial of the Quest SQL Optimizer for Oracle, which made some changes that dramatically improved the query's performance. I'm not exactly sure why the recommended query had such an improvement; can anyone provide an explanation?

    Before:

        SELECT t1.version_id, t1.id, t2.field1, t3.person_id, t2.id
          FROM table1 t1, table2 t2, table3 t3
         WHERE t1.id = t2.id
           AND t1.version_id = t2.version_id
           AND t2.id = 123
           AND t1.version_id = t3.version_id
           AND t1.VERSION_NAME <> 'AA'
         ORDER BY t1.id

    Plan Cost: 831. Elapsed Time: 00:00:21.40. Number of Records: 40,717.

    After:

        SELECT /*+ USE_NL_WITH_INDEX(t1) */ t1.version_id, t1.id, t2.field1, t3.person_id, t2.id
          FROM table2 t2, table3 t3, table1 t1
         WHERE t1.id = t2.id + 0
           AND t1.version_id = t2.version_id + 0
           AND t2.id = 123
           AND t1.version_id = t3.version_id + 0
           AND t1.VERSION_NAME || '' <> 'AA'
           AND t3.version_id = t2.version_id + 0
         ORDER BY t1.id

    Plan Cost: 686. Elapsed Time: 00:00:00.95. Number of Records: 40,717.

    Questions:
    Why does re-arranging the order of the tables in the FROM clause help?
    Why does adding + 0 to the WHERE clause comparisons help?
    Why does || '' <> 'AA' in the WHERE clause VERSION_NAME comparison help? Is this a more efficient way of handling possible nulls on this column?

    Read the article

  • Very basic running of drools 5, basic setup and quickstart

    - by Berlin Brown
    Is there a more comprehensive quick start for Drools 5? I was attempting to run the simple Hello World .drl rule, but I wanted to do it through an Ant script, possibly with just javac/java. Note: I am running completely without Eclipse or any other IDE. I get the following error:

        test:
        [java] Exception in thread "main" org.drools.RuntimeDroolsException: Unable to load dialect
        'org.drools.rule.builder.dialect.java.JavaDialectConfiguration:java:org.drools.rule.builder.dialect.java.JavaDialectConfiguration'
        [java]     at org.drools.compiler.PackageBuilderConfiguration.addDialect(PackageBuilderConfiguration.java:274)
        [java]     at org.drools.compiler.PackageBuilderConfiguration.buildDialectConfigurationMap(PackageBuilderConfiguration.java:259)
        [java]     at org.drools.compiler.PackageBuilderConfiguration.init(PackageBuilderConfiguration.java:176)
        [java]     at org.drools.compiler.PackageBuilderConfiguration.<init>(PackageBuilderConfiguration.java:153)
        [java]     at org.drools.compiler.PackageBuilder.<init>(PackageBuilder.java:242)
        [java]     at org.drools.compiler.PackageBuilder.<init>(PackageBuilder.java:142)
        [java]     at org.drools.builder.impl.KnowledgeBuilderProviderImpl.newKnowledgeBuilder(KnowledgeBuilderProviderImpl.java:29)
        [java]     at org.drools.builder.KnowledgeBuilderFactory.newKnowledgeBuilder(KnowledgeBuilderFactory.java:29)
        [java]     at org.berlin.rpg.rules.Rules.rules(Rules.java:33)
        [java]     at org.berlin.rpg.rules.Rules.main(Rules.java:73)
        [java] Caused by: java.lang.RuntimeException: The Eclipse JDT Core jar is not in the classpath
        [java]     at org.drools.rule.builder.dialect.java.JavaDialectConfiguration.setCompiler(JavaDialectConfiguration.java:94)
        [java]     at org.drools.rule.builder.dialect.java.JavaDialectConfiguration.init(JavaDialectConfiguration.java:55)
        [java]     at org.drools.compiler.PackageBuilderConfiguration.addDialect(PackageBuilderConfiguration.java:270)
        [java]     ... 9 more
        [java] Java Result: 1

    I do include the following libraries with my javac and java targets:

        <path id="classpath">
            <pathelement location="${lib.dir}" />
            <pathelement location="${lib.dir}/drools-api-5.0.1.jar" />
            <pathelement location="${lib.dir}/drools-compiler-5.0.1.jar" />
            <pathelement location="${lib.dir}/drools-core-5.0.1.jar" />
            <pathelement location="${lib.dir}/janino-2.5.15.jar" />
        </path>

    Here is the Java code that is throwing the error. I commented out the java.compiler configuration code; that didn't work either.

        public void rules() {
            /*
            final Properties properties = new Properties();
            properties.setProperty("drools.dialect.java.compiler", "JANINO");
            PackageBuilderConfiguration cfg = new PackageBuilderConfiguration(properties);
            JavaDialectConfiguration javaConf = (JavaDialectConfiguration) cfg.getDialectConfiguration("java");
            */
            final KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();

            // this will parse and compile in one step
            kbuilder.add(ResourceFactory.newClassPathResource("HelloWorld.drl", Rules.class), ResourceType.DRL);

            // Check the builder for errors
            if (kbuilder.hasErrors()) {
                System.out.println(kbuilder.getErrors().toString());
                throw new RuntimeException("Unable to compile \"HelloWorld.drl\".");
            }

            // Get the compiled packages (which are serializable)
            final Collection<KnowledgePackage> pkgs = kbuilder.getKnowledgePackages();

            // Add the packages to a knowledgebase (deploy the knowledge packages).
            final KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
            kbase.addKnowledgePackages(pkgs);

            final StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
            ksession.setGlobal("list", new ArrayList<Object>());
            ksession.addEventListener(new DebugAgendaEventListener());
            ksession.addEventListener(new DebugWorkingMemoryEventListener());

            // Setup the audit logging
            KnowledgeRuntimeLogger logger = KnowledgeRuntimeLoggerFactory.newFileLogger(ksession, "log/helloworld");

            final Message message = new Message();
            message.setMessage("Hello World");
            message.setStatus(Message.HELLO);
            ksession.insert(message);
            ksession.fireAllRules();

            logger.close();
            ksession.dispose();
        }

    Here I don't think Ant is relevant, because I have fork set to true:

        <target name="test" depends="compile">
            <java classname="org.berlin.rpg.rules.Rules" fork="true">
                <classpath refid="classpath.rt" />
                <classpath>
                    <pathelement location="${basedir}" />
                    <pathelement location="${build.classes.dir}" />
                </classpath>
            </java>
        </target>

    The error is thrown at the first Drools call; basically, I haven't done anything except call:

        final KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();

    I am running with Windows XP, Java 6, and Ant 1.7, with the most recent (as of yesterday) version 5 of Drools.
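    Two things typically resolve "The Eclipse JDT Core jar is not in the classpath" in a setup like this: either add the Eclipse JDT Core jar that ships with the Drools 5 binary distribution to the Ant <path> alongside drools-compiler (the default Java dialect compiler is the Eclipse one), or explicitly switch the dialect to JANINO, since janino-2.5.15.jar is already on the classpath. A rough sketch of the second option is below; the exact factory and property names are assumptions recalled from the Drools 5 knowledge-api and should be verified against its javadoc rather than treated as authoritative:

        import java.util.Properties;

        import org.drools.builder.KnowledgeBuilder;
        import org.drools.builder.KnowledgeBuilderConfiguration;
        import org.drools.builder.KnowledgeBuilderFactory;

        public class JaninoDialectExample {
            public static KnowledgeBuilder newJaninoKnowledgeBuilder() {
                Properties props = new Properties();
                // Ask the Java dialect to use the Janino compiler instead of Eclipse JDT.
                props.setProperty("drools.dialect.java.compiler", "JANINO");

                KnowledgeBuilderConfiguration conf =
                        KnowledgeBuilderFactory.newKnowledgeBuilderConfiguration(
                                props, JaninoDialectExample.class.getClassLoader());

                // Pass the configuration in explicitly; creating the builder with no
                // arguments (as in the code above) never sees the JANINO property.
                return KnowledgeBuilderFactory.newKnowledgeBuilder(conf);
            }
        }

    Alternatively, the same property might be supplied on the Ant java task as a system property (drools.dialect.java.compiler=JANINO); again, treat that as an assumption to test rather than guaranteed behavior.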

    Read the article

  • Creating Python C module from Fortran sources on Ubuntu 10.04 LTS

    - by Botondus
    In a project I work on, we use a Python C module compiled from Fortran with f2py. I've had no issues building it on Windows 7 32-bit (using mingw32), and on the servers it's built on 32-bit Linux. But I've recently installed Ubuntu 10.04 LTS 64-bit on the laptop I use for development, and when I build the module I get a lot of warnings (even though I've apparently installed all the gcc/Fortran libraries and compilers), although the build does finish. However, when I try to use the built module in the application, most of it seems to run well but then it crashes with an error:

        *** glibc detected *** /home/botondus/Envs/gasit/bin/python: free(): invalid next size (fast): 0x0000000006a44760 ***

    Warnings on running f2py -c -m module_name ./fortran/source.f90:

        customize UnixCCompiler
        customize UnixCCompiler using build_ext
        customize GnuFCompiler
        Could not locate executable g77
        Found executable /usr/bin/f77
        gnu: no Fortran 90 compiler found
        gnu: no Fortran 90 compiler found
        customize IntelFCompiler
        Could not locate executable ifort
        Could not locate executable ifc
        customize LaheyFCompiler
        Could not locate executable lf95
        customize PGroupFCompiler
        Could not locate executable pgf90
        Could not locate executable pgf77
        customize AbsoftFCompiler
        Could not locate executable f90
        absoft: no Fortran 90 compiler found
        absoft: no Fortran 90 compiler found
        absoft: no Fortran 90 compiler found
        absoft: no Fortran 90 compiler found
        absoft: no Fortran 90 compiler found
        absoft: no Fortran 90 compiler found
        customize NAGFCompiler
        Found executable /usr/bin/f95
        customize VastFCompiler
        customize GnuFCompiler
        gnu: no Fortran 90 compiler found
        gnu: no Fortran 90 compiler found
        customize CompaqFCompiler
        Could not locate executable fort
        customize IntelItaniumFCompiler
        Could not locate executable efort
        Could not locate executable efc
        customize IntelEM64TFCompiler
        customize Gnu95FCompiler
        Found executable /usr/bin/gfortran
        customize Gnu95FCompiler
        customize Gnu95FCompiler using build_ext

    I have tried building a 32-bit version by installing the gfortran multilib packages and running f2py with the -m32 option (but with no success):

        f2py -c -m module_name ./fortran/source.f90 --f77flags="-m32" --f90flags="-m32"

    Any suggestions on what I could try, to either build a 32-bit version or correctly build the 64-bit version?

    Edit: It looks like it crashes right at the end of a subroutine. The 'write' executes fine... which is strange.
write(6,*)'Eh=',Eh end subroutine calcolo_involucro The full backtrace is very long and I'm not sure if it's any help, but here it is: *** glibc detected *** /home/botondus/Envs/gasit/bin/python: free(): invalid next size (fast): 0x0000000007884690 *** ======= Backtrace: ========= /lib/libc.so.6(+0x775b6)[0x7fe24f8f05b6] /lib/libc.so.6(cfree+0x73)[0x7fe24f8f6e53] /usr/local/lib/python2.6/dist-packages/numpy/core/multiarray.so(+0x4183c)[0x7fe24a18183c] /home/botondus/Envs/gasit/bin/python[0x46a50d] /usr/local/lib/python2.6/dist-packages/numpy/core/multiarray.so(+0x4fbd8)[0x7fe24a18fbd8] /usr/local/lib/python2.6/dist-packages/numpy/core/multiarray.so(+0x5aded)[0x7fe24a19aded] /home/botondus/Envs/gasit/bin/python(PyEval_EvalFrameEx+0x516e)[0x4a7c5e] /home/botondus/Envs/gasit/bin/python(PyEval_EvalFrameEx+0x5a60)[0x4a8550] /home/botondus/Envs/gasit/bin/python(PyEval_EvalCodeEx+0x911)[0x4a9671] /home/botondus/Envs/gasit/bin/python[0x537620] /home/botondus/Envs/gasit/bin/python(PyObject_Call+0x47)[0x41f0c7] /home/botondus/Envs/gasit/bin/python[0x427dff] /home/botondus/Envs/gasit/bin/python(PyObject_Call+0x47)[0x41f0c7] /home/botondus/Envs/gasit/bin/python[0x477bff] /home/botondus/Envs/gasit/bin/python[0x46f47f] /home/botondus/Envs/gasit/bin/python(PyObject_Call+0x47)[0x41f0c7] /home/botondus/Envs/gasit/bin/python(PyEval_EvalFrameEx+0x4888)[0x4a7378] /home/botondus/Envs/gasit/bin/python(PyEval_EvalCodeEx+0x911)[0x4a9671] /home/botondus/Envs/gasit/bin/python(PyEval_EvalFrameEx+0x4d19)[0x4a7809] /home/botondus/Envs/gasit/bin/python(PyEval_EvalCodeEx+0x911)[0x4a9671] /home/botondus/Envs/gasit/bin/python(PyEval_EvalFrameEx+0x4d19)[0x4a7809] /home/botondus/Envs/gasit/bin/python(PyEval_EvalCodeEx+0x911)[0x4a9671] /home/botondus/Envs/gasit/bin/python[0x537620] /home/botondus/Envs/gasit/bin/python(PyObject_Call+0x47)[0x41f0c7] /home/botondus/Envs/gasit/bin/python(PyEval_CallObjectWithKeywords+0x43)[0x4a1b03] /usr/local/lib/python2.6/dist-packages/numpy/core/multiarray.so(+0x2ee94)[0x7fe24a16ee94] /home/botondus/Envs/gasit/bin/python(_PyObject_Str+0x61)[0x454a81] /home/botondus/Envs/gasit/bin/python(PyObject_Str+0xa)[0x454b3a] /home/botondus/Envs/gasit/bin/python[0x461ad3] /home/botondus/Envs/gasit/bin/python[0x46f3b3] /home/botondus/Envs/gasit/bin/python(PyObject_Call+0x47)[0x41f0c7] /home/botondus/Envs/gasit/bin/python(PyEval_EvalFrameEx+0x4888)[0x4a7378] /home/botondus/Envs/gasit/bin/python(PyEval_EvalCodeEx+0x911)[0x4a9671] /home/botondus/Envs/gasit/bin/python(PyEval_EvalFrameEx+0x4d19)[0x4a7809] /home/botondus/Envs/gasit/bin/python(PyEval_EvalFrameEx+0x5a60)[0x4a8550] ======= Memory map: ======== 00400000-0061c000 r-xp 00000000 08:05 399145 /home/botondus/Envs/gasit/bin/python 0081b000-0081c000 r--p 0021b000 08:05 399145 /home/botondus/Envs/gasit/bin/python 0081c000-0087e000 rw-p 0021c000 08:05 399145 /home/botondus/Envs/gasit/bin/python 0087e000-0088d000 rw-p 00000000 00:00 0 01877000-07a83000 rw-p 00000000 00:00 0 [heap] 7fe240000000-7fe240021000 rw-p 00000000 00:00 0 7fe240021000-7fe244000000 ---p 00000000 00:00 0 7fe247631000-7fe2476b1000 r-xp 00000000 08:03 140646 /usr/lib/libfreetype.so.6.3.22 7fe2476b1000-7fe2478b1000 ---p 00080000 08:03 140646 /usr/lib/libfreetype.so.6.3.22 7fe2478b1000-7fe2478b6000 r--p 00080000 08:03 140646 /usr/lib/libfreetype.so.6.3.22 7fe2478b6000-7fe2478b7000 rw-p 00085000 08:03 140646 /usr/lib/libfreetype.so.6.3.22 7fe2478b7000-7fe2478bb000 r-xp 00000000 08:03 263882 /usr/lib/python2.6/dist-packages/PIL/_imagingft.so 7fe2478bb000-7fe247aba000 ---p 00004000 08:03 263882 
/usr/lib/python2.6/dist-packages/PIL/_imagingft.so 7fe247aba000-7fe247abb000 r--p 00003000 08:03 263882 /usr/lib/python2.6/dist-packages/PIL/_imagingft.so 7fe247abb000-7fe247abc000 rw-p 00004000 08:03 263882 /usr/lib/python2.6/dist-packages/PIL/_imagingft.so 7fe247abc000-7fe247abf000 r-xp 00000000 08:03 266773 /usr/lib/python2.6/lib-dynload/_bytesio.so 7fe247abf000-7fe247cbf000 ---p 00003000 08:03 266773 /usr/lib/python2.6/lib-dynload/_bytesio.so 7fe247cbf000-7fe247cc0000 r--p 00003000 08:03 266773 /usr/lib/python2.6/lib-dynload/_bytesio.so 7fe247cc0000-7fe247cc1000 rw-p 00004000 08:03 266773 /usr/lib/python2.6/lib-dynload/_bytesio.so 7fe247cc1000-7fe247cc5000 r-xp 00000000 08:03 266786 /usr/lib/python2.6/lib-dynload/_fileio.so 7fe247cc5000-7fe247ec4000 ---p 00004000 08:03 266786 /usr/lib/python2.6/lib-dynload/_fileio.so 7fe247ec4000-7fe247ec5000 r--p 00003000 08:03 266786 /usr/lib/python2.6/lib-dynload/_fileio.so 7fe247ec5000-7fe247ec6000 rw-p 00004000 08:03 266786 /usr/lib/python2.6/lib-dynload/_fileio.so 7fe247ec6000-7fe24800c000 r-xp 00000000 08:03 141358 /usr/lib/libxml2.so.2.7.6 7fe24800c000-7fe24820b000 ---p 00146000 08:03 141358 /usr/lib/libxml2.so.2.7.6 7fe24820b000-7fe248213000 r--p 00145000 08:03 141358 /usr/lib/libxml2.so.2.7.6 7fe248213000-7fe248215000 rw-p 0014d000 08:03 141358 /usr/lib/libxml2.so.2.7.6 7fe248215000-7fe248216000 rw-p 00000000 00:00 0 7fe248216000-7fe248229000 r-xp 00000000 08:03 140632 /usr/lib/libexslt.so.0.8.15 7fe248229000-7fe248428000 ---p 00013000 08:03 140632 /usr/lib/libexslt.so.0.8.15 7fe248428000-7fe248429000 r--p 00012000 08:03 140632 /usr/lib/libexslt.so.0.8.15 7fe248429000-7fe24842a000 rw-p 00013000 08:03 140632 /usr/lib/libexslt.so.0.8.15 7fe24842a000-7fe248464000 r-xp 00000000 08:03 141360 /usr/lib/libxslt.so.1.1.26 7fe248464000-7fe248663000 ---p 0003a000 08:03 141360 /usr/lib/libxslt.so.1.1.26 7fe248663000-7fe248664000 r--p 00039000 08:03 141360 /usr/lib/libxslt.so.1.1.26 7fe248664000-7fe248665000 rw-p 0003a000 08:03 141360 /usr/lib/libxslt.so.1.1.26 7fe248665000-7fe24876e000 r-xp 00000000 08:03 534240 /usr/local/lib/python2.6/dist-packages/lxml/etree.so 7fe24876e000-7fe24896d000 ---p 00109000 08:03 534240 /usr/local/lib/python2.6/dist-packages/lxml/etree.so 7fe24896d000-7fe24896e000 r--p 00108000 08:03 534240 /usr/local/lib/python2.6/dist-packages/lxml/etree.so 7fe24896e000-7fe248999000 rw-p 00109000 08:03 534240 /usr/local/lib/python2.6/dist-packages/lxml/etree.so 7fe248999000-7fe2489a7000 rw-p 00000000 00:00 0 7fe2489a7000-7fe2489bd000 r-xp 00000000 08:03 132934 /lib/libgcc_s.so.1

    Read the article

  • JDK bug migration: components and subcomponents

    - by darcy
    One subtask of the JDK migration from the legacy bug tracking system to JIRA was reclassifying bugs from a three-level taxonomy in the legacy system (product, category, subcategory) to a fundamentally two-level scheme in our customized JIRA instance (component, subcomponent). In the JDK JIRA system, there is technically a third, project-level classification, but by design a large majority of JDK-related bugs were migrated into a single "JDK" project. In the end, over 450 legacy subcategories were simplified into about 120 subcomponents in JIRA. The 120 subcomponents are distributed among 17 components. A rule of thumb used was that a subcategory had to have at least 50 bugs in it for it to be retained.

    Below is a listing of the component / subcomponent classification of the JDK JIRA project, along with some notes and guidance on which OpenJDK email addresses cover different areas. Eventually, a separate incidents project to host new issues filed at bugs.sun.com will use a slightly simplified version of this scheme.

    The preponderance of bugs and subcomponents for the JDK are in library-related areas, with components named foo-libs and subcomponents primarily named after packages. While there was an overall condensation of subcomponents in the migration, in some cases long-standing informal divisions in core libraries based on naming conventions in the description were promoted to formal subcomponents. For example, hundreds of bugs in the java.util subcomponent whose descriptions started with "(coll)" were moved into java.util:collections. Likewise, java.lang bugs starting with "(reflect)" and "(proxy)" were moved into java.lang:reflect.

    client-libs (Predominantly discussed on 2d-dev, awt-dev, and swing-dev.): 2d, demo, java.awt, java.awt:i18n, java.beans (see beans-dev), javax.accessibility, javax.imageio, javax.sound (see sound-dev), javax.swing

    core-libs (See core-libs-dev.): java.io, java.io:serialization, java.lang, java.lang.invoke, java.lang:class_loading, java.lang:reflect, java.math, java.net, java.nio (discussed on nio-dev), java.nio.charsets, java.rmi, java.sql, java.sql:bridge, java.text, java.util, java.util.concurrent, java.util.jar, java.util.logging, java.util.regex, java.util:collections, java.util:i18n, javax.annotation.processing, javax.lang.model, javax.naming (JNDI), javax.script, javax.script:javascript, javax.sql, org.openjdk.jigsaw (see jigsaw-dev)

    security-libs (See security-dev.): java.security, javax.crypto (JCE: includes SunJCE/MSCAPI/UCRYPTO/ECC), javax.crypto:pkcs11 (JCE: PKCS11 only), javax.net.ssl (JSSE, includes javax.security.cert), javax.security, javax.smartcardio, javax.xml.crypto, org.ietf.jgss, org.ietf.jgss:krb5

    other-libs: corba, corba:idl, corba:orb, corba:rmi-iiop, javadb, other (when no other subcomponent is more appropriate; use judiciously)

    Most of the subcomponents in the xml component are related to jaxp.

    xml: jax-ws, jaxb, javax.xml.parsers (JAXP), javax.xml.stream (JAXP), javax.xml.transform (JAXP), javax.xml.validation (JAXP), javax.xml.xpath (JAXP), jaxp (JAXP), org.w3c.dom (JAXP), org.xml.sax (JAXP)

    For OpenJDK, most JVM-related bugs are connected to the HotSpot Java virtual machine.

    hotspot (See hotspot-dev.): build, compiler (see hotspot-compiler-dev), gc (garbage collection, see hotspot-gc-dev), jfr (Java Flight Recorder), jni (Java Native Interface), jvmti (JVM Tool Interface), mvm (Multi-Tasking Virtual Machine), runtime (see hotspot-runtime-dev), svc (serviceability), test

    core-svc (See serviceability-dev.): debugger, java.lang.instrument, java.lang.management, javax.management, tools

    The full JDK bug database contains entries related to legacy virtual machines that predate HotSpot, as well as retired APIs.

    vm-legacy: jit (Sun Exact VM), jit_symantec (Symantec VM, before Exact VM), jvmdi (JVM Debug Interface), jvmpi (JVM Profiler Interface), runtime (Exact VM runtime)

    Notable command-line tools in the $JDK/bin directory have corresponding subcomponents.

    tools: appletviewer, apt (see compiler-dev), hprof, jar, javac (see compiler-dev), javadoc(tool) (see compiler-dev), javah (see compiler-dev), javap (see compiler-dev), jconsole, launcher, updaters (timezone updaters, etc.), visualvm

    Some aspects of JDK infrastructure directly affect JDK Hg repositories, but others do not.

    infrastructure: build (see build-dev and build-infra-dev), licensing (covers updates to the third-party readme, licenses, and similar files), release_eng (release engineering), staging (staging of web pages related to JDK releases)

    The specification component encompasses the formal language and virtual machine specifications.

    specification: language (The Java Language Specification), vm (The Java Virtual Machine Specification)

    The code for the deploy and install areas is not currently included in OpenJDK.

    deploy: deployment_toolkit, plugin, webstart

    install: auto_update, install, servicetags

    In the JDK, there are a number of cross-cutting concerns whose organization is essentially orthogonal to other areas. Since these areas generally have dedicated teams working on them, it is easier to find bugs of interest if these bugs are grouped first by their cross-cutting component rather than by the affected technology.

    docs: doclet, guides, hotspot, release_notes, tools, tutorial

    embedded: build, hotspot, libraries

    globalization: locale-data, translation

    performance: hotspot, libraries

    The list of subcomponents will no doubt grow over time, but my inclination is to resist that growth, since the addition of each subcomponent makes the system as a whole more complicated and harder to use. When the system gets closer to being externalized, I plan to post more blog entries describing recommended use of various custom fields in the JDK project.

    Read the article

  • Are multiply-thrown Exceptions checked or runtime?

    - by froadie
    I have an Exception chain in which method1 throws an Exception to method2, which throws the Exception on to main. For some reason, the compiler forces me to deal with the error in method2 and marks it as an error if I don't, indicating that it's a checked Exception. But when the same Exception is thrown further down the line to main, the compiler allows me to ignore it and doesn't display any errors. The original Exception in method1 is a ParseException, which is checked. But the method has a generic throws Exception clause in the header, and the same object is thrown to method2, which has an identical throws Exception clause. When and how does this Exception lose the status of being checked / caught by the compiler? Edited to clarify:

        public void method1() throws Exception {
            // code that may generate ParseException
        }

        public void method2() throws Exception {
            method1(); // compiler error
        }

        public static void main(String[] args) {
            method2(); // ignored by compiler
        }
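    Whether an exception is checked is a property of its declared type, not of the object thrown at runtime: anything declared as throws Exception is checked, because Exception itself is a checked type, so every caller must either catch it or declare it. A call only appears to be "ignored" when the calling method itself declares a compatible throws clause, or when the declared type is a RuntimeException. A minimal sketch, with hypothetical method bodies rather than the poster's real code, that the compiler accepts without complaint:

        import java.text.ParseException;

        public class CheckedChain {
            static void method1() throws Exception {
                // The checked ParseException is covered by the broad "throws Exception".
                throw new ParseException("bad input", 0);
            }

            static void method2() throws Exception {
                method1(); // fine: method2 re-declares "throws Exception"
            }

            public static void main(String[] args) throws Exception {
                method2(); // also fine, but only because main declares "throws Exception";
                           // remove that clause and this line becomes a compile error
            }
        }

    So the exception never stops being checked: if a call to a method declared throws Exception really compiles without a try/catch, the caller must be declaring the exception somewhere in its own header.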

    Read the article
