Search Results

Search found 215 results on 9 pages for 'jit b'.

Page 2/9

  • CLR JIT Bugs Found During IKVM.NET Development

    "It is actually fairly common that people notice that things fail under retail but not debug and tend to blame code generation. While a code generation bug is possible, as a matter of statistics, it is not likely." -- Vance MorrisonDateCLRArchTypeDescription2010-06-12 v4 x64 Incorrect code Optimizer incorrectly propagates invariants.2010-06-04 v2, v4 x86 Crash ...Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.

    Read the article

  • Dalvik JIT

    [This post is by Dan Bornstein, virtual-machine wrangler. — Tim Bray] As the tech lead for the Dalvik team within the Android project, I spend my time working...

    Read the article

  • LLVM: Passing a pointer to a struct, which holds a pointer to a function, to a JIT function

    - by Rusky
    I have an LLVM (version 2.7) module with a function that takes a pointer to a struct. That struct contains a function pointer to a C++ function. The module function is going to be JIT-compiled, and I need to build that struct in C++ using the LLVM API. I can't seem to get the pointer to the function as an LLVM value, let alone pass a pointer to the ConstantStruct that I can't build. I'm not sure if I'm even on the right track, but this is what I have so far:

    void print(char*);

    vector<Constant*> functions;
    functions.push_back(ConstantExpr::getIntToPtr(
        ConstantInt::get(Type::getInt32Ty(context), (int)print),
        /* function pointer type here, FunctionType::get(...) doesn't seem to work */
    ));
    ConstantStruct* printerStruct = cast<ConstantStruct>(ConstantStruct::get(
        cast<StructType>(m->getTypeByName("printer")),
        functions
    ));

    Function* main = m->getFunction("main");
    vector<GenericValue> args;
    args[0].PointerVal = /* not sure what goes here */
    ee->runFunction(main, args);

    Read the article

  • Linking LLVM JIT Code to Static LLVM Libraries?

    - by inflector
    I'm in the process of implementing a cross-platform (Mac OS X, Windows, and Linux) application which will do lots of CPU-intensive analysis of financial data. The bulk of the analysis engine will be written in C++ for speed reasons, with a user-accessible scripting engine interfacing with the C++ testing engine. I want to write several scripting front-ends over time to emulate other popular software with existing large user bases. The first front end will be a VisualBasic-like scripting language.

    I'm thinking that LLVM would be perfect for my needs. Performance is very important because of the sheer amount of data; a single run of tests can take hours or days to produce an answer. I believe that using LLVM will also allow me to use a single back-end solution while I implement different front-ends for different flavors of the scripting language over time. The testing engine itself will be separated from the interface, and testing will even take place in a separate process, with progress and results being reported to the testing management interface. Tests will consist of scripting code integrated with the testing engine code.

    In a previous implementation of a similar commercial testing system I wrote, I built a fast interpreter which easily interfaced with the testing library because it was written in C++ and linked directly to the testing engine library. Callbacks from scripting code to testing library objects involved translating between the formats with significant overhead. I'm imagining that with LLVM, I could implement the callbacks into C++ directly so that I could make the scripting code work almost as if it had been written in C++. Likewise, if all the code was compiled to LLVM bytecode format, it seems like the LLVM optimizers could optimize across the boundaries between the scripting language and the testing engine code that was written in C++.

    I don't want to have to compile the testing engine every time. Ideally, I'd like to JIT-compile only the scripting code. For small tests, I'd skip some optimization passes, while for large tests, I'd perform full optimizations during the link. So is this possible? Can I precompile the testing engine to a .o object file or .a library file and then link in the scripting code using the JIT?

    Finally, ideally, I'd like to have the scripting code implement specific methods as subclasses of a specific C++ class. So the C++ testing engine would only see C++ objects, while the JIT setup code compiled scripting code that implemented some of the methods for the objects. It seems that if I used the right name mangling algorithm it would be relatively easy to set up the LLVM generation for the scripting language to look like a C++ method call, which could then be linked into the testing engine. Thus the linking stage would go in two directions: calls from the scripting language into the testing engine objects to retrieve pricing information and test state information, and calls from the testing engine to methods of some particular C++ objects where the code was supplied not from C++ but from the scripting language.

    In summary:
    1) Can I link in precompiled (either .bc, .o, or .a) files as part of the JIT compilation, code-generation process?
    2) Can I link in code using the process in 1) above in such a way that I am able to create code that acts as if it was all written in C++?

    Read the article

  • Strengths and weaknesses of JIT compilers for Python

    - by Az
    Hi there, I'm currently aware of the following Python JIT compilers: Psyco, PyPy and Unladen Swallow. Basically, I'd like to ask for your personal experiences on the strengths and weaknesses of these compilers - and if there are any others worth looking into. Thanks in advance, Az

    Read the article

  • Very slow startup of ASP.NET applications

    - by Conrad
    I have an ASP.NET application with quite a small number of pages. The problem I see is that the startup time is quite slow. As far as I can tell, most of the time is spent in JIT. Pre-compiling the application does not seem very helpful in reducing the number of methods JITted as reported through PerfMon. Does anybody know what I can do to reduce the startup time further? Is it true that there is no way to pre-JIT an ASP.NET application using NGEN?

    Read the article

  • java.util.EmptyStackException on JIT/Warmup

    - by infectedrhythms
    I'm using a 3rd party lib in my application that throws a java.util.EmptyStackException. This only happens during the VM JIT/warmup:

    1. Start application
    2. Start stress test, no ramp-up: java.util.EmptyStackException thrown
    3. Keep the application up and redo the stress test: no exception thrown
    4. Shut down application
    5. Start application
    6. Start stress test with ramp-up: no exception thrown

    I could keep reproducing this over and over. Anyone have any ideas on how I can trace this so I can give more info to the vendor? Or why it could even be happening? Thanks

    Read the article

  • .NET JIT Code Cache leaking?

    - by pitchfork
    We have a server component written in .NET 3.5. It runs as a service on Windows Server 2008 Standard Edition. It works great, but after some time (days) we notice massive slowdowns and an increased working set. We suspected some kind of memory leak and used WinDBG/SOS to analyze dumps of the process. Unfortunately the GC heap doesn't show any leak, but we noticed that the JIT code heap has grown from 8MB after the start to more than 1GB after a few days. We don't use any dynamic code generation techniques of our own. We use Linq2SQL, which is known for dynamic code generation, but we don't know if it can cause such a problem. The main question is whether there is any technique to analyze the dump and check where all these Host Code Heap blocks shown in the WinDBG dumps come from.

    [Update] In the meantime we did some more analysis and identified Linq2SQL as a probable suspect, especially since we do not use precompiled queries. The following example program creates exactly the same behaviour, where more and more Host Code Heap blocks are created over time.

    using System;
    using System.Linq;
    using System.Threading;

    namespace LinqStressTest
    {
        class Program
        {
            static void Main(string[] args)
            {
                for (int i = 0; i < 100; ++i)
                    ThreadPool.QueueUserWorkItem(Worker);

                while (runs < 1000000)
                {
                    Thread.Sleep(5000);
                }
            }

            static void Worker(object state)
            {
                for (int i = 0; i < 50; ++i)
                {
                    using (var ctx = new DataClasses1DataContext())
                    {
                        long id = rnd.Next();
                        var x = ctx.AccountNucleusInfos.Where(an => an.Account.SimPlayers.First().Id == id).SingleOrDefault();
                    }
                }
                var localruns = Interlocked.Add(ref runs, 1);
                System.Console.WriteLine("Action: " + localruns);
                ThreadPool.QueueUserWorkItem(Worker);
            }

            static Random rnd = new Random();
            static long runs = 0;
        }
    }

    When we replace the Linq query with a precompiled one, the problem seems to disappear.
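
    Since the post reports that switching to a precompiled query makes the extra Host Code Heap blocks stop appearing, here is a minimal sketch of what that change might look like using CompiledQuery.Compile; the DataClasses1DataContext and AccountNucleusInfos names come from the example above, and the AccountNucleusInfo entity type is an assumption.

    using System;
    using System.Data.Linq;
    using System.Linq;

    static class Queries
    {
        // Compiled once and cached in a static field, so LINQ to SQL does not emit
        // (and JIT) a fresh dynamic method for every execution of the query.
        public static readonly Func<DataClasses1DataContext, long, AccountNucleusInfo> AccountById =
            CompiledQuery.Compile((DataClasses1DataContext ctx, long id) =>
                ctx.AccountNucleusInfos
                   .Where(an => an.Account.SimPlayers.First().Id == id)
                   .SingleOrDefault());
    }

    // Usage inside the Worker method from the example:
    // using (var ctx = new DataClasses1DataContext())
    // {
    //     var x = Queries.AccountById(ctx, rnd.Next());
    // }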

    Read the article

  • Why do .NET developers offer 32-bit/64-bit versions of .NET assemblies?

    - by Tyler
    Every now and then I see both x86 and x64 versions of a .NET assembly. Consider the following web part for SharePoint. Why wouldn't the developer just offer a single version and let the JIT compiler sort out the rest? When I see these kinds of offerings, is it just that the developer decided to create a native image using a tool like ngen in order to avoid a JIT? Someone please help me out here, I feel like I'm missing something of note.

    Updated: From what I got below, both x86 and x64 builds are offered for one or more of the following reasons:

    - The developer wanted to avoid JITting and created a native image of his code, targeting a given architecture using a tool like ngen.exe.
    - The assembly contains platform-specific COM calls, so there is no point in building it as AnyCPU. In these cases, builds that target different platforms may contain different code.
    - The assembly may contain Win32 calls using P/Invoke which won't get remapped by the JIT, so the build should target the platform it is bound to (see the sketch after this list).
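
    For the P/Invoke case in the last bullet, a minimal sketch of why such an assembly is built for one specific platform; the native DLL name and its export are hypothetical:

    using System;
    using System.Runtime.InteropServices;

    class NativeInterop
    {
        // Hypothetical 32-bit-only native library: a 64-bit process cannot load it,
        // so the managed assembly is built targeting x86 rather than AnyCPU.
        [DllImport("LegacyEngine32.dll")]
        static extern int ComputeScore(int input);

        static void Main()
        {
            // For an AnyCPU build, IntPtr.Size shows which way the JIT went:
            // 4 bytes in a 32-bit process, 8 bytes in a 64-bit process.
            Console.WriteLine("Pointer size: {0} bytes", IntPtr.Size);
            Console.WriteLine(ComputeScore(42));
        }
    }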

    Read the article

  • Performance Improvement in .NET 4.5: Multicore Just-in-Time (JIT).

    - by anobre
    Hello everyone! Reading up on the performance improvements in the .NET 4.5 platform, I came across something extremely interesting: Multicore Just-in-Time (JIT). The theory is very simple: why not use multiple cores for JIT compilation? Beyond that, would it be possible to compile methods in a particular order, so that the first ones compiled are those most likely to be executed? It sounds a little crazy, but that is exactly what Multicore Just-in-Time (JIT) does. And best of all, it does so in an extremely simple way. ASP.NET 4.5 applications already do it by default. In other cases, it takes just two lines of code: one indicating the folder where the file that stores the profile will live, and another to start the procedure. This profile is the file responsible for storing the compilation order of the methods, so that those most likely to be executed early are compiled first. Code for this process:

    ProfileOptimization.SetProfileRoot(@"C:\ProfileRoot");
    ProfileOptimization.StartProfile("profile");

    This compilation optimization is only noticeable after the profile has been created, so nothing will be perceived on the first run. At the end of the process, a file with the chosen name (in this case, profile) will be created in the folder indicated as the root. That's the tip. Cheers!
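
    A minimal sketch, assuming a plain console application rather than ASP.NET, of where those two calls might go; the profile root path and profile name are the placeholders used in the post:

    using System;
    using System.Runtime; // ProfileOptimization lives in System.Runtime (mscorlib, .NET 4.5)

    class Program
    {
        static void Main()
        {
            // Run these before the application's hot paths are JITted,
            // so the background compilation can get ahead of execution.
            ProfileOptimization.SetProfileRoot(@"C:\ProfileRoot");
            ProfileOptimization.StartProfile("profile");

            // ... rest of the application. On later runs the recorded profile
            // is replayed and hot methods are JIT-compiled on spare cores.
            Console.WriteLine("Startup complete");
        }
    }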

    Read the article

  • Does the .NET CLR Really Optimize for the Current Processor

    - by dewald
    When I read about the performance of JITted languages like C# or Java, authors usually say that they should/could theoretically outperform many native-compiled applications. The theory being that native applications are usually just compiled for a processor family (like x86), so the compiler cannot make certain optimizations as they may not truly be optimizations on all processors. On the other hand, the CLR can make processor-specific optimizations during the JIT process. Does anyone know if Microsoft's (or Mono's) CLR actually performs processor-specific optimizations during the JIT process? If so, what kind of optimizations?

    Read the article

  • Iteration speed of int vs long

    - by jqno
    I have the following two programs:

    long startTime = System.currentTimeMillis();
    for (int i = 0; i < N; i++);
    long endTime = System.currentTimeMillis();
    System.out.println("Elapsed time: " + (endTime - startTime) + " msecs");

    and

    long startTime = System.currentTimeMillis();
    for (long i = 0; i < N; i++);
    long endTime = System.currentTimeMillis();
    System.out.println("Elapsed time: " + (endTime - startTime) + " msecs");

    Note: the only difference is the type of the loop variable (int and long). When I run this, the first program consistently prints between 0 and 16 msecs, regardless of the value of N. The second takes a lot longer. For N == Integer.MAX_VALUE, it runs in about 1800 msecs on my machine. The run time appears to be more or less linear in N. So why is this? I suppose the JIT compiler optimizes the int loop to death. And for good reason, because obviously it doesn't do anything. But why doesn't it do so for the long loop as well? A colleague thought we might be measuring the JIT compiler doing its work in the long loop, but since the run time seems to be linear in N, this probably isn't the case.

    Read the article

  • Javascript InfoVis Toolkit: How to specify source/sink for arcs?

    - by Rosarch
    I'm using JIT to render graphs. I'm using the RGraph feature. This JSON defines a graph:

    var json = [
        { 'id': '1',   'name': 'CS 2110',   'adjacencies': ['0', '2'] },
        { 'id': '1.5', 'name': 'INFO 2300', 'adjacencies': ['1'] },
        { 'id': '0',   'name': 'CS 1110',   'adjacencies': ['1'] },
        { 'id': '2',   'name': 'INFO 3300', 'adjacencies': ['1'] }
    ];

    If I want a directed graph, how can I specify which nodes are sources and which are sinks?

    Read the article

  • CLR 4.0 inlining policy? (maybe bug with MethodImplOptions.NoInlining)

    - by ControlFlow
    I've been testing some new CLR 4.0 behavior in method inlining (cross-assembly inlining) and found some strange results:

    Assembly ClassLib.dll:

    using System.Diagnostics;
    using System;
    using System.Reflection;
    using System.Security;
    using System.Runtime.CompilerServices;

    namespace ClassLib
    {
        public static class A
        {
            static readonly MethodInfo GetExecuting = typeof(Assembly).GetMethod("GetExecutingAssembly");

            public static Assembly Foo(out StackTrace stack) // 13 bytes
            {
                // explicit call to GetExecutingAssembly()
                stack = new StackTrace();
                return Assembly.GetExecutingAssembly();
            }

            public static Assembly Bar(out StackTrace stack) // 25 bytes
            {
                // reflection call to GetExecutingAssembly()
                stack = new StackTrace();
                return (Assembly) GetExecuting.Invoke(null, null);
            }

            public static Assembly Baz(out StackTrace stack) // 9 bytes
            {
                stack = new StackTrace();
                return null;
            }

            public static Assembly Bob(out StackTrace stack) // 13 bytes
            {
                // call of non-inlinable method!
                return SomeSecurityCriticalMethod(out stack);
            }

            [SecurityCritical, MethodImpl(MethodImplOptions.NoInlining)]
            static Assembly SomeSecurityCriticalMethod(out StackTrace stack)
            {
                stack = new StackTrace();
                return Assembly.GetExecutingAssembly();
            }
        }
    }

    Assembly ConsoleApp.exe:

    using System;
    using ClassLib;
    using System.Diagnostics;

    class Program
    {
        static void Main()
        {
            Console.WriteLine("runtime: {0}", Environment.Version);
            StackTrace stack;
            Console.WriteLine("Foo: {0}\n{1}", A.Foo(out stack), stack);
            Console.WriteLine("Bar: {0}\n{1}", A.Bar(out stack), stack);
            Console.WriteLine("Baz: {0}\n{1}", A.Baz(out stack), stack);
            Console.WriteLine("Bob: {0}\n{1}", A.Bob(out stack), stack);
        }
    }

    Results:

    runtime: 4.0.30128.1
    Foo: ClassLib, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null
       at ClassLib.A.Foo(StackTrace& stack)
       at Program.Main()
    Bar: ClassLib, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null
       at ClassLib.A.Bar(StackTrace& stack)
       at Program.Main()
    Baz:
       at Program.Main()
    Bob: ClassLib, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null
       at Program.Main()

    So the questions are:

    1. Why does the JIT not inline the Foo and Bar calls the way it does Baz? They are less than 32 bytes of IL and are good candidates for inlining.
    2. Why did the JIT inline the call to Bob and the inner call to SomeSecurityCriticalMethod, which is marked with the [MethodImpl(MethodImplOptions.NoInlining)] attribute?
    3. Why does GetExecutingAssembly return a valid assembly when it is called by the inlined Baz and SomeSecurityCriticalMethod methods? I expected it to perform a stack walk to detect the executing assembly, but the stack would contain only the Program.Main() call and no methods of the ClassLib assembly, so ConsoleApp should be returned.

    Read the article

  • Does mprotect flush the instruction cache on ARM Linux?

    - by Adam Goode
    I am writing a JIT on ARM Linux that executes an instruction set that contains self-modifying code. The instruction set does not have any cache flush instructions (similar to x86 in that respect). If I write out some code to a page and then call mprotect on that page, is that sufficient to invalidate the instruction cache? Or do I also need to use the cacheflush syscall on those pages?

    Read the article

  • v8 is too slow for my purpose

    - by Scott
    I'm working on a music visualization plugin for libvisual. It's an AVS clone -- AVS being from Winamp. Right now I have a superscope plugin. This element has 4 scripts, and "point" is run at every pixel. You can imagine that it has to be rather fast. The original libvisual avs clone had a JIT compiler that was really fast, but it had some bugs and wasn't fully implemented, so I decided to try v8. Well, v8 is too slow running the compiled script at every pixel. Is there any other script engine that would be pretty fast for this purpose?

    Read the article

  • "Inlining" (kind of) functions at runtime in C

    - by fortran
    Hi, I was thinking about a typical problem that is very JIT-able, but hard to approach with raw C. The scenario is setting up a series of function pointers that are going to be "composed" (as in maths function composition) once at runtime and then called lots and lots of times. Doing it the obvious way involves many virtual calls, which are expensive, and if there are enough nested functions to fill the CPU branch prediction table completely, then performance will drop considerably. In a language like Lisp, I could probably process the code and substitute the "virtual" call with the actual contents of the functions and then call compile to get an optimized version, but that seems very hacky and error prone to do in C, and using C is a requirement for this problem ;-) So, do you know if there's a standard, portable and safe way to achieve this in C? Cheers

    Read the article

  • Need some help deciphering a line of assembler code, from .NET JITted code

    - by Lasse V. Karlsen
    In a C# constructor that ends up with a call to this(...), the actual call gets translated to this:

    0000003d call dword ptr ds:[199B88E8h]

    What are the contents of the DS register here? I know it's the data segment, but is this a call through a VMT table or similar? I doubt it, though, since this(...) wouldn't be a call to a virtual method, just another constructor. I ask because the value at that location seems to be bad in some way: if I hit F11, trace into (Visual Studio 2008), on that call instruction, the program crashes with an access violation. The code is deep inside a 3rd party control library where, though I have the source code, I don't have the assemblies compiled with enough debug information that I can trace it through C# code, only through the disassembler, and then I have to match that back to the actual code. The C# code in question is this:

    public AxisRangeData(AxisRange range) : this(range, range.Axis) { }

    Reflector shows me this IL code:

    .maxstack 8
    L_0000: ldarg.0
    L_0001: ldarg.1
    L_0002: ldarg.1
    L_0003: callvirt instance class DevExpress.XtraCharts.AxisBase DevExpress.XtraCharts.AxisRange::get_Axis()
    L_0008: call instance void DevExpress.XtraCharts.Native.AxisRangeData::.ctor(class DevExpress.XtraCharts.ChartElement, class DevExpress.XtraCharts.AxisBase)
    L_000d: ret

    It's that last call there, to the other constructor of the same class, that fails. The debugger never surfaces inside the other method, it just crashes. The disassembly for the method after JITting is this:

    00000000 push ebp
    00000001 mov ebp,esp
    00000003 sub esp,14h
    00000006 mov dword ptr [ebp-4],ecx
    00000009 mov dword ptr [ebp-8],edx
    0000000c cmp dword ptr ds:[18890E24h],0
    00000013 je 0000001A
    00000015 call 61843511
    0000001a mov eax,dword ptr [ebp-4]
    0000001d mov dword ptr [ebp-0Ch],eax
    00000020 mov eax,dword ptr [ebp-8]
    00000023 mov dword ptr [ebp-10h],eax
    00000026 mov ecx,dword ptr [ebp-8]
    00000029 cmp dword ptr [ecx],ecx
    0000002b call dword ptr ds:[1889D0DCh]   // range.Axis
    00000031 mov dword ptr [ebp-14h],eax
    00000034 push dword ptr [ebp-14h]
    00000037 mov edx,dword ptr [ebp-10h]
    0000003a mov ecx,dword ptr [ebp-0Ch]
    0000003d call dword ptr ds:[199B88E8h]   // this(range, range.Axis)?
    00000043 nop
    00000044 mov esp,ebp
    00000046 pop ebp
    00000047 ret

    Basically what I'm asking is this:

    1. What is the purpose of the ds:[ADDR] indirection here? A VMT table is only for virtual methods, isn't it? And this is a constructor.
    2. Could the constructor have yet to be JITted, which could mean that the call would actually go through a JIT shim?

    I'm afraid I'm in deep water here, so anything might and could help.

    Edit: Well, the problem just got worse, or better, or whatever. We are developing the .NET feature in a C# project in a Visual Studio 2008 solution, and debugging and developing through Visual Studio. However, in the end, this code will be loaded into a .NET runtime hosted by a Win32 Delphi application. In order to facilitate easy experimentation with such features, we can also configure the Visual Studio project/solution/debugger to copy the produced DLLs to the Delphi app's directory, and then execute the Delphi app through the Visual Studio debugger. It turns out the problem goes away if I run the program outside of the debugger, but during debugging it crops up every time. Not sure that helps, but since the code isn't slated for production release for another 6 months or so, it takes some of the pressure off for the test release that we have soon.
I'll dive into the memory parts later, but probably not until over the weekend, and post a followup.

    Read the article

  • llvm clang struct creating functions on the fly

    - by anon
    I'm using LLVM-clang on Linux. Suppose in foo.cpp I have:

    struct Foo { int x, y; };

    How can I create a function "magic" such that:

    typedef Foo (*SomeFunc)(Foo a, Foo b);
    SomeFunc func = magic("struct Foo { int x, y; };");

    so that:

    func(a, b); // returns a.x + b.y, where a and b are Foo values

    Note: so basically, "magic" needs to take a char*, have LLVM parse it to get how C++ lays out the struct, then create a function on the fly that returns a.x + b.y. Thanks!

    Read the article

  • When is a method eligible to be inlined by the CLR?

    - by Ani
    I've observed a lot of "stack-introspective" code in applications, which often implicitly relies on its containing methods not being inlined for its correctness. Such methods commonly involve calls to:

    - MethodBase.GetCurrentMethod
    - Assembly.GetCallingAssembly
    - Assembly.GetExecutingAssembly

    Now, I find the information surrounding these methods to be very confusing. I've heard that the runtime will not inline a method that calls GetCurrentMethod, but I can't find any documentation to that effect. I've seen posts on StackOverflow on several occasions, such as this one, indicating the CLR does not inline cross-assembly calls, but the GetCallingAssembly documentation strongly indicates otherwise. There's also the much-maligned [MethodImpl(MethodImplOptions.NoInlining)], but I am unsure if the CLR considers this to be a "request" or a "command." Note that I am asking about inlining eligibility from the standpoint of contract, not about when current implementations of the JITter decline to consider methods because of implementation difficulties, or about when the JITter finally ends up choosing to inline an eligible method after assessing the trade-offs. I have read this and this, but they seem to be more focused on the last two points (there are passing mentions of MethodImplOptions.NoInlining and "exotic IL instructions", but these seem to be presented as heuristics rather than as obligations). When is the CLR allowed to inline?
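
    As an illustration of the stack-introspective pattern the question describes, a minimal sketch (not taken from the post) of a helper whose correctness depends on not being inlined:

    using System;
    using System.Reflection;
    using System.Runtime.CompilerServices;

    static class CallerInfo
    {
        // The NoInlining request matters here: if this method were inlined into
        // its caller, GetCallingAssembly would see one frame too few and could
        // report the wrong assembly.
        [MethodImpl(MethodImplOptions.NoInlining)]
        public static string Describe()
        {
            MethodBase current = MethodBase.GetCurrentMethod();
            Assembly caller = Assembly.GetCallingAssembly();
            return string.Format("{0} called from {1}", current.Name, caller.GetName().Name);
        }
    }

    class Program
    {
        static void Main()
        {
            Console.WriteLine(CallerInfo.Describe());
        }
    }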

    Read the article

  • fast on-demand c++ compilation [closed]

    - by Amit Prakash
    I'm looking at the possibility of building a system where, when a query hits the server, we turn the query into C++ code, compile it as a shared object, and then run the code. The time for compilation itself needs to be small for this to be worthwhile. My code can generate the corresponding C++ code, but if I have to write it out to disk, invoke gcc to get a .so file, and then run it, it does not seem to be worth it. Are there ways in which I can get a small snippet of code to compile and be ready as a shared object fast (a significant start-up time before the queries arrive is acceptable)? If such a tool has a permissive license, that's a further plus. Edit: I have a very restrictive query language that the users can use, so the security threat is not relevant. My own code translates the query into C++ code. The answer mentioning clang is perfect.

    Read the article
