Search Results

Search found 215 results on 9 pages for 'jit'.

Page 7/9 | < Previous Page | 3 4 5 6 7 8 9  | Next Page >

  • LLVM: bitcode with llvm-gcc (mingw) for windows

    - by TheShow
    Hi, I'm currently building a small JIT compiler. For the language I need a runtime library for some special math functions. I think the best approach would be to compile the lib to bitcode and link it. The compiler will be integrated into a product, so it must work under Windows (VC10, 64-bit). So is it possible to build the math lib with the mingw llvm-gcc build and link it later with the JITed code? Or are there any problems regarding the portability of bitcode built with llvm-gcc under mingw? If there are problems, what solution would you suggest?

    Read the article

  • use ngen and bundle the .NET DLLs to run an application on a .NET-less machine?

    - by Camilo Martin
    AFAIK, ngen turns MSIL into native code (also referred to as pre-JIT), however I never paid too much attention to its startup performance impact. Ngen'd applications still require the .NET base class libraries (the runtime). Since the base class libraries have everything our .NET assemblies need (correct?), would it be possible to ship the framework's DLLs with my ngen'd application so that it does not require the runtime to be installed? (e.g., the scenario for most Windows XP machines) Oh, and please don't bother mentioning Remotesoft's Salamander Linker or Xenocode's Postbuild. They are not for my (and many's) current budget (and they seem to simply bundle the framework in a virtualized environment, which means big download sizes and slow startup times, I believe).

    Read the article

  • rails using jruby 1.5 - slow!!

    - by gucki
    Hi! I'm currently using Passenger with REE 1.8.7 in production for a Rails 2.3.5 project using PostgreSQL as a database. ab -n 10000 -c 100: 285.69 [#/sec] (mean). I read JRuby should be the fastest solution, so I installed jruby-1.5.0.rc2 together with the JDBC postgres adapter and GlassFish. As the performance is really poor, I also started running my application using "jruby --server -J-Druby.jit.threshold=0 script/server -e production". Anyway, I only get ab -n 10000 -c 100: 43.88 [#/sec] (mean). threadsafe! is activated in my Rails config. Java seems to use all cores, CPU usage is around 350% (top). ruby -v: jruby 1.5.0.RC2 (ruby 1.8.7 patchlevel 249) (2010-04-28 7c245f3) (Java HotSpot(TM) 64-Bit Server VM 1.6.0_16) [amd64-java]. I wonder what I'm doing wrong and how to get better performance with JRuby than with REE? Thanks, Corin

    Read the article

  • Is there an equivalent to Java's ClassFileTransformer in .NET? (a way to replace a class)

    - by Alix
    I've been searching for this for quite a while with no luck so far. Is there an equivalent to Java's ClassFileTransformer in .NET? Basically, I want to create a class CustomClassFileTransformer (which in Java would implement the interface ClassFileTransformer) that gets called whenever a class is loaded, and is allowed to tweak it and replace it with the tweaked version. I know there are frameworks that do similar things, but I was looking for something more straightforward, like implementing my own ClassFileTransformer. Is it possible?

    EDIT #1. More details about why I need this: Basically, I have a C# application and I need to monitor the instructions it wants to run in order to detect read or write operations to fields (operations Ldfld and Stfld) and insert some instructions before the read/write takes place. I know how to do this (except for the part where I need to be invoked to replace the class): for every method whose code I want to monitor, I must:

    1. Get the method's MethodBody using MethodBase.GetMethodBody().
    2. Transform it to a byte array with MethodBody.GetILAsByteArray(). The byte[] it returns contains the bytecode.
    3. Analyse the bytecode as explained here, possibly inserting new instructions or deleting/modifying existing ones by changing the contents of the array.
    4. Create a new method and use the new bytecode to create its body, with MethodBuilder.CreateMethodBody(byte[] il, int count), where il is the array with the bytecode.

    I put all these tweaked methods in a new class and use the new class to replace the one that was originally going to be loaded. An alternative to replacing classes would be somehow getting notified whenever a method is invoked. Then I'd replace the call to that method with a call to my own tweaked method, which I would tweak only the first time it is invoked and then put in a dictionary for future uses, to reduce overhead (for future calls I'll just look up the method and invoke it; I won't need to analyse the bytecode again). I'm currently investigating ways to do this and LinFu looks pretty interesting, but if there were something like a ClassFileTransformer it would be much simpler: I just rewrite the class, replace it, and let the code run without monitoring anything. An additional note: the classes may be sealed. I want to be able to replace any kind of class; I cannot impose restrictions on their attributes.

    EDIT #2. Why I need to do this at runtime: I need to monitor everything that is going on so that I can detect every access to data. This applies to the code of library classes as well. However, I cannot know in advance which classes are going to be used, and even if I knew every possible class that might get loaded, it would be a huge performance hit to tweak all of them instead of waiting to see whether they actually get invoked or not.

    POSSIBLE (BUT PRETTY HARDCORE) SOLUTION. In case anyone is interested (and I see the question has been faved, so I guess someone is), this is what I'm looking at right now: implementing the CLR profiling API and registering for the events I'm interested in, in my case whenever a JIT compilation starts. An extract of the blog post: In your ICorProfilerCallback2::ModuleLoadFinished callback, you call ICorProfilerInfo2::GetModuleMetadata to get a pointer to a metadata interface on that module. QI for the metadata interface you want. Search MSDN for "IMetaDataImport", and grope through the table of contents to find topics on the metadata interfaces. Once you're in metadata-land, you have access to all the types in the module, including their fields and function prototypes. You may need to parse metadata signatures, and this signature parser may be of use to you. In your ICorProfilerCallback2::JITCompilationStarted callback, you may use ICorProfilerInfo2::GetILFunctionBody to inspect the original IL, and ICorProfilerInfo2::GetILFunctionBodyAllocator and then ICorProfilerInfo2::SetILFunctionBody to replace that IL with your own.

    The great news: I get notified when a JIT compilation starts and I can replace the bytecode right there, without having to worry about replacing the class, etc. The not-so-great news: you cannot invoke managed code from the API's callback methods, which makes sense but means I'm on my own parsing the IL code, etc., as opposed to being able to use Cecil, which would've been a breeze. I don't think there's a simpler way to do this without using AOP frameworks (such as PostSharp). If anyone has any other idea, please let me know. I'm not marking the question as answered yet.
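
    For illustration only, a minimal sketch of steps 1 and 2 above, pulling a method's IL out as a byte array with reflection (the target type Foo and method Bar are hypothetical placeholders):

        using System;
        using System.Reflection;

        class Foo
        {
            public void Bar() { Console.WriteLine("hello"); }
        }

        class IlDumpSketch
        {
            static void Main()
            {
                // Step 1: grab the method body via reflection.
                MethodInfo method = typeof(Foo).GetMethod("Bar");
                MethodBody body = method.GetMethodBody();

                // Step 2: the raw IL stream of the method as a byte array.
                byte[] il = body.GetILAsByteArray();

                Console.WriteLine("{0}.{1} has {2} bytes of IL",
                    method.DeclaringType.Name, method.Name, il.Length);

                // Steps 3 and 4 (rewriting the opcodes and emitting a new body with
                // MethodBuilder.CreateMethodBody) are the hard part and are omitted here.
            }
        }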

    Read the article

  • DragDrop registration did not succeed in Setup Project

    - by rodnower
    Hello, we have an installation project in a Visual Studio solution (Other project types - Setup and deployment - Setup project). This project has another, library-type project with an installer class, named InstallationCore, as its project output. In the custom actions I call the Install and Uninstall functions of the InstallationCore installer. InstallationCore has Windows Forms for interaction with the user. There, in the forms, I use drag-and-drop functionality to drag text from a TreeView to a TextBox. But on the line: txbUserName.AllowDrop = true; I get an error from the JIT debugger: Unhandled exception has occurred: DragDrop registration did not succeed. System.InvalidOperationException: DragDrop registration did not succeed. And a long stack trace after that. It is important to say that when I run the Install function from a test project the error does not occur and everything works fine. The error occurs only when I run the .msi package. Any suggestions? Thanks in advance.
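
    One possible workaround sketch, assuming the root cause is that OLE drag-and-drop registration requires a single-threaded apartment while MSI custom actions typically run the installer class on an MTA thread (which would explain why the same form works from a test project): show the form from a dedicated STA thread.

        using System.Threading;
        using System.Windows.Forms;

        static class InstallerUiSketch
        {
            // Call this from the Install override instead of showing the form
            // directly on the thread the MSI engine provides.
            public static void ShowUserNameForm()
            {
                var uiThread = new Thread(() =>
                {
                    // AllowDrop / DoDragDrop need OLE, and OLE needs an STA thread.
                    // Replace Form with the InstallationCore form hosting the TreeView/TextBox.
                    Application.Run(new Form());
                });
                uiThread.SetApartmentState(ApartmentState.STA);
                uiThread.Start();
                uiThread.Join(); // block the custom action until the dialog is closed
            }
        }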

    Read the article

  • How to take a screenshot with Mono C#?

    - by vagabond
    I'm trying to use code to get a screenshot in Mono C#, but I'm getting a System.NotImplementedException when I call CopyFromScreen. My code works with .NET, so is there an alternate way of getting a screenshot using Mono?

        // requires System.Drawing, System.Drawing.Imaging and System.Windows.Forms
        Bitmap bitmap = new Bitmap(Screen.PrimaryScreen.Bounds.Width, Screen.PrimaryScreen.Bounds.Height);
        Graphics graphics = Graphics.FromImage(bitmap as Image);
        graphics.CopyFromScreen(0, 0, 0, 0, bitmap.Size); // throws NotImplementedException on Mono
        System.IO.MemoryStream memoryStream = new System.IO.MemoryStream();
        bitmap.Save(memoryStream, ImageFormat.Png);
        bitmap.Save("/tmp/screenshot.png", ImageFormat.Png);

    I am using Mono JIT compiler version 2.4.2.3 (Debian 2.4.2.3+dfsg-2).
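
    One possible fallback sketch, assuming CopyFromScreen is simply not implemented by the installed Mono/libgdiplus and that ImageMagick's import command is available on the X11 machine: shell out to the external capture tool and load the result back in.

        using System.Diagnostics;
        using System.Drawing;

        static class ScreenshotFallbackSketch
        {
            public static Bitmap Capture(string path)
            {
                // "import -window root" grabs the whole screen into the given file.
                var psi = new ProcessStartInfo("import", "-window root " + path)
                {
                    UseShellExecute = false
                };
                using (var p = Process.Start(psi))
                {
                    p.WaitForExit();
                }
                return new Bitmap(path); // load the captured image back into System.Drawing
            }
        }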

    Read the article

  • Auto-generate WebControls

    - by Adrian K
    I want to generate .NET code from a template so that it's more rapid, so lazy developers (and I mean that in the nicest possible way!) don't have to write them in the IDE, compile them, etc... I know I can roll my own tool which generates the code using reflection (by reading in some text file, etc.), but I just wondered if there was an easier way than starting from scratch, since this is what ASP.NET basically does already; so is there any way to leverage this? E.g., to quote Peter A. Bromberg: Even an ASPX page with no code on it gets turned into an instance of the System.Web.UI.Page class. The page is parsed by the ASP.NET engine when it is first requested, and then its JIT-compiled version is cached in the Temporary ASP.NET Files folder as long as the application is running and the .aspx page hasn't been changed. Ideally I want to auto-generate WebControls, but examples of anything closely related will do. C# examples preferred also, but anything considered :)
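
    A minimal sketch of one possible middle ground between a hand-rolled reflection tool and the full ASP.NET page build: compiling a template-generated WebControl subclass at runtime with CodeDom. The generated class name and its extra member are made up purely for illustration.

        using System;
        using System.CodeDom.Compiler;
        using Microsoft.CSharp;

        static class ControlGeneratorSketch
        {
            static void Main()
            {
                // Source text produced by whatever template the developer fills in.
                string source = @"
                    using System.Web.UI.WebControls;

                    public class GeneratedLabel : Label
                    {
                        public string Watermark;    // illustrative extra member from the template
                    }";

                var provider = new CSharpCodeProvider();
                var options = new CompilerParameters { GenerateInMemory = true };
                options.ReferencedAssemblies.Add("System.dll");
                options.ReferencedAssemblies.Add("System.Web.dll");

                CompilerResults results = provider.CompileAssemblyFromSource(options, source);
                if (results.Errors.HasErrors)
                    throw new InvalidOperationException("Template compilation failed");

                // The freshly built control type can now be instantiated and added to a page.
                object control = results.CompiledAssembly.CreateInstance("GeneratedLabel");
                Console.WriteLine(control.GetType().BaseType); // System.Web.UI.WebControls.Label
            }
        }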

    Read the article

  • Reasons to learn MSIL

    - by mannu
    Hi, Learning MSIL is fun and all that. Understanding what is going on "under the hood" can in many ways improve how you write your code performance-wise. However, the IL produced by the compiler is quite verbose and does not tell the whole story, since the JIT will optimize away a lot of the code. I, personally, have made good use of my very basic IL understanding when I've had to make a small fix in an assembly I do not have the source code for. But I could just as well have used Reflector to generate C# code. I would like to know if you've ever made good use of MSIL understanding and/or why you think it is worth learning (except for the fun of it, of course). I'd also like to know if you think one should not learn it, and why.

    Read the article

  • A compiler for automata theory

    - by saadtaame
    I'm designing a programming language for automata theory. My goal is to allow programmers to use machines (DFA, NFA, etc.) as units in expressions. I'm confused about whether the language should be compiled, interpreted, or JIT-compiled. My intuition is that compilation is a good choice, since some operations might take too much time (converting NFAs to equivalent DFAs can be expensive). Translating to x86 seems good. There is one issue, however: I want the user to be able to plot machines. Any ideas?

    Read the article

  • Just in time debugger is popping up when debugging service client

    - by MAHESH K .M
    I have created a WCF service and hosted it in IIS. This service is called from another web application. When I try to debug the service call in the web application, the Just-In-Time debugger pops up. It runs fine in normal mode; only in debug mode does the service not run, and the JIT debugger pops up. Why does this happen, and how can I debug the service call in this scenario? The service is in .NET 4 and the web application is in .NET 1.1. Thanks in advance, Mahesh.

    Read the article

  • d3 tree - parents having same children

    - by Larry Anderson
    I've been transitioning my code from the JIT (JavaScript InfoVis Toolkit) to D3, and working with the tree layout. I've replicated http://mbostock.github.com/d3/talk/20111018/tree.html with my tree data, but I wanted to do a little more. In my case I will need to create child nodes that merge back to form a parent at a lower level, which I realize is more of a directed-graph structure, but I would like the tree to accommodate it (i.e., notice that common IDs between child nodes should merge). So basically a tree that divides like normal on the way from parents to children, but then also has the ability to bring those child nodes together to be parents (sort of an incestuous relationship or something :)). This question asks something similar: How to layout a non-tree hierarchy with D3. It sounds like I might be able to use hierarchical edge bundling in conjunction with the tree hierarchy layout, but I haven't seen that done. I might be a little off with that, though.

    Read the article

  • Ways to optimize Android App code based on function call stack?

    - by K-RAN
    I've been told that Android OS stores all function calls on a stack. This can lead to many problems and cause 'hiccups' at runtime, even if a program is functionalized properly, correct? So the question is: how can we prevent this from happening? The obvious solution is to functionalize less, along with other sensible acts such as refraining from excessively/needlessly creating objects, performing static calls to functions that don't access fields, etc. Is there another way, though? Or can this only be done through careful code writing on the programmers' part? Does the JVM/JIT automatically optimize the bytecode at compile time to account for this? Thanks a lot for your responses!

    Read the article

  • Ngen or compile to native code is better

    - by Raghav55
    I want to know which one is better: native code generated ahead of time by NGen.exe, or run-time conversion of IL to native code by the JIT?

        using System;

        public class Vehicle
        {
            public Vehicle()
            {
            }

            public string Name { get; set; }
            public string Model { get; set; }
        }

        class Program
        {
            static void Main(string[] args)
            {
                Vehicle aVech = new Vehicle();
                aVech.Name = "BUS";
                aVech.Model = "1980";
            }
        }

    Read the article

  • Best way to use Cradle with Express.js (CouchDB, Node.js)

    - by Costa
    I'm building my website ( http://tedxgramercy.jit.su ) with express.js and so far I've been using the http.request method in node to access couch, and that's been cool. I've learned lots about how http, couch, and node work, which is awesome. Anyways, I'm thinking of moving over to cradle now (Let me know if you have a strong opinion about this) and I'd like to know the "right" way to set this up. Should I...

    1. require() cradle and make a new connection to my db in each separate route?
    2. create my database connection once, and then just pass that connection by require()ing the connection in each route? (if so, how do I do that?)

    Thanks!!!

    Read the article

  • Cross-platform independent development

    - by Joe Wreschnig
    Some years ago, if you wrote in C and some subset of C++ and used a sufficient number of platform abstractions (via SDL or whatever), you could run on every platform an indie could get on - Linux, Windows, Mac OS of various versions, obscure stuff like BeOS, and the open consoles like the GP2X and post-death Dreamcast. If you got a contract for a closed platform at some point, you could port your game to that platform with "minimal" code changes as well.

    Today, indie developers must use XNA to get on the Xbox 360 (and upcoming Windows phone); must not use XNA to work anywhere else but Windows; until recently had to use Java on Android; Flash doesn't run on phones, HTML5 doesn't work on IE. Unlike e.g. DirectX vs. OpenGL or Windows vs. Unix, these are changes to the core language you write your code in and can't be papered over without, basically, writing a compiler. You can move some game logic into scripts and include an interpreter - except when you can't, because the iPhone SDK doesn't allow it, and performance suffers because no one allows JIT.

    So what can you do if you want a really cross-platform portable game, or even just a significant body of engine and logic code?

    - Is this not a problem because the platforms have fundamentally diverged - it's just plain not worthwhile to try to target both an iPhone and the Xbox 360 with any shared code because such a game would be bad? (I find this very unlikely. I can easily see wanting to share a game between a Windows Mobile phone and an Android, or an Xbox 360 and an iPad.)
    - Are interfaces so high-level now that porting time is negligible? (I might believe this for business applications, but not for games with strict performance requirements.)
    - Is this going to become more pronounced in the future? Is the split going to be, somewhat scarily, still down vendor lines?
    - Will we all rely on high-level middleware like Flash or Unity to get anything cross-platform done?

    tl;dr - Is porting a problem, is it going to be a bigger problem in the future, and if so how do we solve it?

    Read the article

  • Live Debugging

    - by Daniel Moth
    Based on my classification of diagnostics, you should know what live debugging is NOT about - at least according to me :-) and in this post I'll share how I think of live debugging. These are the (outer) steps to live debugging:

    1. Get the debugger in the picture.
    2. Control program execution.
    3. Inspect state.
    4. Iterate between 2 and 3 as necessary.
    5. Stop debugging (and potentially start a new iteration going back to step 1).

    Step 1 has two options: start with the debugger attached, or execute your binary separately and attach the debugger later. You might say there is a 3rd option, where the app notifies you that there is an issue, referred to as JIT debugging. However, that is just a variation of the attach, because that is when you start the debugging session: when you attach. I'll be covering in future posts how this step works in Visual Studio.

    Step 2 is about pausing (or breaking) your app so that it makes no progress and remains "frozen". A sub-variation is to pause only parts of its execution, or in other words to freeze individual threads. I'll be covering in future posts the various ways you can perform this step in Visual Studio.

    Step 3 is about seeing what the state of your program is when you have paused it. Typically it involves comparing the state you are finding with a mental picture of what you thought the state would be. Or simply checking invariants about the intended state of the app against the actual state of the app. I'll be covering in future posts the various ways you can perform this step in Visual Studio.

    Step 4 is necessary if you need to inspect more state - rinse and repeat. Self-explanatory, and will be covered as part of steps 2 & 3.

    Step 5 is the most straightforward, with 3 options: detach the debugger; terminate your binary through the normal way that it terminates (e.g. close the main window); and terminate the debugging session through your debugger, with the result that it terminates the execution of your program too. In a future post I'll cover the ways you can detach or terminate the debugger in Visual Studio.

    I found an old picture I used to use to map the steps above on Visual Studio 2010. It is basically the Debug menu with colored rectangles around each menu mapping the menu to one of the first 3 steps (step 5 was merged with step 1 for that slide). Here it is in case it helps: Stay tuned for more... Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Rapid Planning: Next Generation MRP

    - by john.bermudez
    MRP has been a mainstay of manufacturing systems for 40 years. MRP evolved from simple inventory planning systems to become the heart of the MRPII systems which eventually became ERP. While the applications surrounding it have become broader, more sophisticated and web-based, MRP continues to operate in the loneliness of the Saturday night batch window, quietly exploding bills of materials and logging exceptions for hours.

    During this same 40 years, manufacturing business processes have seen countless changes and improvements including JIT, TQM, Six Sigma, Flow Manufacturing, Lean Manufacturing and Supply Chain Management. Although much logic has been added to MRP to deal with new manufacturing processes, it has not been able to keep up with the real-time pace of today's supply chain. As a result, planners have devised ingenious ways to trick MRP into handling new processes, but often need to dump the output into spreadsheets of their own design in the hope of wrestling thousands of exceptions to ground.

    Oracle's new Rapid Planning application is just what companies still running MRP have been waiting for! The newest member of the Value Chain Planning product line, Rapid Planning is designed to empower planners with comprehensive supply planning that runs online in minutes, not hours. It enables a planner to simulate the incremental impact of a new order or re-run an entire plan in a separate sandbox. Rapid Planning does a complete multi-level bill of material explosion like MRP, but plans orders considering material and capacity constraints. Considering material and capacity constraints in planning can help you quickly reduce inventory and improve on-time shipments.

    Rapid Planning is an APS application that leverages years of Oracle development experience and customer feedback. Rather than rely exclusively on black-box heuristics, Rapid Planning is designed to give planners the computing power to use their industry experience and business knowledge to improve MRP. For example, Rapid Planning has a powerful worksheet user interface with built-in query capability that allows the planner to locate the orders she is interested in and use a mass update function to make quick work of large changes. The planner can save these queries and this unique user interface to personalize their planning environment.

    Most importantly, Rapid Planning is designed to do supply planning in today's dynamic supply chain environment. It can be used to supplement MRP or replace MRP entirely. It generates plans that provide order-by-order details with aggregate key performance indicators that enable planners to quickly assess the overall business impact of a plan. To find out more about how Rapid Planning can help improve your MRP, please contact me at [email protected] or your Oracle Account Manager.

    Read the article

  • Web Development Goes Pre-Visual InterDev

    - by Ken Cox [MVP]
    As a longtime and hardcore ASP.NET webforms developer, I’m finding the new client-side development world a bit of a grind. I love learning new technologies, but I can’t help feeling we’ve regressed and lost our old RAD advantage as we move heavy lifting to the client.

    For my latest project, I’m using Telerik’s KendoUI in Visual Studio 2012. To say I feel clumsy writing this much JavaScript is an understatement. It seems like the only safe way to ‘write’ this code is by copying a working snippet from someone else and pasting it into my HTML page. For me, JavaScript has largely been for small UI tasks like client-side validation and a bit of AJAX – and often emitted by a server-side control. I find myself today lost in nests of curly braces that Ctrl+K, Ctrl+D doesn’t seem to understand that well either. IntelliSense, my old syntax saviour, doesn’t seem to have kept up with this cobweb of code either. Code completion? Not seeing it.

    As I fumbled about this evening, I thought about how web development rocketed forward when Microsoft introduced Visual InterDev. Its Design-Time Controls (DTCs) changed the way we created sites. All the iterations of Visual Studio have enhanced that server-side experience where you let a tool write the bulk of the code and manually finesse it from there. What happened? Why am I typing properties and values (especially default values!) into VS 2012 to get a client-side grid on a page? Where are the drag and drop objects that traditionally provided 70 percent of the mark-up and configuration? Did we forget how to write Property Pages where you enter a value and the correct syntax appears magically in the source code?

    To me, the tooling was looking the other way as the scene shifted from server-side code to nimble client-side script. It’ll have to catch up. Although JavaScript is the lingua franca of web browsers, the language is unwieldy, tough to maintain, and messy to debug. If a .NET JIT compiler can turn our VB, F#, and C# source code into an Intermediate Language that executes on a computer, I don’t see why there can’t be a client-side compiler that turns a .NET language into JavaScript that browsers can consume.

    Read the article

  • Force x86 CLR on 'Any CPU' .NET assembly

    - by jeffora
    In .NET, the 'Platform Target: Any CPU' compiler option allows a .NET assembly to run as 64bit on a x64 machine, and 32bit on an x86 machine. It is also possible to force an assembly to run as x86 on an x64 machine using the 'Platform Target: x86' compiler option. Is it possible to run an assembly with the 'Any CPU' flag, but determine whether it should be run in the x86 or x64 CLR? Normally this decision is made by the CLR/OS Loader (as is my understanding) based on the bitness of the underlying system. I am trying to write a C# .NET application that can interact with (read: inject code into) other running processes. x64 processes can only inject into other x64 processes, and the same with x86. Ideally, I would like to take advantage of JIT compilation and the Any CPU option to allow a single application to be used to inject into either x64 or x86 processes (on an x64 machine). The idea is that the application would be compiled as Any CPU. On an x64 machine it would run as x64. If the target process is x86, it should relaunch itself, forcing the CLR to run it as x86. Is this possible?
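
    A minimal sketch of one possible workaround, assuming the bitness of a process cannot change once it has started: the Any CPU build detects it is running 64-bit, copies itself, marks the copy 32-bit with the Windows SDK's CorFlags tool, and relaunches it for the x86 work. The --x86-target switch and file names below are made up for illustration.

        using System;
        using System.Diagnostics;
        using System.IO;

        static class BitnessRelaunchSketch
        {
            static void Main(string[] args)
            {
                bool runningAs64Bit = IntPtr.Size == 8;
                bool targetIs32Bit = args.Length > 0 && args[0] == "--x86-target"; // hypothetical switch

                if (runningAs64Bit && targetIs32Bit)
                {
                    // Copy ourselves and force the copy into the 32-bit CLR.
                    string self = Process.GetCurrentProcess().MainModule.FileName;
                    string copy = Path.Combine(Path.GetTempPath(), "injector32.exe");
                    File.Copy(self, copy, true);

                    // CorFlags.exe ships with the Windows SDK; /32BIT+ marks the image as x86-only.
                    // (This invalidates a strong-name signature unless the copy is re-signed.)
                    Process.Start("CorFlags.exe", "\"" + copy + "\" /32BIT+").WaitForExit();
                    Process.Start(copy, string.Join(" ", args)).WaitForExit();
                    return;
                }

                Console.WriteLine("Running as {0}-bit", IntPtr.Size * 8);
                // ... do the injection work appropriate to this bitness here ...
            }
        }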

    Read the article

  • Understanding LinkDemand Security on a webserver

    - by robertpnl
    Hi, after deploying an ASP.NET application to a webserver, I get this error message when using code from an external assembly: "LinkDemand - The type of the first permission that failed was: System.Security.PermissionSet. The Zone of the assembly that failed was: MyComputer." The assembly is included in the \bin folder and not in the GAC. I have tried to find out what LinkDemand exactly is and why this message is raised, but looking for more information I still don't see exactly what the problem is. I also tried adding the PermissionSetAttribute to the class where the exception happens: [System.Security.Permissions.PermissionSetAttribute(System.Security.Permissions.SecurityAction.LinkDemand, Name = "FullTrust")] Then the exception is raised in another class of the assembly, and so on. My questions are:

    - What exactly is going wrong here? Is it true, as I understand it, that .NET cannot check the code during JIT?
    - Is there maybe a security policy that blocks this (machine.config)?
    - Can I set the PermissionAttribute for all classes within an assembly?

    Thanks.
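
    For context: a link demand is a declarative CAS check that the JIT performs once, against the immediate caller's assembly, when it compiles a call into the protected code. A hedged sketch of the kind of declaration the external assembly probably contains (the class and method names are hypothetical):

        using System.Security.Permissions;

        public class ExternalHelper // hypothetical class in the external assembly
        {
            // While JIT-compiling any method that calls DoWork, the runtime checks that
            // the calling assembly is granted FullTrust; if it is not (for example when
            // the site runs at a partial trust level set in machine.config/web.config),
            // a SecurityException naming System.Security.PermissionSet is thrown.
            [PermissionSet(SecurityAction.LinkDemand, Name = "FullTrust")]
            public static void DoWork()
            {
            }
        }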

    Read the article

  • C# using namespace directive in nested namespaces

    - by MoSlo
    Right, I've usually used 'using' directives as follows:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace AwesomeLib
        {
            //awesome award winning class declarations making use of Linq
        }

    I've recently seen examples such as:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace AwesomeLib
        {
            //awesome award winning class declarations making use of Linq

            namespace DataLibrary
            {
                using System.Data;
                //Data access layers and whatnot
            }
        }

    Granted, I understand that I can put a using directive inside of my namespace declaration. Such a thing makes sense to me if your namespaces share the same root (they stay organized):

        using System;

        namespace One
        {
        }

        namespace Two
        {
            using System.Data;
        }

    But what about nested namespaces? Personally, I would leave all using declarations at the top where you can find them easily. Instead, it looks like they're being spread all over the source file. Is there a benefit to using directives being placed this way in nested namespaces? Such as memory management or the JIT compiler?

    Read the article

  • x86 opcode alignment references and guidelines

    - by mrjoltcola
    I'm generating some opcodes dynamically in a JIT compiler and I'm looking for guidelines for opcode alignment.

    1) I've read comments that briefly "recommend" alignment by adding nops after calls.
    2) I've also read about using nop for optimizing sequences for parallelism.
    3) I've read that alignment of ops is good for "cache" performance.

    Usually these comments don't give any supporting references. It's one thing to read a blog or a comment that says "it's a good idea to do such and such", but it's another to actually write a compiler that implements specific op sequences and realize that most material online, especially blogs, is not useful for practical application. So I'm a believer in finding things out myself (disassembly, etc., to see what real-world apps do). This is one case where I need some outside info. I notice compilers will usually start an odd-byte instruction immediately after whatever previous instruction sequence there was, so the compiler is not taking any special care in most cases. I see "nop" here or there, but usually it seems nop is used sparingly, if at all. How critical is opcode alignment? Can you provide references for cases that I can actually use for implementation? Thanks.

    Read the article

  • Compile MonoDevelop 4.2.3

    - by user2942643
    I need help. I'm trying to compile the MonoDevelop code, but when I use the "./configure" command it tells me that I need to have a version of Mono installed, even though I do have it installed:

        [raven@localhost ~]$ mono -V
        Mono JIT compiler version 3.2.8 (tarball Fri May 30 08:15:47 CDT 2014)
        Copyright (C) 2002-2014 Novell, Inc, Xamarin Inc and Contributors. www.mono-project.com
            TLS:           __thread
            SIGSEGV:       altstack
            Notifications: epoll
            Architecture:  amd64
            Disabled:      none
            Misc:          softdebug
            LLVM:          supported, not enabled.
            GC:            sgen
        [raven@localhost ~]$ cd /home/raven/Downloads/monodevelop-4.2.3
        [raven@localhost monodevelop-4.2.3]$ ./configure
        checking for a BSD-compatible install... /usr/bin/install -c
        checking whether build environment is sane... yes
        checking for a thread-safe mkdir -p... /usr/bin/mkdir -p
        checking for gawk... gawk
        checking whether make sets $(MAKE)... yes
        checking how to create a ustar tar archive... gnutar
        checking whether to enable maintainer-specific portions of Makefiles... no
        checking for mono... /usr/local/bin/mono
        checking for gmcs... /usr/local/bin/gmcs
        checking for pkg-config... /usr/bin/pkg-config
        configure: error: You need mono 3.0.4 or newer
        [raven@localhost monodevelop-4.2.3]$

    Read the article

  • P6 Architecture - Register renaming aside, do the limited user registers result in more ops spent

    - by mrjoltcola
    I'm studying JIT design with regard to dynamic-language VM implementation. I haven't done much assembly since the 8086/8088 days, just a little here or there, so be nice if I'm out of sorts. As I understand it, the x86 (IA-32) architecture still has the same basic limited register set today that it always did, but the internal register count has grown tremendously; these internal registers are not generally available, though, and are used with register renaming to achieve parallel pipelining of code that otherwise could not be parallelized. I understand this optimization pretty well, but my feeling is that, while these optimizations help overall throughput and parallel algorithms, the limited register set we are still stuck with results in more register-spilling overhead, such that if x86 had double or quadruple the registers available to us, there might be significantly fewer push/pop opcodes in a typical instruction stream. Or are there other processor optimizations that also optimize this away that I am unaware of? Basically, if I have a unit of code that has 4 registers to work with for integer work, but my unit has a dozen variables, I've got potentially a push/pop for every 2 or so instructions. Any references to studies, or better yet, personal experiences?

    Read the article

  • Strange error when filling a data adapter.

    - by Tim C
    I am receiving the following error in my code (C#, .NET 3.5, VS2008) when I try to connect to an Excel sheet and fill an OleDbDataAdapter with the results of a query. First the error:

        Attempted to read or write protected memory. This is often an indication that other memory is corrupt.

    And here is the code, which is honestly pretty simple:

        var excelFileName = string.Format("c:/Metadata_Tool.xlsm");
        var connectionString = string.Format("Provider=Microsoft.ACE.OLEDB.12.0; Data Source={0}; Extended Properties=Excel 12.0;HDR=YES;", excelFileName);
        var adapter = new OleDbDataAdapter("Select * FROM [Video Tagging XML]", connectionString);
        var ds = new DataSet();
        adapter.Fill(ds, "VTX");
        DataTable data = ds.Tables["VTX"];
        foreach (DataRow myRow in data.Rows)
        {
            foreach (DataColumn myColumn in data.Columns)
            {
                Console.Write("\t{0}", myRow[myColumn]);
            }
            Console.WriteLine();
        }
        Console.ReadLine();

    I get the error on the line adapter.Fill(ds, "VTX");. I did find a Microsoft forum post saying to turn on JIT optimization in VS2008 from the Tools/Options/Debug/General menu, but this did not seem to help. Any help would be greatly appreciated, thanks!

    Read the article
