Search Results

Search found 7672 results on 307 pages for 'compiler optimization'.

Page 124/307 | < Previous Page | 120 121 122 123 124 125 126 127 128 129 130 131  | Next Page >

  • Apache on CentOS 5.9 VM serves my optimized images corrupted (but my Mac doesn't)

    - by Robert K
    I'm using a Vagrant VM to mirror the client's environment as closely as I can. As part of our build process we do no optimization of assets early on; that comes as we're ready to take a site live. Needless to say, this issue is beginning to worry me, as we need to take the site live very soon.

    I use ImageOptim to automate optimization of image assets, which runs a whole series of tools (Zopfli, PNGOUT, OptiPNG, AdvPNG, PNGCrush). I always set the optimizations to their maximum setting. After optimization, my PNGs come out visibly corrupted. What's weird is, if I serve the same file through my Mac's copy of Apache, not through Vagrant, the image loads fine. In fact, the only time it's ever corrupt like this is when the image is served from the Vagrant VM and its install of Drupal. All optimized JPEGs display only the first ~20% of the image, and PNGs, depending on the image, may show either a portion of the image or a "progressive"-style corruption. The browser itself makes no difference: the same browser will serve an uncorrupted image from my Mac's Apache instance and a corrupt image from the VM. Even when I disable all PNG optimizations except PNGCrush and the removal of the PNG metadata, the image is still served corrupted. I'm optimizing JPEG images with JPEGmini.

    The server is running CentOS 5.9, Apache 2.2.3-85, PHP 5.3.3, and Drupal 7. As best as I can tell the error lies somewhere within the VM, either with Apache or (perhaps) with the network stack. It seems that the tools that optimize the compression of the PNGs and JPEGs are what trigger this error. I've already determined that the .htaccess file isn't interfering with how the images load. What should I try to troubleshoot this?
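
    One possible culprit worth ruling out, offered as a guess rather than a diagnosis: when the docroot lives on a VirtualBox/Vagrant shared folder, Apache's sendfile and mmap paths are a known source of truncated or stale file delivery, which matches "fine on the host, corrupt from the VM". The directives themselves are standard Apache configuration:

        # In httpd.conf (or the vhost serving the Drupal docroot) on the VM only
        EnableSendfile Off
        EnableMMAP Off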

    Read the article

  • cool project to use a genetic algorithm for?

    - by Ryan
    I'm looking for a practical application to use a genetic algorithm for. Some things that I have thought of are: website interface optimization, vehicle optimization with a physics simulator, genetic programming, and automatic test case generation. But none have really popped out at me. So if you had some free time (a few months) to spend on a genetic algorithms project, what would you choose to tackle?

    Read the article

  • procedure that swaps the bytes (low/high) of a Word variable

    - by Altar
    Hi. I have this procedure that swaps the bytes (low/high) of a Word variable (it does the same thing as the System.Swap function). The procedure works when compiler optimization is OFF but not when it is ON. Can anybody help me with this?

        { UNSAFE! IT IS NOT WORKING WHEN COMPILER OPTIMIZATION IS ON! }
        procedure SwapWord_NotWorking(VAR TwoBytes: word);
        asm
          Mov EBX, TwoBytes
          Mov AX, [EBX]
          XCHG AL, AH
          Mov [EBX], AX
        end;
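
    A possible sketch of a version that avoids the problem, assuming the failure comes from clobbering EBX (one of the registers Delphi requires asm blocks to preserve) rather than from the swap logic itself; treat it as a guess to verify, not a confirmed fix:

        procedure SwapWord(VAR TwoBytes: word);
        asm
          { ECX may be freely modified, unlike EBX }
          Mov ECX, TwoBytes   { address of the word (var parameter) }
          Mov AX, [ECX]
          XCHG AL, AH
          Mov [ECX], AX
        end;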

    Read the article

  • How to compile x64 asp.net website?

    - by Eran Betzalel
    I'm trying to compile (using Visual Studio) an ASP.NET website with the Chilkat library. The compilation fails due to this error:

        Could not load file or assembly 'ChilkatDotNet2, Version=9.0.8.0, Culture=neutral, PublicKeyToken=eb5fc1fc52ef09bd' or one of its dependencies. An attempt was made to load a program with an incorrect format.

    I've been told that this error occurs because of a platform mismatch. The weird thing is that although the compilation fails, the site works once accessed from a browser. My theory is that the IIS compilation uses the csc.exe compiler from the Framework64 (64-bit) folder, while Visual Studio uses the csc.exe compiler from the Framework (32-bit) folder. If this is actually it, how can I configure my Visual Studio to use the 64-bit compiler for ASP.NET sites? This is my current development configuration:

        Windows 7 (x64)
        Visual Studio 2008 Pro (x86 of course...)
        Chilkat library (x64)
        IIS/ASP.NET (x64)
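
    Not a Visual Studio setting, but one way to sanity-check the 32-bit vs. 64-bit theory is to precompile the site with the 64-bit toolchain directly, using the 64-bit aspnet_compiler; the framework version and paths below are placeholders to adjust:

        C:\Windows\Microsoft.NET\Framework64\v2.0.50727\aspnet_compiler.exe -v / -p C:\path\to\website C:\path\to\output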

    Read the article

  • Good IDE for Mono on Windows

    - by Paja
    I would like to develop Mono applications for Win/Linux/Mac in C# on Windows. Is there any really good (Visual Studio comparable) IDE for that? The best would be if I could get Visual C# Express to compile solutions using the Mono compiler. I've found the #develop IDE, which looks very cool and has many features that the Express edition of Visual Studio lacks (like plugins for TortoiseSVN, NUnit, etc.). However, the 3.* versions dropped support for Mono, so you are no longer able to compile solutions using the Mono compiler. There is also MonoDevelop. I've tried it and it sucks; it is not comparable to Visual Studio at all: no WinForms designer, plus tons of other missing features. I would prefer it if they dropped the development of MonoDevelop and built a plugin for #develop instead. Is there any other good enough IDE, or is it possible to make Visual C# Express or #develop compile solutions with the Mono compiler?

    Read the article

  • System.Reflection - Global methods aren't available for reflection

    - by mrjoltcola
    I have an issue with a semantic gap between the CLR and System.Reflection. System.Reflection does not (AFAIK) support reflecting on global methods in an assembly; at the assembly level, I must start with the root types. My compiler can produce assemblies with global methods, and my standard bootstrap lib is a DLL that includes some global methods. My compiler uses System.Reflection to import assembly metadata at compile time. It seems that if I depend on System.Reflection, global methods are not a possibility. The cleanest solution is to convert all of my standard methods to class static methods, but the point is, my language allows global methods, and the CLR supports them, but System.Reflection leaves a gap. ildasm shows the global methods just fine, but I assume it does not use System.Reflection itself and goes right to the metadata and bytecode. Besides System.Reflection, is anyone aware of any other third-party reflection or disassembly libs that I could make use of (assuming I will eventually release my compiler as free, BSD-licensed open source)?
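
    For what it's worth, a minimal C# sketch of the closest thing in the BCL itself: module-level (global) methods are surfaced through Module.GetMethods rather than through any Type, so they never appear when walking Assembly.GetTypes(). Whether this actually closes the gap described above is an assumption, and the assembly file name is hypothetical:

        using System;
        using System.Reflection;

        class DumpGlobals
        {
            static void Main()
            {
                // Global methods hang off the module, not off any type.
                Assembly asm = Assembly.LoadFrom("bootstrap.dll");  // hypothetical path
                foreach (Module m in asm.GetModules())
                    foreach (MethodInfo mi in m.GetMethods())
                        Console.WriteLine(mi.Name);
            }
        }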

    Read the article

  • Embedding swank-clojure in java program

    - by user237417
    Based on the Embedding section of http://github.com/technomancy/swank-clojure, I'm using the following to test it out. Is there a better way to do this that doesn't use Compiler? Is there a way to programmatically stop swank? It seems start-repl takes control of the thread. What would be a good way to spawn off another thread for it and be able to kill that thread programmatically?

        import clojure.lang.Compiler;
        import java.io.StringReader;

        public class Embed {
            public static void main(String[] args) throws Exception {
                final String startSwankScript =
                    "(ns my-app\n" +
                    "  (:use [swank.swank :as swank]))\n" +
                    "(swank/start-repl) ";
                Compiler.load(new StringReader(startSwankScript));
            }
        }

    Any help much appreciated, hhh
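
    A sketch of one way to push the blocking call onto its own thread and keep a handle that can be cancelled, using only the standard library; whether swank itself shuts down cleanly when its thread is interrupted is an open question that this sketch does not settle:

        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        // Inside Embed.main, replacing the direct Compiler.load call:
        ExecutorService swankThread = Executors.newSingleThreadExecutor();
        swankThread.submit(new Callable<Object>() {
            public Object call() throws Exception {
                return Compiler.load(new StringReader(startSwankScript));
            }
        });
        // ... later, to attempt a stop:
        swankThread.shutdownNow();   // interrupts the worker thread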

    Read the article

  • x86 assembler question

    - by b-gen-jack-o-neill
    Hi, I have two simple but maybe tricky questions. Let's say I have the assembler instruction MOV EAX,[ebx+6*7]. What I am curious about is: does this instruction really translate into an opcode as it stands, so that the computation in the brackets is encoded into the opcode, or is it just a pseudo-instruction for the assembler, not the CPU, so that the assembler first computes the value in the brackets using add, mul and so on, stores the outcome in some register, and then emits MOV EAX,reg with the computed value? Just to be clear, I know the result will be the same; I am interested in the execution. The second question is about the LEA instruction. I know what it does, but I am more interested in whether it is a real instruction, so that the assembler does not change it further and just emits it as an opcode as it stands, or whether it is just pseudo-code for the assembler to, again, first compute the address and then store it.
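
    For reference, a short sketch of how these forms are typically handled (NASM-style syntax); the comments state the general rule rather than the behaviour of any one assembler:

        mov eax, [ebx+6*7]      ; 6*7 is folded at assembly time into the displacement 42,
                                ; so this is encoded as mov eax, [ebx+42]: one instruction
        mov eax, [ebx+esi*4+8]  ; base + scaled index + displacement is a real addressing mode,
                                ; encoded directly in the ModRM/SIB bytes of the instruction
        lea eax, [ebx+esi*4+8]  ; lea is a real CPU instruction: it computes the effective
                                ; address into eax without accessing memory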

    Read the article

  • Compiling Scala scripts. How does scalac work?

    - by Arturo Herrero
    Groovy: Groovy comes with a compiler called groovyc. For each script, groovyc generates a class that extends groovy.lang.Script and contains a main method so that Java can execute it. The name of the compiled class matches the name of the script being compiled. For example, this HelloWorld.groovy script:

        println "Hello World"

    becomes something like this code:

        class HelloWorld extends Script {
            public static void main(String[] args) {
                println "Hello World"
            }
        }

    Scala: Scala comes with a compiler called scalac, and I don't know how it works. For example, with the same HelloWorld.scala script:

        println("Hello World")

    the code is not valid for scalac, because the compiler expects a class or object definition, but it works in the Scala REPL interpreter. How is that possible? Is it wrapped in a class before execution?
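
    For comparison, a sketch of the kind of wrapper scalac does accept; the script runner generates something along these lines, though the exact generated shape is an implementation detail and not necessarily what is shown here:

        object HelloWorld {
          def main(args: Array[String]): Unit = {
            println("Hello World")
          }
        }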

    Read the article

  • Virtual Function Implementation

    - by Gokul
    Hi, I keep hearing this statement: switch..case is evil for code maintenance, but it provides better performance (since the compiler can inline things, etc.), while virtual functions are very good for code maintenance but incur a performance penalty of two pointer indirections. Say I have a base class with two subclasses (X and Y) and one virtual function, so there will be two virtual tables. The object has a pointer, based on which it will choose a virtual table. So for the compiler, it is more like:

        switch( object's function ptr )
        {
            case 0x....: X->call(); break;
            case 0x....: Y->call();
        };

    So why should a virtual function cost more, if it could be implemented this way, since the compiler could do the same inlining and other optimizations here? Or explain to me why it was decided not to implement virtual function execution in this way. Thanks, Gokul.
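
    A minimal C++ sketch, for illustration only, of why the switch formulation does not carry over: the set of subclasses is open, so a call site cannot enumerate cases, whereas vtable dispatch needs no enumeration at all.

        #include <cstdio>

        struct Base { virtual void call() { std::puts("Base"); } virtual ~Base() {} };
        struct X : Base { void call() { std::puts("X"); } };
        struct Y : Base { void call() { std::puts("Y"); } };

        void invoke(Base* b)
        {
            // The compiler lowers this to roughly: load b's vtable pointer, fetch the
            // slot for call(), make one indirect call. A subclass Z defined in another
            // translation unit (or loaded from a plugin) still dispatches correctly,
            // which a compile-time switch over known addresses could not handle.
            b->call();
        }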

    Read the article

  • How to get `gcc` to generate `bts` instruction for x86-64 from standard C?

    - by Norman Ramsey
    Inspired by a recent question, I'd like to know if anyone knows how to get gcc to generate the x86-64 bts instruction (bit test and set) on Linux x86-64 platforms, without resorting to inline assembly or to nonstandard compiler intrinsics. Related questions:

        Why doesn't gcc do this for a simple |= operation where the right-hand side has exactly 1 bit set?
        How to get bts using compiler intrinsics or the asm directive?

    Portability is more important to me than bts, so I won't use an asm directive, and if there's another solution, I prefer not to use compiler intrinsics.

    EDIT: The C source language does not support atomic operations, so I'm not particularly interested in getting atomic test-and-set (even though that's the original reason for test-and-set to exist in the first place). If I want something atomic I know I have no chance of doing it with standard C source: it has to be an intrinsic, a library function, or inline assembly. (I have implemented atomic operations in compilers that support multiple threads.)
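
    For reference, a sketch of the standard-C formulation in question, the kind of code one would hope the optimizer turns into a single bts on x86-64; whether a given gcc version actually does so is exactly what the question asks:

        /* Plain C, no intrinsics, no asm: set bit n of word. */
        unsigned long set_bit(unsigned long word, unsigned n)
        {
            return word | (1UL << n);
        }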

    Read the article

  • A question about friend functions

    - by Als
    I recently faced a problem with a third-party library which generates classes from an XML file. Here is the gist of it:

        class B;

        class A
        {
            void doSomething();
            friend class B;
        };

        class B
        {
            void doSomething();
            void doSomethingMore()
            {
                doSomething();
            }
        };

    The compiler flags the call to doSomething() as ambiguous and reports it as a compiler error. It is easy to understand why it gives the error: class B being a friend of class A, every member of class B has access to all the members of class A. Renaming either of the functions resolved my problem, but it got me thinking: shouldn't the compiler in this case give priority to the class's own member function over the function in another class of which it is a friend?

    Read the article

  • Only compiles as an array of pointers, not array of arrays

    - by Dustin
    Suppose I define two arrays, each of which has 2 elements (for theoretical purposes):

        char const *arr1[] = { "i", "j" };
        char const *arr2[] = { "m", "n" };

    Is there a way to define a multidimensional array that contains these two arrays as elements? I was thinking of something like the following, but my compiler displays warnings about incompatible types:

        char const *combine[][2] = { arr1, arr2 };

    The only way it would compile was to make the compiler treat the arrays as pointers:

        char const *const *combine[] = { arr1, arr2 };

    Is that really the only way to do it, or can I preserve the type somehow (in C++, the runtime type information would know it is an array) and treat combine as a multidimensional array? I realise it works because an array name is a const pointer, but I'm just wondering if there is a way to do what I'm asking in standard C/C++ rather than relying on compiler extensions. Perhaps I've gotten a bit too used to Python's lists, where I could just throw anything in them...
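
    For contrast, a small sketch of the genuinely two-dimensional form; it only works by writing the string literals directly into the aggregate, since in C an array cannot be initialized from another array object, which is essentially the limitation behind the warnings above:

        char const *combine[][2] = {
            { "i", "j" },   /* same contents as arr1 */
            { "m", "n" }    /* same contents as arr2 */
        };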

    Read the article

  • Possibility of language data type not mapped to shipped .NET Framework?

    - by John K
    Does anybody know of a managed programming language implemented on .NET that contains a specialized data type that is not mapped through to the Common Type System/FCL/BCL, or one that does not have a shipped .NET equivalent (e.g. shipped standard types like System.String, System.Int32)? This question would likely come from the perspective of someone porting a compiler (although I'm not doing that). Is it as simple as the language creating a new data type outside the BCL/FCL for its specialized type? If so, does this hinder interoperability between programming languages that are otherwise accustomed to mapping all their built-in data types to what's in the BCL/FCL, like Visual Basic and C#? I can imagine this situation might come about if an obscure language compiler of some kind is ported to .NET and one of its implicit data types has no direct mapping to the shipped Framework. How is this situation supported or allowed in general? What would be the expectation of the compiler and the Common Language Runtime?

    Read the article

  • Compiling CSS to SWF server side Java, What is the best practice?

    - by DataSurfer
    My project allows users to create custom CSS for our Flex app. With regard to compiling the CSS into SWFs on the server side: should I use the flex2.compiler.css.Compiler class in mxmlc-3.5.0.12683.jar, or should I invoke mxmlc from Runtime.getRuntime().exec()? The css.Compiler class is not very well documented; does anyone have any examples that use it? For the Runtime exec approach, what is the best way to package mxmlc into the Maven build so it's available to the server at runtime?
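
    For the exec route, a rough Java sketch of shelling out to mxmlc; the SDK path and file names are placeholders, and it assumes the command-line mxmlc accepts a .css file as input (producing a SWF next to it) the way the Flex SDK supports for runtime CSS:

        import java.io.File;
        import java.io.IOException;

        class CssToSwf {
            // Assumed paths; adjust to wherever the Flex SDK and the theme live on the server.
            static int compileTheme() throws IOException, InterruptedException {
                ProcessBuilder pb = new ProcessBuilder("/opt/flex_sdk/bin/mxmlc", "theme.css");
                pb.directory(new File("/srv/themes"));
                pb.redirectErrorStream(true);
                Process p = pb.start();
                return p.waitFor();   // 0 on success; theme.swf should sit next to theme.css
            }
        }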

    Read the article

  • sqrt(int_value + 0.0) ? The point?

    - by Earlz
    Hello, while doing some homework from my very strange C++ book, which I've been told before to throw away, I found a very peculiar code segment. I know homework stuff always throws in extra "mystery" to try to confuse you, like indenting 2 lines after a single-statement for-loop. But this one I'm confused about because it seems to serve some real purpose. Basically it is like this:

        int counter=10;
        ...
        if(pow(floor(sqrt(counter+0.0)),2) == counter)
        ...

    I'm interested in this part especially: sqrt(counter+0.0). Is there some purpose to the +0.0? Is this the poor man's way of doing a static cast to a double? Does this avoid some compiler warning on some compiler I do not use? The entire program printed the exact same thing and compiled without warnings on g++ whenever I left out the +0.0 part. Maybe I'm not using a weird enough compiler?
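
    One plausible reading, offered as a guess rather than the book's stated intent: with <cmath> in play, some compilers reject or warn about sqrt on an int argument because the float/double/long double overloads make the call ambiguous, and the +0.0 forces the argument to double. A cast states the same intent more explicitly:

        #include <cmath>

        int counter = 10;
        double a = std::sqrt(static_cast<double>(counter));  // explicit promotion to double
        double b = std::sqrt(counter + 0.0);                  // same effect as the book's trick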

    Read the article

  • Explanation of output

    - by Anon
    My program:

        class Building {
            Building() {
                System.out.print("b ");
            }
            Building(String name) {
                this();
                System.out.print("bn " + name);
            }
        }

        public class House extends Building {
            House() {
                System.out.print("h ");   // this is line#1
            }
            House(String name) {
                this();                   // this is line#2
                System.out.print("hn " + name);
            }
            public static void main(String[] args) {
                new House("x ");
            }
        }

    We know that the compiler will write a call to super() as the first line in the child class's constructor. Therefore shouldn't the output be:

        b    (from the compiler-written call to super(), before line#2)
        b    (again from the compiler-written call to super(), before line#1)
        h
        hn x

    But the output is:

        b h hn x

    Why is that?

    Read the article

  • template expressions and visual studio 2005 c++

    - by chris
    I'd like to build the olb3d library with my Visual Studio 2005 compiler, but this fails due to template errors. To be more specific, the following expression seems to be the problem:

        void function(T u[Lattice::d])

    The website of the project states that my compiler is probably not capable of such complicated template expressions, and that one should use gcc 3.4.1. My question now is whether there is a way to upgrade my VS C++ compiler so it can handle template expressions on the same level as gcc 3.4.1. Maybe it would help if I got a newer version of Visual Studio? Cheers, C.

    Read the article

  • Assign bitset member to char

    - by RedX
    I have some code here that uses bit-fields to store many 1-bit values in a char. Basically:

        struct BITS_8 {
            char _1:1;
            (...)
            char _8:1;
        };

    Now I was trying to pass one of these bits as a parameter into a function:

        void func(char bit){
            if(bit){
                // do something
            }else{
                // do something else
            }
        }

    and the call was:

        struct BITS_8 bits;  // all bits were set to 0 before
        bits._7 = 1;
        bits._8 = 1;
        func(bits._8);

    The solution was to single the bit out when calling the function:

        func(bits._8 & 0x128);

    But I kept going into // do something because other bits were set. I was wondering if this is the correct behaviour or if my compiler is broken. The compiler is an embedded compiler that produces code for Freescale ASICs.

    Read the article

  • Eclipse bug? Switching on a null with only default case

    - by polygenelubricants
    I was experimenting with enum, and I found that the following compiles and runs fine on Eclipse (Build id: 20090920-1017, not sure of the exact compiler version):

        public class SwitchingOnAnull {
            enum X { ,; }

            public static void main(String[] args) {
                X x = null;
                switch(x) {
                    default: System.out.println("Hello world!");
                }
            }
        }

    When compiled and run with Eclipse, this prints "Hello world!" and exits normally. With the javac compiler, this throws a NullPointerException as expected. So is there a bug in the Eclipse Java compiler?

    Read the article

  • How to disambiguate subdirs with the same name in the Projects list?

    - by jlstrecker
    My Qt project has 2 subdirs/subprojects with the same name. Their directories are myproject/node and myproject/compiler/test/node. The problem (or annoyance) is that, in the Projects list in Qt Creator, both subdirs are listed as "node", so you have to open them up to figure out which is which. myproject.pro looks like this:

        TEMPLATE = subdirs
        QMAKE_CLEAN = Makefile

        SUBDIRS += \
            compiler_test_node \
            node \
            ...

        compiler_test_node.subdir = compiler/test/node
        node.depends = compiler_vuo_compile
        ...

    Without renaming the myproject/compiler/test/node directory, is there a way to make it show up with a different name in the Projects list?

    Read the article

  • ANSI C++: Differences between delete and delete[]

    - by Sunscreen
    I was looking at a snippet of code:

        int* ip;
        ip = new int[100];
        delete ip;

    The example above states: "This code will work with many compilers, but it should instead read:"

        int* ip;
        ip = new int[100];
        delete [] ip;

    Is this indeed the case? I use the compiler "Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 11.00.7022 for 80x86" and it does not complain about the first example while compiling. At runtime the pointer is set to NULL. Do other compilers behave differently? Can a compiler not complain and issues still appear at runtime? Thanks, Sun

    Read the article

  • Converting a macro to an inline function

    - by Rob
    I am using some Qt code that adds a VERIFY macro that looks something like this:

        #define VERIFY(cond) \
        {                    \
            bool ok = cond;  \
            Q_ASSERT(ok);    \
        }

    The code can then use it whilst being certain the condition is actually evaluated, e.g.:

        Q_ASSERT(callSomeFunction()); // callSomeFunction not evaluated in release builds!
        VERIFY(callSomeFunction());   // callSomeFunction is always evaluated

    Disliking macros, I would instead like to turn this into an inline function:

        inline void VERIFY(bool condition) { Q_ASSERT(condition); }

    However, in release builds I am worried that the compiler would optimise out all calls to this function (as Q_ASSERT wouldn't actually do anything). Am I worrying unnecessarily, or is this likely, depending on the optimisation flags/compiler/etc.? I guess I could change it to:

        inline void VERIFY(bool condition) { condition; Q_ASSERT(condition); }

    But, again, the compiler may be clever enough to ignore the call. Is this inline alternative safe for both debug and release builds?

    Read the article

  • Annotation retention policy: what real benefit is there in declaring `SOURCE` or `CLASS`?

    - by watery
    I know there are three retention policies for Java annotations:

        CLASS:   Annotations are to be recorded in the class file by the compiler but need not be retained by the VM at run time.
        RUNTIME: Annotations are to be recorded in the class file by the compiler and retained by the VM at run time, so they may be read reflectively.
        SOURCE:  Annotations are to be discarded by the compiler.

    And although I understand their usage scenarios, I don't get why specifying the retention policy is so important that retention policies exist at all. I mean, why aren't all annotations just kept at runtime? Do they generate so much bytecode / occupy so much memory that stripping those not declared as RUNTIME makes that much of a difference?
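
    For reference, a minimal sketch of how the policy is declared on an annotation type; the annotation names are made up for illustration:

        import java.lang.annotation.Retention;
        import java.lang.annotation.RetentionPolicy;

        @Retention(RetentionPolicy.SOURCE)   // visible to the compiler only; absent from the .class file
        @interface SourceOnly {}

        @Retention(RetentionPolicy.CLASS)    // stored in the .class file but invisible to reflection (the default)
        @interface ClassOnly {}

        @Retention(RetentionPolicy.RUNTIME)  // stored in the .class file and readable reflectively at run time
        @interface RuntimeVisible {}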

    Read the article
