Search Results

Search found 3296 results on 132 pages for 'executable compression'.

Page 84/132 | < Previous Page | 80 81 82 83 84 85 86 87 88 89 90 91  | Next Page >

  • Debugging Silverlight crash

    - by RHLopez
    I am trying to debug an IE 8 crash caused by a Silverlight application. I managed to find some articles on how to take a memory dump when a process crashes. I loaded the dump in WinDbg and ran !analyze -v; below is the result. I am stuck on what further steps I can take to figure out which module or library running in Silverlight is causing the crash. So all I have right now is that the crash in IE is caused by an access violation (an attempt to execute a non-executable address), and, from what is in the stack trace, that some animation is running in Silverlight. Any tips or articles that would help me debug this would be appreciated.

      This dump file has an exception of interest stored in it.
      The stored exception information can be accessed via .ecxr.
      (1864.1560): Access violation - code c0000005 (first/second chance not available)
      eax=00000000 ebx=00000000 ecx=1b11fc58 edx=5c6f007d esi=00000000 edi=193b8e08
      eip=00000000 esp=0f61f750 ebp=0f61f76c iopl=0 nv up ei pl nz na pe nc
      cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00010206
      00000000 ?? ???

      FAULTING_IP:
      +56b3952f04ebde68
      748bc9f1 654c dec esp

      EXCEPTION_RECORD: ffffffff -- (.exr 0xffffffffffffffff)
      ExceptionAddress: 748bc9f1
      ExceptionCode: c0000005 (Access violation)
      ExceptionFlags: 00000000
      NumberParameters: 2
      Parameter[0]: 00000008
      Parameter[1]: 00000000
      Attempt to execute non-executable address 00000000

      PROCESS_NAME: iexplore.exe
      ERROR_CODE: (NTSTATUS) 0xc0000005 - The instruction at 0x%08lx referenced memory at 0x%08lx. The memory could not be %s.
      EXCEPTION_CODE: (NTSTATUS) 0xc0000005 - The instruction at 0x%08lx referenced memory at 0x%08lx. The memory could not be %s.
      EXCEPTION_PARAMETER1: 00000008
      EXCEPTION_PARAMETER2: 00000000
      WRITE_ADDRESS: 00000000

      FOLLOWUP_IP:
      agcore!CFrameworkElement::SetValue+1d7
      5c704fa8 84c0 test al,al

      FAILED_INSTRUCTION_ADDRESS:
      +56b3952f04ebde68
      748bc9f1 654c dec esp

      NTGLOBALFLAG: 0
      APPLICATION_VERIFIER_FLAGS: 0
      FAULTING_THREAD: 00001560
      BUGCHECK_STR: APPLICATION_FAULT_SOFTWARE_NX_FAULT_NULL
      PRIMARY_PROBLEM_CLASS: SOFTWARE_NX_FAULT_NULL
      DEFAULT_BUCKET_ID: SOFTWARE_NX_FAULT_NULL
      LAST_CONTROL_TRANSFER: from 5c704fa8 to 00000000

      STACK_TEXT:
      WARNING: Frame IP not in any known module. Following frames may be wrong.
      0f61f74c 5c704fa8 1b17a134 193b8e08 0e690e14 0x0
      0f61f76c 5c712360 0e690e14 1b17a134 0e690e14 agcore!CFrameworkElement::SetValue+0x1d7
      0f61f788 5c7123a8 0e690e14 1b17a134 0e690e14 agcore!CShape::SetValue+0x72
      0f61f7a0 5c70a6ff 0e690e14 1b17a134 00000000 agcore!CEllipse::SetValue+0x3b
      0f61f7d0 5c752c2b 1b17a090 193b8e08 00000000 agcore!CAnimation::DoSetValue+0x50
      0f61f810 5c7a7fb1 0f61f884 0f61f868 1b17a090 agcore!CAnimation::UpdateAnimationUsingKeyFrames+0x3b5
      0f61f82c 5c707146 00000000 00000000 00000000 agcore!CAnimation::UpdateAnimation+0x184
      0f61f87c 5c7071e5 3e4c8000 0f61f8cc 00000000 agcore!CTimeline::ComputeState+0x13a
      0f61f89c 5c706d49 193f82b0 0f61f8cc 0f61f8d4 agcore!CTimelineGroup::ComputeState+0x8c
      0f61f8ac 5c7069c7 3e4c8000 0f61f8cc 0b111f60 agcore!CStoryboard::ComputeState+0x48
      0f61f8d4 5c706a29 0e6a0ca0 00000000 0e490070 agcore!CTimeManager::Tick+0x79
      0f61f8e8 5c78f960 0b0e6d68 0f61f990 00000000 agcore!CCoreServices::Tick+0x21
      0f61f940 5c706ac2 0b111f60 0e42ca08 ffffffff agcore!CCoreServices::Draw+0x140
      0f61f964 67ac141c 0af99b90 00000000 0f61f990 agcore!CCoreServices::Draw+0x2d
      0f61f9b4 67a933c2 0f61f9c8 00000000 00000000 npctrl!CXcpBrowserHost::OnTick+0x1b1
      0f61f9e0 67a927c6 0064069c 00000402 00000000 npctrl!CXcpDispatcher::Tick+0xf3
      0f61fa08 67a92709 0064069c 00000402 00000000 npctrl!CXcpDispatcher::OnReentrancyProtectedWindowMessage+0xcd
      0f61fa28 764b6238 0064069c 00000402 00000000 npctrl!CXcpDispatcher::WindowProc+0xb8
      0f61fa54 764b68ea 67a9269d 0064069c 00000402 user32!InternalCallWinProc+0x23
      0f61facc 764b7d31 00000000 67a9269d 0064069c user32!UserCallWinProcCheckWow+0x109
      0f61fb2c 764b7dfa 67a9269d 00000000 0f61fbb4 user32!DispatchMessageWorker+0x3bc
      0f61fb3c 6fe504a6 0f61fb54 00000000 0ab11908 user32!DispatchMessageW+0xf
      0f61fbb4 6fe60446 0af956a0 00000000 0b18a338 ieframe!CTabWindow::_TabWindowThreadProc+0x452
      0f61fc6c 769d49bd 0ab11908 00000000 0f61fc88 ieframe!LCIETab_ThreadProc+0x2c1
      0f61fc7c 76e53677 0b18a338 0f61fcc8 77829d72 iertutil!CIsoScope::RegisterThread+0xab
      0f61fc88 77829d72 0b18a338 7dbc895d 00000000 kernel32!BaseThreadInitThunk+0xe
      0f61fcc8 77829d45 769d49af 0b18a338 00000000 ntdll!__RtlUserThreadStart+0x70
      0f61fce0 00000000 769d49af 0b18a338 00000000 ntdll!_RtlUserThreadStart+0x1b

      SYMBOL_STACK_INDEX: 1
      SYMBOL_NAME: agcore!CFrameworkElement::SetValue+1d7
      FOLLOWUP_NAME: MachineOwner
      MODULE_NAME: agcore
      IMAGE_NAME: agcore.dll
      DEBUG_FLR_IMAGE_TIMESTAMP: 4a67e422
      STACK_COMMAND: ~44s; .ecxr ; kb
      FAILURE_BUCKET_ID: SOFTWARE_NX_FAULT_NULL_c0000005_agcore.dll!CFrameworkElement::SetValue
      BUCKET_ID: APPLICATION_FAULT_SOFTWARE_NX_FAULT_NULL_BAD_IP_agcore!CFrameworkElement::SetValue+1d7
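
    In case it helps frame an answer, here are the follow-up commands I know how to run next (all standard WinDbg; the symbol cache path is just a placeholder):

      .symfix c:\symbols     $$ use the Microsoft public symbol server
      .reload                $$ refresh symbols for the loaded modules
      .ecxr                  $$ switch to the stored exception context
      kb                     $$ stack trace with arguments
      lmvm agcore            $$ version details for the faulting module
      ~*kb                   $$ stacks for all threads

    What I can't work out is how to get from these agcore frames down to the offending Silverlight code.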

  • How to compress CSS/JS in VS2010 Web Deployment Build Template?

    - by RPM1984
    Hi all, we've recently upgraded from VS2008 to VS2010 (and hence from a Web Deployment Project to a proper deployment project). What's new in VS2010 web deployments is the introduction of Workflow as the build process template. Previously, we used an MSBuild task in the WDP to execute the Yahoo YUI JavaScript/CSS compression module to minify/compress JavaScript and CSS files. Has anyone managed to accomplish this task with Visual Studio 2010? I have seen the new "SquishIt" compressor created by Justin Etheridge, but it's not ideal as it "squishes" on the fly (e.g. on Application_Start in Global.asax), which means you still have to push all the uncompressed files to your web server before squishing. In the Workflow designer I can see a toolbox item called "MSBuild", but I just don't know how to use it to accomplish what I want. I've been searching high and wide, and no-one seems to know how. Surely someone out there has done this.
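
    For reference, what I am trying to recreate is roughly the sketch below. This is my guess at how the new pipeline could be hooked, not something I have working; the jar path and the CopyAllFilesToSingleFolderForPackage hook point are my assumptions:

      <Target Name="CompressJs" AfterTargets="CopyAllFilesToSingleFolderForPackage">
        <ItemGroup>
          <JsFiles Include="$(_PackageTempDir)\Scripts\**\*.js" />
        </ItemGroup>
        <!-- runs once per file via MSBuild item batching -->
        <Exec Command="java -jar tools\yuicompressor-2.4.2.jar -o &quot;%(JsFiles.FullPath)&quot; &quot;%(JsFiles.FullPath)&quot;" />
      </Target>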

  • Deploying an HttpHandler web service

    - by baron
    I am trying to build a web service that handles HTTP POST and GET requests. Here is a sample:

      public class CodebookHttpHandler : IHttpHandler
      {
          public void ProcessRequest(HttpContext context)
          {
              if (context.Request.HttpMethod == "POST")
              {
                  //DoHttpPostLogic();
              }
              else if (context.Request.HttpMethod == "GET")
              {
                  //DoHttpGetLogic();
              }
          }
          ...
          public void DoHttpPostLogic() { ... }
          public void DoHttpGetLogic() { ... }
      }

    I need to deploy this but I am struggling with how to start. Most online references show making a website, but really all I want to do is respond when an HTTP POST is sent. I don't know what to put in the website; I just want that code to run.

    Edit: I am following this site as it's exactly what I'm trying to do. I have the website set up, I have the code for the handler in a .cs file, and I have edited the web.config to add the handler for the file extension I need. Now I am at step 3, where you tell IIS about this extension and map it to ASP.NET. (I am using IIS 7, so the interface is slightly different from the screenshots.) This is the problem I have:

    1) Go to the website
    2) Go to Handler Mappings
    3) Go to Add Script Map
    4) Request path - put the extension I want to handle
    5) Executable - it seems I am told to set aspnet_isapi.dll here. Maybe this is incorrect?
    6) Give it a name
    7) Hit the OK button:

      Add Script Map
      Do you want to allow this ISAPI extension? Click "Yes" to add the extension with an "Allowed" entry to the ISAPI and CGI Restrictions list or to update an existing extension entry to "Allowed" in the ISAPI and CGI Restrictions list.
      Yes / No / Cancel

    8) Hit Yes:

      Add Script Map
      The specified module required by this handler is not in the modules list. If you are adding a script map handler mapping, the IsapiModule or the CgiModule must be in the modules list.
      OK

    Edit 2: I have just figured out that managed handlers are for handlers written in managed code, Script Map is for configuring an executable, and Module Mapping is for working with HTTP modules. So I should be using option 1 - Add Managed Handler. See: http://yfrog.com/11managedhandlerp I know what my request path is for the file extension, and I know the name (I can call it whatever I like), so it must be the Type field I am struggling with. In the application's folder (in IIS) so far I just have MyHandler.cs and web.config (and of course a file with the extension I am trying to create the handler for!).

    Edit 3 (progress): Now that I have the code and the web.config set up, I test whether I can browse to the filename.CustomExtension file:

      HTTP Error 404.3 - Not Found
      The page you are requesting cannot be served because of the extension configuration. If the page is a script, add a handler. If the file should be downloaded, add a MIME map.

    So in IIS 7 I go to Handler Mappings and add it in. See this MSDN example; it is exactly what I am trying to follow. The class looks like this:

      using System.Web;

      namespace HandlerAttempt2
      {
          public class MyHandler : IHttpHandler
          {
              public MyHandler()
              {
                  //TODO: Add constructor logic here
              }

              public void ProcessRequest(HttpContext context)
              {
                  var objResponse = context.Response;
                  objResponse.Write("<html><body><h1>It just worked");
                  objResponse.Write("</body></html>");
              }

              public bool IsReusable
              {
                  get { return true; }
              }
          }
      }

    I add the handler in as follows:

      Request path: *.whatever
      Type: MyHandler (class name - this appears correct as per the example!)
      Name: whatever

    Try to browse to the custom file again (app pool as Integrated):

      HTTP Error 500.21 - Internal Server Error
      Handler "whatever" has a bad module "ManagedPipelineHandler" in its module list

    Try to browse to the custom file again (app pool as Classic):

      HTTP Error 404.17 - Not Found
      The requested content appears to be script and will not be served by the static file handler.
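
    For anyone answering: given the 500.21 in Integrated mode, the registration I now suspect I need is sketched below. My assumption is that MyHandler.cs has to live in App_Code so ASP.NET compiles it, and that the Type field needs the namespace-qualified class name:

      <configuration>
        <system.webServer>
          <handlers>
            <!-- Integrated mode: type is the namespace-qualified class name -->
            <add name="whatever"
                 path="*.whatever"
                 verb="GET,POST"
                 type="HandlerAttempt2.MyHandler"
                 resourceType="Unspecified" />
          </handlers>
        </system.webServer>
      </configuration>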

  • MSBuild Extension Pack Zip the folders and subfolders

    - by ManojTrek
    I have to zip my folders and subfolders using MSBuild. I was looking at the MSBuild Extension Pack and tried this:

      <ItemGroup>
        <ZipFiles Include="\Test\Web\**\*.*">
          <Group>Release</Group>
        </ZipFiles>
      </ItemGroup>

      <MSBuild.ExtensionPack.Compression.Zip TaskAction="Create"
                                             CompressFiles="@(ZipFiles)"
                                             ZipFileName="$(WorkingDir)%(ZipFiles.Group).zip"/>

    When I do this it just keeps adding all the files to the root, instead of adding them into the specific subfolder within the zip file. I am missing something; can anyone help here please?
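
    For what it's worth, the variant I plan to try next is below. I believe the Zip task has a RemoveRoot parameter that keeps paths relative to a root folder instead of flattening them, though I may have the attribute name wrong:

      <MSBuild.ExtensionPack.Compression.Zip TaskAction="Create"
                                             CompressFiles="@(ZipFiles)"
                                             RemoveRoot="\Test\Web"
                                             ZipFileName="$(WorkingDir)%(ZipFiles.Group).zip"/>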

  • OpenPGP Signing

    - by singpolyma
    I'm reading RFC 4880 in an attempt to produce an implementation of a subset of OpenPGP (RSA signatures) using http://phpseclib.sourceforge.net/. I have the public key and compression/literal/signature packets parsed out. I can extract n and e and feed them to Crypt_RSA to construct a verifier. I tell it I'm using SHA-256. It then needs a "message" and a "signature" parameter. I get the signature data out of the signature packet, no problem. The question I have is: what is "message"? According to section 5.2.4 it's some combination of the literal data packet(s?) (their bodies or the whole packet?) and the "hashed" subpackets. Do I just concat all the data packets and the hashed subpackets together in the order they appear?
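
    To make the question concrete, here is my current reading of section 5.2.4 as a Python-style sketch; the field layout below is my interpretation and may be exactly what I have wrong:

      import hashlib

      def signature_hash_input(literal_body, sig_version, sig_type, pk_algo,
                               hash_algo, hashed_subpackets):
          # 1. the document itself: the *body* of the literal data packet
          #    (raw bytes for a binary signature; packet headers excluded)
          data = literal_body

          # 2. the signature's hashed fields: version, type, algorithms,
          #    then the hashed subpacket area preceded by its 2-octet length
          hashed_len = len(hashed_subpackets)
          data += bytes([sig_version, sig_type, pk_algo, hash_algo])
          data += hashed_len.to_bytes(2, 'big') + hashed_subpackets

          # 3. a v4 trailer: 0x04 0xFF plus the 4-octet length of item 2
          data += b'\x04\xff' + (6 + hashed_len).to_bytes(4, 'big')

          return hashlib.sha256(data).digest()

    Notably, under this reading the unhashed subpackets would not participate in the hash at all.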

  • WinQual: Why would WER not accept code-signing certificates?

    - by Ian Boyd
    In 2005 I tried to establish a WinQual account with Microsoft, so I could pick up our (if any) crash dump files submitted automatically through Windows Error Reporting (WER). I was not allowed to have my crash dumps, because I don't have a VeriSign certificate. Instead I have a cheaper one, generated by a VeriSign subsidiary: Thawte.

    The method by which you join is: you digitally sign a sample exe they provide. This proves that you are the same signer that signed the apps they got crash dumps from in the wild. Cryptographically, the private key is needed to generate a digital signature on an executable. Only the holder of that private key can create a signature for the matching public key. It doesn't matter who generated that private key. That includes certificates generated from: self-signing, Wells Fargo, DigiCert, SecureTrust, Trustware, QuoVadis, GoDaddy, Entrust, Cybertrust, GeoTrust, GlobalSign, Comodo, Thawte, and VeriSign. Yet Microsoft's WinQual only accepts digital certificates generated by VeriSign; not even VeriSign's subsidiaries are good enough (Thawte).

    Can anyone think of any technical, legal or ethical reason why Microsoft doesn't want to accept code-signing certificates? The WinQual site says:

      Why Is a Digital Certificate Required for Winqual Membership?
      A digital certificate helps protect your company from individuals who seek to impersonate members of your staff or who would otherwise commit acts of fraud against your company. Using a digital certificate enables proof of an identity for a user or an organization.

    Is a Thawte digital certificate somehow not secure?

    Two years later, I sent a reminder notice to WinQual that I've been waiting to be able to get at my crash dumps. The response from the WinQual team was: "Hello, Thanks for the reminder. We have notified the appropriate people that this is still a request." In 2008 I asked this question in a Microsoft support forum, and the response was: "We are only setup to accept VeriSign Certificates at this point. We have not had an overwhelming demand to support other types of certificates."

    What can it possibly mean to not be "setup" to accept other kinds of certificates? If the thumbprint of the key that signed the WinQual.exe test app is the same as the thumbprint that signed the executable whose crash dump you got in the wild, it is proven: they are my crash dumps, give them to me. And it's not as though there's a special API to check whether a VeriSign digital signature is valid, as opposed to all other digital signatures. A valid signature is valid no matter who generated the key. Microsoft is free not to trust the signer, but that's not the same as identity.

    So that is my question: can anyone think of any practical reason why WinQual isn't set up to support other digital signatures? One person theorized that the answer is that they're just lazy: "Not that I know but I would assume that the team running the winQual system is a live team and not a dev team - as in, personality and skillset geared towards maintenance of existing systems. I could be wrong though." In other words, they don't want to do the work to change it. But can anyone think of anything that would actually need to be changed? It's the same logic no matter what generated the key: "does the thumbprint match?" What am I missing?

  • Optimizing website - minification, sprites, etc...

    - by nivlam
    I'm looking at the product Aptimize Website Accelerator, which is an ISAPI filter that will concatenate files, minify CSS/JavaScript, and more. Does anyone have experience with this product, or any other "all-in-one" solutions? I'm interested in knowing whether something like this would be good long-term, or whether manually setting up all the components (integrating YUI Compressor into the build process, setting up gzip compression, tweaking expiration headers, etc.) would be more beneficial. An all-in-one solution like this looks very tempting, as it could save a lot of time if our website is "less than optimal". But how efficient are these products? Would setting up the components manually generate better results? Or is the gap between the all-in-one solution and manually setting up the components so small that it's negligible?

  • Ruby/Rails Audio Conversion Plugins?

    - by coneybeare
    I am looking for a good gem/plugin to convert user-uploaded audio files to different formats. One format in particular that I am interested in is Apple .caf with ima4 compression, for inclusion in an iPhone app. I have been using afconvert on my Mac for this so far, but I need to do it on my Linux box, server-side. Ideally, I would be able to work it into Paperclip. As an additional solution, ffmpeg could work, but I have not seen any .caf options for it. Anybody know of one?

  • Tomcat gzip while chunked issue

    - by hoodoos
    I'm experiencing a problem with one of my data source services. According to the HTTP response headers, it's running on Apache-Coyote/1.1. The server gives responses with Transfer-Encoding: chunked; here is a sample response:

      HTTP/1.1 200 OK
      Server: Apache-Coyote/1.1
      Content-Type: text/xml;charset=utf-8
      Transfer-Encoding: chunked
      Content-Encoding: gzip
      Date: Tue, 30 Mar 2010 06:13:52 GMT

    The problem is that when I ask the server for a gzipped response, it often sends an incomplete one. I receive the response and see that the last chunk has arrived, but after ungzipping I find that the response is partial. I have never seen this behavior with gzip turned off in the request headers. So my question is: is this a common Tomcat issue? Maybe one of its modules that does the compression? Or could it be some kind of proxy issue? I can't tell you the Tomcat version or which gzip module they use, but feel free to ask; I'll ask my service provider. Thanks.
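
    For anyone who can compare notes, my understanding is that gzip on the Tomcat side is normally switched on at the HTTP connector in server.xml along these lines (attribute names as I recall them from the connector docs; the values here are made up):

      <Connector port="8080" protocol="HTTP/1.1"
                 compression="on"
                 compressionMinSize="2048"
                 compressableMimeType="text/html,text/xml,text/plain" />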

  • Process spawned by exec-maven-plugin blocks the maven process

    - by Arnab Biswas
    I am trying to execute the following scenario using Maven:

    pre-integration-test phase: start a Java-based application using a main class (using exec-maven-plugin)
    integration-test phase: run the integration test cases (using maven-failsafe-plugin)
    post-integration-test phase: stop the application gracefully (using exec-maven-plugin)

    Here is a pom.xml snip:

      <plugins>
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>exec-maven-plugin</artifactId>
          <version>1.2.1</version>
          <executions>
            <execution>
              <id>launch-myApp</id>
              <phase>pre-integration-test</phase>
              <goals>
                <goal>exec</goal>
              </goals>
            </execution>
          </executions>
          <configuration>
            <executable>java</executable>
            <arguments>
              <argument>-DMY_APP_HOME=/usr/home/target/local</argument>
              <argument>-Djava.library.path=/usr/home/other/lib</argument>
              <argument>-classpath</argument>
              <classpath/>
              <argument>com.foo.MyApp</argument>
            </arguments>
          </configuration>
        </plugin>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-failsafe-plugin</artifactId>
          <version>2.12</version>
          <executions>
            <execution>
              <goals>
                <goal>integration-test</goal>
                <goal>verify</goal>
              </goals>
            </execution>
          </executions>
          <configuration>
            <forkMode>always</forkMode>
          </configuration>
        </plugin>
      </plugins>

    If I execute mvn post-integration-test, my application is started as a child process of the Maven process, but the application process blocks the Maven process from executing the integration tests that come in the next phase. Later I found that there is a bug (or missing functionality?) in the Maven exec plugin, because of which the application process blocks the Maven process. To address this issue, I have encapsulated the invocation of MyApp.java in a shell script and appended "> /dev/null 2>&1 &" to spawn a separate background process. Here is the snip (this is just a snip and not the actual one) from runTest.sh:

      java -DMY_APP_HOME=$2 com.foo.MyApp > /dev/null 2>&1 &

    Although this solves my issue, is there any other way to do it? Am I missing any argument for exec-maven-plugin?
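
    One alternative I have seen suggested, but not yet tested myself, is to launch the app through maven-antrun-plugin instead, since Ant's exec task can spawn a fully detached process (the classpath argument below is my simplification):

      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-antrun-plugin</artifactId>
        <version>1.7</version>
        <executions>
          <execution>
            <id>launch-myApp</id>
            <phase>pre-integration-test</phase>
            <goals>
              <goal>run</goal>
            </goals>
            <configuration>
              <target>
                <!-- spawn="true" detaches the process so the build continues -->
                <exec executable="java" spawn="true">
                  <arg value="-DMY_APP_HOME=/usr/home/target/local"/>
                  <arg value="-cp"/>
                  <arg value="${project.build.outputDirectory}"/>
                  <arg value="com.foo.MyApp"/>
                </exec>
              </target>
            </configuration>
          </execution>
        </executions>
      </plugin>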

  • Running 32 bit assembly code on a 64 bit Linux & 64 bit Processor : Explain the anomaly.

    - by claws
    Hello, I'm in an interesting problem. I forgot I'm using a 64-bit machine & OS and wrote 32-bit assembly code. I don't know how to write 64-bit code. This is x86 32-bit assembly code for the GNU Assembler (AT&T syntax) on Linux:

      //hello.S
      #include <asm/unistd.h>
      #include <syscall.h>
      #define STDOUT 1

      .data
      hellostr:
          .ascii "hello wolrd\n";
      helloend:

      .text
      .globl _start
      _start:
          movl $(SYS_write), %eax          //ssize_t write(int fd, const void *buf, size_t count);
          movl $(STDOUT), %ebx
          movl $hellostr, %ecx
          movl $(helloend-hellostr), %edx
          int $0x80

          movl $(SYS_exit), %eax           //void _exit(int status);
          xorl %ebx, %ebx
          int $0x80
          ret

    Now, this code should run fine on a 32-bit processor & 32-bit OS, right? As we know, 64-bit processors are backward compatible with 32-bit processors, so that also wouldn't be a problem. The problem arises because of differences in system calls & the call mechanism between a 64-bit OS & a 32-bit OS. I don't know why, but they changed the system call numbers between 32-bit Linux & 64-bit Linux.

    asm/unistd_32.h defines:

      #define __NR_write 4
      #define __NR_exit 1

    asm/unistd_64.h defines:

      #define __NR_write 1
      #define __NR_exit 60

    Anyway, using macros instead of direct numbers has paid off; it ensures the correct system call numbers. When I assemble & link & run the program:

      $ cpp hello.S hello.s    //pre-processor
      $ as hello.s -o hello.o  //assemble
      $ ld hello.o             //linker: converting relocatable to executable

    it's not printing helloworld. In gdb it's showing:

      Program exited with code 01.

    I don't know how to debug in gdb. Using a tutorial I tried to debug it, executing instruction by instruction and checking the registers at each step; it's always showing me "program exited with 01". It would be great if someone could show me how to debug this.

      (gdb) break _start
      Note: breakpoint -10 also set at pc 0x4000b0.
      Breakpoint 8 at 0x4000b0
      (gdb) start
      Function "main" not defined.
      Make breakpoint pending on future shared library load? (y or [n]) y
      Temporary breakpoint 9 (main) pending.
      Starting program: /home/claws/helloworld
      Program exited with code 01.
      (gdb) info breakpoints
      Num Type       Disp Enb Address            What
      8   breakpoint keep y   0x00000000004000b0 <_start>
      9   breakpoint del  y   <PENDING>          main

    I tried running strace. This is its output:

      execve("./helloworld", ["./helloworld"], [/* 39 vars */]) = 0
      write(0, NULL, 12 <unfinished ... exit status 1>

    Can you explain the parameters of the write(0, NULL, 12) system call in the strace output? What exactly is happening? I want to know why exactly it's exiting with exit status 1. Can someone please show me how to debug this program using gdb? Why did they change the system call numbers? Kindly change this program appropriately so that it can run correctly on this machine.

    EDIT: After reading Paul R's answer, I checked my files:

      claws@claws-desktop:~$ file ./hello.o
      ./hello.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped
      claws@claws-desktop:~$ file ./hello
      ./hello: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), statically linked, not stripped

    All of my questions still hold true. What exactly is happening in this case? Can someone please answer my questions and provide an x86-64 version of this code?
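
    Here is my current attempt at the x86-64 translation, based on my reading that the 64-bit ABI passes the syscall number in %rax, the arguments in %rdi/%rsi/%rdx, and uses the syscall instruction instead of int $0x80 (unverified, so corrections welcome):

      //hello64.S
      #include <asm/unistd.h>

      #define STDOUT 1

      .data
      hellostr:
          .ascii "hello world\n"
      helloend:

      .text
      .globl _start
      _start:
          movq $__NR_write, %rax           // write(fd, buf, count)
          movq $STDOUT, %rdi               // fd = 1
          movq $hellostr, %rsi             // buf
          movq $(helloend-hellostr), %rdx  // count
          syscall

          movq $__NR_exit, %rax            // _exit(status)
          xorq %rdi, %rdi                  // status = 0
          syscall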

  • Embedding wav files in AS3 Flash/Flex project?

    - by aaaidan
    The Flash IDE is capable of embedding many types of uncompressed sound files, including wav, and offers optional compression when publishing. However, the [Embed] tag only seems to allow embedding of mp3 files. Is it truly impossible to embed an uncompressed wav file, or am I missing some magic, undocumented mimeType? I was hoping for something like:

      [Embed source="../../audio/wibble.wav" mimeType="audio/wav"]

    ...but I get "no transcoder registered for mimeType 'audio/wav'". It's possible to embed a wav or other format as an octet-stream and parse it at runtime, but that's pretty heavy-handed, I think. I'm surprised that even though the Flash IDE can embed uncompressed sound data, [Embed] cannot, given that the swf spec can contain uncompressed sound data. Any takers?
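
    For the octet-stream route mentioned above, the pattern I have in mind is roughly this (class name is mine; the embedded class comes back as a ByteArray holding the raw file, which still needs a WAV parser):

      [Embed(source="../../audio/wibble.wav", mimeType="application/octet-stream")]
      private static const WibbleWav:Class;

      // later, at runtime:
      var bytes:ByteArray = new WibbleWav() as ByteArray;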

  • The text in a circle.

    - by kalininew
    Please help me solve a problem. There is a block of text of unknown size. I need to draw a circle around this block (with fill inside) so that the text sits within the circle, regardless of the size of the text. When the text size changes (when the window is compressed), the circle should change dynamically. It has to work in the browser. Where should I start in solving this problem? Perhaps with JavaScript methods (jQuery is already included on the site)? P.S. The solution should be cross-browser.
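
    One starting point I am considering, shown here as a plain Python sketch of the geometry only (the real version would compute the same per-line widths in jQuery): a line of text whose vertical center sits at offset y from the circle's center can be at most 2*sqrt(r^2 - y^2) wide.

      import math

      def line_widths(radius, line_height):
          """Maximum usable text width for each line stacked in a circle."""
          widths = []
          y = -radius + line_height / 2.0    # vertical center of the first line
          while y + line_height / 2.0 <= radius:
              half = math.sqrt(max(radius ** 2 - y ** 2, 0.0))
              widths.append(2 * half)        # chord length at this height
              y += line_height
          return widths

      # e.g. a radius-100 circle filled with 20px-tall lines:
      print([round(w) for w in line_widths(100, 20)])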

  • Why does Google's Closure Compiler leave a few unnecessary spaces or line breaks?

    - by Bungle
    I've noticed that every time I use Google's Closure Compiler Service, it leaves a few unnecessary spaces in the compiled code presented on the right-hand side of the page. These correspond to line breaks in the hosted version of the compiled code. For example (note the line breaks, each of which seems unnecessary): http://troy.onespot.com/static/stack_overflow/closure_spaces.js To date, I've just been removing them manually, but I'm curious why they're there. Is it to limit the line length of the hosted version of the code to make it more readable? Could the compiler be smart enough to leave or insert those intentionally to maximize GZIP compression efforts? I know that they have a trivial effect on the file size, but with so much effort going into minifying every last byte in the source script, it's puzzling why they're there.

  • Conditional configuration in maven pom.xml

    - by David Zhao
    I'd like to exclude certain files in maven-war-plugin ONLY when the property "skipCompress" is set to true. I thought I could do something like the following, but it doesn't work for me. By the way, I can't use a profile to achieve this, since I want skipCompress to turn compression on and off in both the development and deployment profiles.

      <plugin>
        <artifactId>maven-war-plugin</artifactId>
        <configuration>
          <if>
            <not>
              <equals arg1="${skipCompress}" arg2="true"/>
            </not>
            <then>
              <warSourceExcludes>**/external/dojo/**/*.js</warSourceExcludes>
            </then>
          </if>
        </configuration>
      </plugin>

    Thanks, David
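
    One workaround I am considering (a sketch only; the property name is mine): plain plugin configuration has no if/then, but the exclude pattern can be read from a property that defaults to the exclusion and is blanked on the command line when compression is skipped, e.g. mvn package -DwarExcludes= :

      <properties>
        <!-- excluded by default; pass -DwarExcludes= (empty) to keep the files -->
        <warExcludes>**/external/dojo/**/*.js</warExcludes>
      </properties>

      <plugin>
        <artifactId>maven-war-plugin</artifactId>
        <configuration>
          <warSourceExcludes>${warExcludes}</warSourceExcludes>
        </configuration>
      </plugin>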

  • IIS and Flash forceSmoothing issue

    - by Mark Kennerley
    Hi everyone, I am incorporating a Flash Flickr polaroid file into my site, http://www.no3dfx.com/polaroid/ But I am having problems getting the images to appear smooth. I have edited the code throughout with forceSmoothing = true and _quality = best. It all works and looks smooth if I test the file in the preview window and if I run the HTML file, but as soon as I put the file under IIS the smoothing stops. All my Flash players are v10+. I have turned IIS compression off, but no luck. Can anyone please help with this? Thanks, Clyde

  • Compiling ruby1.9.1 hangs and fills swap!

    - by nfm
    I'm compiling Ruby 1.9.1-p376 under Ubuntu 8.04 Server LTS (64-bit) by doing the following:

      $ ./configure
      $ make
      $ sudo make install

    ./configure works without complaints. make hangs indefinitely until all my RAM and swap are gone. It gets stuck after the following output:

      compiling ripper
      make[1]: Entering directory `/tmp/ruby1.9.1/ruby-1.9.1-p376/ext/ripper'
      gcc -I. -I../../.ext/include/x86_64-linux -I../.././include -I../.././ext/ripper -I../.. -I../../. -DRUBY_EXTCONF_H=\"extconf.h\" -fPIC -O2 -g -Wall -Wno-parentheses -o ripper.o -c ripper.c

    If I run the gcc command by hand, with the -v argument to get verbose output, it hangs after the following:

      Using built-in specs.
      Target: x86_64-linux-gnu
      Configured with: ../src/configure -v --enable-languages=c,c++,fortran,objc,obj-c++,treelang --prefix=/usr --enable-shared --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --enable-nls --with-gxx-include-dir=/usr/include/c++/4.2 --program-suffix=-4.2 --enable-clocale=gnu --enable-libstdcxx-debug --enable-objc-gc --enable-mpfr --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
      Thread model: posix
      gcc version 4.2.4 (Ubuntu 4.2.4-1ubuntu4)
      /usr/lib/gcc/x86_64-linux-gnu/4.2.4/cc1 -quiet -v -I. -I../../.ext/include/x86_64-linux -I../.././include -I../.././ext/ripper -I../.. -I../../. -DRUBY_EXTCONF_H="extconf.h" ripper.c -quiet -dumpbase ripper.c -mtune=generic -auxbase-strip ripper.o -g -O2 -Wall -Wno-parentheses -version -fPIC -fstack-protector -fstack-protector -o /tmp/ccRzHvYH.s
      ignoring nonexistent directory "/usr/local/include/x86_64-linux-gnu"
      ignoring nonexistent directory "/usr/lib/gcc/x86_64-linux-gnu/4.2.4/../../../../x86_64-linux-gnu/include"
      ignoring nonexistent directory "/usr/include/x86_64-linux-gnu"
      ignoring duplicate directory "../.././ext/ripper"
      ignoring duplicate directory "../../."
      #include "..." search starts here:
      #include <...> search starts here:
       .
       ../../.ext/include/x86_64-linux
       ../.././include
       ../..
       /usr/local/include
       /usr/lib/gcc/x86_64-linux-gnu/4.2.4/include
       /usr/include
      End of search list.
      GNU C version 4.2.4 (Ubuntu 4.2.4-1ubuntu4) (x86_64-linux-gnu)
              compiled by GNU C version 4.2.4 (Ubuntu 4.2.4-1ubuntu4).
      GGC heuristics: --param ggc-min-expand=47 --param ggc-min-heapsize=32795
      Compiler executable checksum: 6e11fa7ca85fc28646173a91f2be2ea3

    I just compiled Ruby on another computer for reference, and it took about 10 seconds to print the following output (after the above Compiler executable checksum line):

      COLLECT_GCC_OPTIONS='-v' '-I.' '-I../../.ext/include/i686-linux' '-I../.././include' '-I../.././ext/ripper' '-I../..' '-I../../.' '-DRUBY_EXTCONF_H="extconf.h"' '-D_FILE_OFFSET_BITS=64' '-fPIC' '-O2' '-g' '-Wall' '-Wno-parentheses' '-o' 'ripper.o' '-c' '-mtune=generic' '-march=i486'
      as -V -Qy -o ripper.o /tmp/cca4fa7R.s
      GNU assembler version 2.20 (i486-linux-gnu) using BFD version (GNU Binutils for Ubuntu) 2.20
      COMPILER_PATH=/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/:/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/:/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/
      LIBRARY_PATH=/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/4.4.1/../../../../lib/:/lib/../lib/:/usr/lib/../lib/:/usr/lib/gcc/i486-linux-gnu/4.4.1/../../../:/lib/:/usr/lib/
      COLLECT_GCC_OPTIONS='-v' '-I.' '-I../../.ext/include/i686-linux' '-I../.././include' '-I../.././ext/ripper' '-I../..' '-I../../.' '-DRUBY_EXTCONF_H="extconf.h"' '-D_FILE_OFFSET_BITS=64' '-fPIC' '-O2' '-g' '-Wall' '-Wno-parentheses' '-o' 'ripper.o' '-c' '-mtune=generic' '-march=i486'

    I have absolutely no clue what could be going wrong here - any ideas where I should start?

  • Is there any simple way to test two PNGs for equality?

    - by Mason Wheeler
    I've got a bunch of PNG images, and I'm looking for a way to identify duplicates. By duplicates I mean, specifically, two PNG files whose uncompressed image data are identical, not necessarily whose files are identical. This means I can't do something simple like compare CRC hash values. I figure this can actually be done reliably since PNGs use lossless compression, but I'm worried about speed. I know I can winnow things down a little by testing for equal dimensions first, but when it comes time to actually compare the images against each other, is there any way to do it reasonably efficiently? (i.e. faster than the "double-for-loop checking pixel values against each other" brute-force method?)
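
    A sketch of the kind of approach I have in mind, in Python with the PIL library purely to illustrate (decode each file once, winnow on cheap properties, then do one bulk compare of the raw buffers):

      from PIL import Image

      def same_image(path_a, path_b):
          a, b = Image.open(path_a), Image.open(path_b)
          if a.size != b.size:               # cheap winnowing test first
              return False
          a = a.convert("RGBA")              # normalize the pixel format
          b = b.convert("RGBA")
          return a.tobytes() == b.tobytes()  # one bulk memory compare

    For a whole collection, hashing each decoded buffer once and bucketing by hash should avoid pairwise comparisons altogether.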

  • Running Sybase ISQL scripts from windows batch file

    - by user1328709
    I have already researched this extensively, on this site as well as on Google. I have created a number of batch files that perform certain automated transactions (backups etc.) on our production database. I want to further simplify my end-of-day processes by taking the dumps using a script that accepts some input parameters. The script is able to log in to the isql prompt but is unable to execute the commands:

      @ECHO ***Started***
      @ECHO Enter MonthDay(MMDD)
      SET /p md=
      @ECHO %md%
      mkdir \\10.20.1.17\arch\212%md%_banking
      set run=isql -Uuser -SORBITS -Ppass
      %run%
      @echo dump database banking to '/media/newArch/212%md%_banking/212%md%EOD_banking.dmp' with compression=5
      @echo dump database master to '/media/newArch/212%md%_banking/212%md%EOD_master.dmp'
      @echo go
      pause

    I have been unsuccessful at putting these in a separate script file because the script itself uses a passed parameter. Please give me hints and links. Thanks
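
    The direction I am currently exploring (an untested sketch): since isql can read commands from a file with -i, the batch can first write the expanded SQL to a temp script, so the %md% parameter is already substituted by the time isql runs:

      @ECHO OFF
      SET /p md=Enter MonthDay(MMDD):
      mkdir \\10.20.1.17\arch\212%md%_banking

      REM write the expanded SQL to a script, then hand it to isql
      > eod.sql echo dump database banking to '/media/newArch/212%md%_banking/212%md%EOD_banking.dmp' with compression=5
      >> eod.sql echo dump database master to '/media/newArch/212%md%_banking/212%md%EOD_master.dmp'
      >> eod.sql echo go

      isql -Uuser -SORBITS -Ppass -i eod.sql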

  • Build a gem with native extension (Gem::Installer::ExtensionBuildError)

    - by Arnaud Leymet
    I have the following configuration:

      uname -a : Linux 2.6.24.2 i686 GNU/Linux (Ubuntu)
      ruby -v : ruby 1.9.0 (2007-12-25 revision 14709) [i486-linux]
      rails -v : Rails 3.0.0.beta3
      gem -v : 1.3.5
      rake --version : rake, version 0.8.7
      make -v : GNU Make 3.81

    gem env:

      RUBYGEMS VERSION: 1.3.5
      RUBY VERSION: 1.9.0 (2007-12-25 patchlevel 0) [i486-linux]
      INSTALLATION DIRECTORY: /usr/lib/ruby1.9/gems/1.9.0
      RUBY EXECUTABLE: /usr/bin/ruby1.9
      EXECUTABLE DIRECTORY: /usr/bin
      RUBYGEMS PLATFORMS: ruby, x86-linux
      GEM PATHS: /usr/lib/ruby1.9/gems/1.9.0, /root/.gem/ruby/1.9.0
      GEM CONFIGURATION: :update_sources => true, :verbose => true, :benchmark => false, :backtrace => false, :bulk_threshold => 1000
      REMOTE SOURCES: http://gems.rubyforge.org/

    And when I try this simple command, here is what I get:

      # gem install nokogiri
      Building native extensions. This could take a while...
      ERROR: Error installing nokogiri:
      ERROR: Failed to build gem native extension.
      /usr/bin/ruby1.9 extconf.rb
      checking for iconv.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2,/usr/include,/usr/include/libxml2... yes
      checking for libxml/parser.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2,/usr/include,/usr/include/libxml2... yes
      checking for libxslt/xslt.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2,/usr/include,/usr/include/libxml2... yes
      checking for libexslt/exslt.h in /opt/local/include/,/opt/local/include/libxml2,/opt/local/include,/opt/local/include,/opt/local/include/libxml2,/usr/local/include,/usr/local/include/libxml2,/usr/include,/usr/include/libxml2,/usr/include,/usr/include/libxml2... yes
      checking for xmlParseDoc() in -lxml2... yes
      checking for xsltParseStylesheetDoc() in -lxslt... yes
      checking for exsltFuncRegister() in -lexslt... yes
      checking for xmlRelaxNGSetParserStructuredErrors()... yes
      checking for xmlRelaxNGSetParserStructuredErrors()... yes
      checking for xmlRelaxNGSetValidStructuredErrors()... yes
      checking for xmlSchemaSetValidStructuredErrors()... yes
      checking for xmlSchemaSetParserStructuredErrors()... yes
      creating Makefile
      make
      cc -I. -I/usr/include/libxml2 -I/usr/include -I/usr/include/ruby-1.9.0/i486-linux -I/usr/include/ruby-1.9.0 -I. -DHAVE_XMLRELAXNGSETPARSERSTRUCTUREDERRORS -DHAVE_XMLRELAXNGSETPARSERSTRUCTUREDERRORS -DHAVE_XMLRELAXNGSETVALIDSTRUCTUREDERRORS -DHAVE_XMLSCHEMASETVALIDSTRUCTUREDERRORS -DHAVE_XMLSCHEMASETPARSERSTRUCTUREDERRORS -I/opt/local/include/ -I/opt/local/include/libxml2 -I/opt/local/include -D_FILE_OFFSET_BITS=64 -fPIC -fno-strict-aliasing -g -fPIC -g -DXP_UNIX -O3 -Wall -Wcast-qual -Wwrite-strings -Wconversion -Wmissing-noreturn -Winline -o xml_document_fragment.o -c xml_document_fragment.c
      In file included from ./nokogiri.h:75,
                       from ./xml_document_fragment.h:4,
                       from xml_document_fragment.c:1:
      ./xml_document.h:5:16: error: st.h: No such file or directory
      make: *** [xml_document_fragment.o] Error 1
      Gem files will remain installed in /usr/lib/ruby1.9/gems/1.9.0/gems/nokogiri-1.4.1 for inspection.
      Results logged to /usr/lib/ruby1.9/gems/1.9.0/gems/nokogiri-1.4.1/ext/nokogiri/gem_make.out

    The gem_make.out file contains exactly the same information as above. If I try with another gem, here is what I get:

      # gem install gherkin
      Building native extensions. This could take a while...
      ERROR: Error installing gherkin:
      ERROR: Failed to build gem native extension.
      /usr/bin/ruby1.9 extconf.rb
      checking for main() in -lc... yes
      creating Makefile
      make
      cc -I. -I/usr/include/ruby-1.9.0/i486-linux -I/usr/include/ruby-1.9.0 -I. -D_FILE_OFFSET_BITS=64 -fPIC -fno-strict-aliasing -g -fPIC -o gherkin_lexer_ar.o -c gherkin_lexer_ar.c
      /Users/aslakhellesoy/scm/gherkin/tasks/../ragel/i18n/ar.c.rl:11:16: error: re.h: No such file or directory
      make: *** [gherkin_lexer_ar.o] Error 1
      Gem files will remain installed in /usr/lib/ruby1.9/gems/1.9.0/gems/gherkin-1.0.30 for inspection.
      Results logged to /usr/lib/ruby1.9/gems/1.9.0/gems/gherkin-1.0.30/ext/gherkin_lexer_ar/gem_make.out

    In fact, whenever I try to install a gem with a native extension, I get the same type of error. Does that ring a bell for anyone?
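
    My current hunch, which I have not confirmed: st.h and re.h are Ruby's own C headers, so both builds are failing because the Ruby development headers are missing entirely. On Ubuntu that would suggest something like the following (the package name may differ by release):

      # st.h / re.h ship with Ruby's development headers
      sudo apt-get install ruby1.9-dev

      # then retry
      sudo gem install nokogiri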

  • YUI Compressor + gzip causes Illegal Character error in jQuery

    - by lo_fye
    When I minify jQuery using YUI Compressor, it works fine. When I then add gzip compression (and serve the gzipped version via mod_rewrite), the gzipped version throws this error:

      illegal character in jquery.min.js on line 1

    Line 1 is:

      ???????M???????????s?8?0???!sz?dKr?=?

    This results in a "jQuery is not defined" error. I am using the following rewrite rules to serve up the gzipped versions:

      # Check to see if browser can accept gzip files.
      ReWriteCond %{HTTP:accept-encoding} (gzip.*)
      # Make sure there's no trailing .gz on the url.
      ReWriteCond %{REQUEST_FILENAME} !^.+\.gz$
      # Check to see if a .gz version of the file exists.
      RewriteCond %{REQUEST_FILENAME}.gz -f
      # All conditions met, so add .gz to URL filename (invisibly).
      RewriteRule ^(.+) $1.gz [L]

    I can't find any references to this happening to anyone else. Thoughts?
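
    My working theory is that the browser is parsing raw gzip bytes as JavaScript because the .gz response lacks a Content-Encoding header; this is the Apache addition I intend to test alongside the rules above (untested, and it requires mod_headers):

      # Tell clients that *.js.gz is gzip-encoded JavaScript,
      # not a script to be parsed as-is.
      <FilesMatch "\.js\.gz$">
          ForceType application/x-javascript
          Header set Content-Encoding gzip
      </FilesMatch>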

  • git: Is it possible to save the packed objects of a dry run and push them later?

    - by shovavnik
    I'm trying to push a bunch of commits that contain a lot of code and, besides that, a few thousand MP3 and PDF files (ranging from 5-40 MB each). Git successfully packs the objects:

      C:\MyProject> git push
      Counting objects: 7582, done.
      Delta compression using up to 2 threads.
      Compressing objects: 100% (7510/7510), done.

    But it fails to send the push for some as yet unknown reason. The problem is that repacking the files takes a very long time (I'm on a battery-powered laptop and it took about 20 minutes to pack). So I guess my question can be phrased thus: Is it possible to save the packed objects created in a dry run? And once saved, is it possible to push those packed objects and avoid repacking? I looked it up in the git manual and elsewhere and couldn't find anything conclusive. Any help or pointers are appreciated.
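
    One possibly related mechanism I have come across while searching (still checking whether it fits my case): git bundle packs the objects into an ordinary file once, and that file can then be copied around and pulled from without repacking:

      # pack everything reachable from master into a single file (one-time cost)
      git bundle create myproject.bundle master

      # the receiving repository treats the bundle like a read-only remote
      git pull myproject.bundle master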

  • Image coding library

    - by Dmitry
    Is there any good library for lossless image encoding/decoding with a compression rate more or less similar to PNG, but whose decoding to raw RGB bitmap data would be much faster than PNG's? Alpha transparency is also needed, though it's not essential, because the alpha channel could be taken from a separate image. The original problem lies in the slowness of reading and decoding PNG files on the iPhone using the standard libraries. The obvious and simplest solution would have been storing raw RGB bitmap data, but then the size of the unpacked ipa is too large - four times larger than the PNG files. So I am trying to find some compromise solution.

  • ZIPLIB problem on opening zip files

    - by Ahmet vardar
    I am using this class to create a zip:

      <?php
      // vim: expandtab sw=4 ts=4 sts=4:
      class zipfile
      {
          var $datasec = array();
          var $ctrl_dir = array();
          var $eof_ctrl_dir = "\x50\x4b\x05\x06\x00\x00\x00\x00";
          var $old_offset = 0;

          function unix2DosTime($unixtime = 0)
          {
              $timearray = ($unixtime == 0) ? getdate() : getdate($unixtime);

              if ($timearray['year'] < 1980) {
                  $timearray['year'] = 1980;
                  $timearray['mon'] = 1;
                  $timearray['mday'] = 1;
                  $timearray['hours'] = 0;
                  $timearray['minutes'] = 0;
                  $timearray['seconds'] = 0;
              } // end if

              return (($timearray['year'] - 1980) << 25)
                  | ($timearray['mon'] << 21)
                  | ($timearray['mday'] << 16)
                  | ($timearray['hours'] << 11)
                  | ($timearray['minutes'] << 5)
                  | ($timearray['seconds'] >> 1);
          } // end of the 'unix2DosTime()' method

          function addFile($data, $name, $time = 0)
          {
              $name = str_replace('\\', '/', $name);

              $dtime = dechex($this->unix2DosTime($time));
              $hexdtime = '\x' . $dtime[6] . $dtime[7]
                        . '\x' . $dtime[4] . $dtime[5]
                        . '\x' . $dtime[2] . $dtime[3]
                        . '\x' . $dtime[0] . $dtime[1];
              eval('$hexdtime = "' . $hexdtime . '";');

              $fr = "\x50\x4b\x03\x04";
              $fr .= "\x14\x00";   // ver needed to extract
              $fr .= "\x00\x00";   // gen purpose bit flag
              $fr .= "\x08\x00";   // compression method
              $fr .= $hexdtime;    // last mod time and date

              // "local file header" segment
              $unc_len = strlen($data);
              $crc = crc32($data);
              $zdata = gzcompress($data);
              $zdata = substr(substr($zdata, 0, strlen($zdata) - 4), 2); // fix crc bug
              $c_len = strlen($zdata);
              $fr .= pack('V', $crc);             // crc32
              $fr .= pack('V', $c_len);           // compressed filesize
              $fr .= pack('V', $unc_len);         // uncompressed filesize
              $fr .= pack('v', strlen($name));    // length of filename
              $fr .= pack('v', 0);                // extra field length
              $fr .= $name;

              // "file data" segment
              $fr .= $zdata;

              // "data descriptor" segment (optional but necessary if archive is not
              // served as file)
              $fr .= pack('V', $crc);             // crc32
              $fr .= pack('V', $c_len);           // compressed filesize
              $fr .= pack('V', $unc_len);         // uncompressed filesize

              // add this entry to array
              $this->datasec[] = $fr;

              // now add to central directory record
              $cdrec = "\x50\x4b\x01\x02";
              $cdrec .= "\x00\x00";               // version made by
              $cdrec .= "\x14\x00";               // version needed to extract
              $cdrec .= "\x00\x00";               // gen purpose bit flag
              $cdrec .= "\x08\x00";               // compression method
              $cdrec .= $hexdtime;                // last mod time & date
              $cdrec .= pack('V', $crc);          // crc32
              $cdrec .= pack('V', $c_len);        // compressed filesize
              $cdrec .= pack('V', $unc_len);      // uncompressed filesize
              $cdrec .= pack('v', strlen($name)); // length of filename
              $cdrec .= pack('v', 0);             // extra field length
              $cdrec .= pack('v', 0);             // file comment length
              $cdrec .= pack('v', 0);             // disk number start
              $cdrec .= pack('v', 0);             // internal file attributes
              $cdrec .= pack('V', 32);            // external file attributes - 'archive' bit set
              $cdrec .= pack('V', $this->old_offset); // relative offset of local header
              $this->old_offset += strlen($fr);
              $cdrec .= $name;

              // optional extra field, file comment goes here

              // save to central directory
              $this->ctrl_dir[] = $cdrec;
          } // end of the 'addFile()' method

          function file()
          {
              $data = implode('', $this->datasec);
              $ctrldir = implode('', $this->ctrl_dir);

              return $data . $ctrldir . $this->eof_ctrl_dir
                  . pack('v', sizeof($this->ctrl_dir))  // total # of entries "on this disk"
                  . pack('v', sizeof($this->ctrl_dir))  // total # of entries overall
                  . pack('V', strlen($ctrldir))         // size of central dir
                  . pack('V', strlen($data))            // offset to start of central dir
                  . "\x00\x00";                         // .zip file comment length
          } // end of the 'file()' method

          function addFiles($files)
          {
              foreach ($files as $file) {
                  if (is_file($file)) { // directory check
                      $data = implode("", file($file));
                      $this->addFile($data, $file);
                  }
              }
          }

          function output($file)
          {
              $fp = fopen($file, "w");
              fwrite($fp, $this->file());
              fclose($fp);
          }
      } // end of the 'zipfile' class
      ?>

    It creates the zip file, but when I try to open it on Mac OS X Snow Leopard or Windows 7, it doesn't open. On the Mac I had this error: "Error 1: operation not permitted". Any idea? Thanks

  • Fixing warning from git

    - by japancheese
    I've been using a workflow of making a git repository on a remote central server, cloning that repo on my local dev machine, doing some work, and then pushing the changes back to the same repo on the remote server. However (and I believe this started after a recent update I did to git), after pushing up a change I'm getting the following warning:

      Counting objects: 2724, done.
      Delta compression using up to 2 threads.
      Compressing objects: 100% (2666/2666), done.
      Writing objects: 100% (2723/2723), 5.90 MiB | 313 KiB/s, done.
      Total 2723 (delta 219), reused 0 (delta 0)
      warning: updating the currently checked out branch; this may cause confusion,
      as the index and working tree do not reflect changes that are now in HEAD.

    Can someone explain exactly what this warning means, and what I should change in my workflow so that I don't receive it?
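
    For context on the workflow above, the remedy I keep running into (not yet applied on my server) is to make the central repository bare, so a push never has a checked-out work tree to disturb:

      # on the server: a bare central repo has no working tree
      git init --bare /srv/git/myproject.git

      # on the dev machine: clone and push as usual
      git clone ssh://server/srv/git/myproject.git
      git push origin master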
