Search Results

Search found 3296 results on 132 pages for 'executable compression'.

Page 19 of 132

  • The Eclipse Executable Launcher Was Unable To Locate Its Companion Shared Library

    - by Caden Ratcliff
    I am a Java beginner and am going through the book "Java in 24 Hours, Sixth Edition" by Rogers Cadenhead. I am on chapter 24, "Creating an Android App". The book told me to download Eclipse and the Android SDK and to unzip the archives they come in. I have put them both in the same parent folder (C:\Documents and Settings\Administrator\My Documents\Downloads). The book says I can now run Eclipse, but instead I get a message saying "The Eclipse Executable Launcher Was Unable To Locate Its Companion Shared Library". Is there a way to fix this? Do I just have to reinstall the SDK and Eclipse? I downloaded the 64-bit Windows SDK and I am running 64-bit Windows. Any info on how to fix this would be most appreciated.

    Read the article

  • Single line command to archive a certain number of folders

    - by EarthMind
    I'm looking for a single-line command that archives five folders into .tar files (no gzip/bzip needed) in a certain directory and deletes the folders after a successful compression. It has to use the original folder name as the file name for the archive too. So far I've used the following command, which only handles one directory at a time: tar -c directory > directory.tar && rm -rf directory Thanks in advance.

    Read the article

  • Experiences with eXdupe?

    - by ewwhite
    I noticed that the eXdupe compression/archiving/deduplication utility was recently mentioned in another post here. It boasts some interesting features, and I've been playing with it for the past day. It's basically a cross-platform, highly multithreaded archival tool. http://exdupe.com/index.html I'm curious if anyone here uses it in production or has any tips on how to leverage the tool in their environment. I'm looking for suggestions.

    Read the article

  • Prevent Apache from chunking gzipped content

    - by Bear
    When using mod_deflate in Apache2, Apache will chunk gzipped content, setting the Transfer-encoding: chunked header. While this results in a faster download time, I cannot display a progress bar. If I handle the compression myself in PHP, I can gzip it completely first and set the Content-length header, so that I can display a progress bar to the user. Is there any way to change Apache's behavior, and have Apache set a Content-length header instead of chunking the response, so that I don't have to handle the compression myself?
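    For reference, a minimal sketch of the PHP-side workaround the question mentions (buffer everything, gzip once, send an explicit Content-Length). It assumes PHP's zlib extension and that mod_deflate is disabled for this script so the output isn't compressed twice:

        <?php
        ob_start();                      // capture the whole page before sending anything

        // ... generate the page here (placeholder content) ...
        echo str_repeat("example content\n", 1000);

        $html    = ob_get_clean();
        $gzipped = gzencode($html, 6);   // compress in one pass

        header('Content-Encoding: gzip');
        header('Content-Length: ' . strlen($gzipped));
        echo $gzipped;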

    Read the article

  • Block upload of executable images (PHP)

    - by James Simpson
    It has come to my attention that a user has been trying to create an exploit through avatar image uploads. This was discovered when a user reported to me that they were getting a notice from their Norton Anti-virus saying "HTTP Suspicious Executable Image Download." This warning was referencing the user's avatar image. I don't think they had actually achieved anything in the way of stealing information or anything like that, but I assume it could be possible if the hole is left open long enough. I use PHP to upload the image files, and I check if the file being uploaded is a png, jpg, or gif.
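    A rough sketch of one common mitigation, assuming the GD extension is available: verify the uploaded file's real type from its bytes with getimagesize() and then re-encode the pixels, so anything appended to an otherwise valid image is dropped (the function name and paths below are hypothetical):

        <?php
        // Sketch: validate an uploaded avatar by content, not by extension, then re-encode it.
        function save_avatar(string $tmpPath, string $destPath): bool
        {
            $info = getimagesize($tmpPath);          // inspects the actual bytes, not the name
            if ($info === false) {
                return false;                        // not a real image
            }

            switch ($info[2]) {
                case IMAGETYPE_PNG:  $img = imagecreatefrompng($tmpPath);  break;
                case IMAGETYPE_JPEG: $img = imagecreatefromjpeg($tmpPath); break;
                case IMAGETYPE_GIF:  $img = imagecreatefromgif($tmpPath);  break;
                default:             return false;   // reject everything else
            }
            if ($img === false) {
                return false;
            }

            // Re-encoding writes only pixel data, discarding any appended payload bytes.
            $ok = imagepng($img, $destPath);
            imagedestroy($img);
            return $ok;
        }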

    Read the article

  • Building ffmpeg with an executable output

    - by Kevin Galligan
    I generally don't like to ask such "you figure it out for me" questions, but I suspect this one will be really simple for a C++ guru. I want to build ffmpeg for Android, and I'd like it to output an executable rather than a set of libraries. We've been using the Guardian Project's build: https://github.com/guardianproject/android-ffmpeg It does produce what we want, but I've found tweaking it for different architectures to be, at best, unpleasant. I've gotten this version to build: https://github.com/appunite/AndroidFFmpeg It does a nice job of slicing and dicing different architectures, but produces a JNI library rather than an executable. There is a long story as to why I want the exe, but I'll skip it for now. Is there a flag that needs to be flipped? Some path or other setting? I am at this point fully baffled. Thanks in advance.

    Read the article

  • Choosing sound codec for network (c/c++)

    - by ergey
    Hi, I'm implementing real-time sound (voice) recording and transferring it to another point in my application. Can anyone recommend a good codec or compression algorithm for sending low-quality (max 32 kbit/s) sound data over the network? I know Skype uses good compression; which codec does it use? Thank you.

    Read the article

  • Please help! request compression

    - by Naor
    Hi, I wrote an IHttpModule that compresses my response using gzip (I return a lot of data) in order to reduce the response size. It works great as long as the web service doesn't throw an exception. When an exception is thrown, the exception is gzipped but the Content-Encoding header disappears, so the client doesn't know how to read the exception. How can I solve this? Why is the header missing? I need to get the exception on the client. Here is the module:

        public class JsonCompressionModule : IHttpModule
        {
            public JsonCompressionModule() { }

            public void Dispose() { }

            public void Init(HttpApplication app)
            {
                app.BeginRequest += new EventHandler(Compress);
            }

            private void Compress(object sender, EventArgs e)
            {
                HttpApplication app = (HttpApplication)sender;
                HttpRequest request = app.Request;
                HttpResponse response = app.Response;
                try
                {
                    // Ajax Web Service request always starts with application/json
                    if (request.ContentType.ToLower(CultureInfo.InvariantCulture).StartsWith("application/json"))
                    {
                        // User may be using an older version of IE which does not support compression, so skip those
                        if (!((request.Browser.IsBrowser("IE")) && (request.Browser.MajorVersion <= 6)))
                        {
                            string acceptEncoding = request.Headers["Accept-Encoding"];
                            if (!string.IsNullOrEmpty(acceptEncoding))
                            {
                                acceptEncoding = acceptEncoding.ToLower(CultureInfo.InvariantCulture);
                                if (acceptEncoding.Contains("gzip"))
                                {
                                    response.AddHeader("Content-encoding", "gzip");
                                    response.Filter = new GZipStream(response.Filter, CompressionMode.Compress);
                                }
                                else if (acceptEncoding.Contains("deflate"))
                                {
                                    response.AddHeader("Content-encoding", "deflate");
                                    response.Filter = new DeflateStream(response.Filter, CompressionMode.Compress);
                                }
                            }
                        }
                    }
                }
                catch (Exception ex)
                {
                    int i = 4;
                }
            }
        }

    Here is the web service:

        [WebMethod]
        public void DoSomething()
        {
            throw new Exception("This message gets corrupted on the client because the client doesn't know it is gzipped.");
        }

    I appreciate any help. Thanks!

    Read the article

  • Image Resizing and Compression

    - by GSTAR
    Hi guys, I'm implementing an image upload facility for my website. The uploading facility is complete, but what I'm working on at the moment is manipulating the images. For this task I am using PHPThumb (http://phpthumb.gxdlabs.com). As I go along I'm coming across potential issues to do with resizing and compression. Basically I want to achieve the following results: The ideal image dimensions are 800px width, 600px height. If an uploaded image exceeds either of these dimensions, it will need to be resized to meet the requirements. Otherwise, leave it as it is. The ideal file size is 200 KB. If an uploaded image exceeds this, then it will need to be compressed to meet this requirement. Otherwise, leave it as it is. So in a nutshell: 1) Check the dimensions, resize if required. 2) Check the file size, compress if required. Has anybody done anything like this / could you give me some pointers? Is PHPThumb the correct tool to do this in?
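    A rough sketch of the two checks with plain GD (not PHPThumb), assuming a recent PHP with the GD extension and JPEG input for brevity; the 800x600 and 200 KB limits are the ones stated above, and the paths are placeholders:

        <?php
        function process_upload(string $src, string $dest): void
        {
            [$w, $h] = getimagesize($src);
            $img = imagecreatefromjpeg($src);

            // 1) Resize only if either dimension exceeds the limit, preserving aspect ratio.
            $scale = min(800 / $w, 600 / $h, 1);
            if ($scale < 1) {
                $resized = imagescale($img, (int) round($w * $scale), (int) round($h * $scale));
                imagedestroy($img);
                $img = $resized;
            }

            // 2) Re-save at decreasing quality until the file is under 200 KB (or quality bottoms out).
            $quality = 90;
            do {
                imagejpeg($img, $dest, $quality);
                clearstatcache(true, $dest);
                $quality -= 10;
            } while (filesize($dest) > 200 * 1024 && $quality >= 30);

            imagedestroy($img);
        }

    PHPThumb exposes similar resize and quality options, so the same logic should translate to it.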

    Read the article

  • I/O between AIR client using Native process and executable java .jar

    - by aseem behl
    I am using the Adobe AIR 2.0 native process API to launch an executable java jar. I/O is handled by writing to the input stream of the Java process and reading from its output stream. The application is event based: several events are fired from the server. We catch these events in Java code, handle them, and write the output to the output stream using the synchronized static method below.

        public class ReaderWriter {
            static Logger logger = Logger.getLogger(ReaderWriter.class);

            public synchronized static void writeToAir(String output) {
                try {
                    byte[] byteArray = output.getBytes();
                    DataOutputStream dataOutputStream = new DataOutputStream(System.out);
                    dataOutputStream.write(byteArray);
                    dataOutputStream.flush();
                } catch (IOException e) {
                    logger.info("Exception while writing the output. " + e);
                }
            }
        }

    The issue is that some messages are lost in transit and not all of them reach the AIR client. If I run the Java application from the console, I receive all the messages. It would be great if somebody could point out what I am missing. The following are some of the listeners used to send the event data to the AIR client.

        // class used to process Shutdown events from the Session
        private class SessionShutdownListener implements SessionListener {
            public void onEvent(Event e) {
                Session.Shutdown sd = (Session.Shutdown) e;
                Session.ShutdownReason sr = sd.getReason();
                String eventOutput = "eo;" + "Session Shutdown event occurred reason=" + sr.strValue() + "\n";
                ReaderWriter.writeToAir(eventOutput);
            }
        }

        // class used to process OperationSucceeded events from the Session
        private class SessionOperationSucceededListener implements SessionListener {
            public void onEvent(Event e) {
                Session.OperationSucceeded os = (Session.OperationSucceeded) e;
                String eventOutput = "eo;" + "Session OperationSucceeded event occurred" + "\n";
                ReaderWriter.writeToAir(eventOutput);
            }
        }

    Read the article

  • Sourcing a script file in bash before starting an executable

    - by abigagli
    Hi, I'm trying to write a bash script that "wraps" whatever the user wants to invoke (and its parameters), sourcing a fixed file just before actually invoking it. To clarify: I have a "ConfigureMyEnvironment.bash" script that must be sourced before starting certain executables, so I'd like to have a "LaunchInMyEnvironment.bash" script that you can use as in:

        LaunchInMyEnvironment <whatever_executable_i_want_to_wrap> arg0 arg1 arg2

    I tried the following LaunchInMyEnvironment.bash:

        #!/usr/bin/bash

        launchee="$@"
        if [ -e ConfigureMyEnvironment.bash ]; then
            source ConfigureMyEnvironment.bash;
        fi
        exec "$launchee"

    where I have to use the "launchee" variable to save the $@ var, because after executing source, $@ becomes empty. Anyway, this doesn't work and fails as follows:

        myhost $ LaunchInMyEnvironment my_executable -h
        myhost $ /home/me/LaunchInMyEnvironment.bash: line 7: /home/bin/my_executable -h: No such file or directory
        myhost $ /home/me/LaunchInMyEnvironment.bash: line 7: exec: /home/bin/my_executable -h: cannot execute: No such file or directory

    That is, it seems like the "-h" parameter is being seen as part of the executable filename and not as a parameter... but it doesn't really make sense to me. I also tried using $* instead of $@, but with no better outcome. What am I doing wrong? Andrea.

    Read the article

  • Get current PHP executable from within script?

    - by benizi
    I want to run a PHP cli program from within PHP cli. On some machines where this will run, both php4 and php5 are installed. If I run the outer program as php5 outer.php I want the inner script to be run with the same php version. In Perl, I would use $^X to get the perl executable. It appears there's no such variable in PHP? Right now, I'm using $_SERVER['_'], because bash (and zsh) set the environment variable $_ to the last-run program. But, I'd rather not rely on a shell-specific idiom. UPDATE: Version differences are but one problem. If PHP isn't in PATH, for example, or isn't the first version found in PATH, the suggestions to find the version information won't help. Additionally, csh and variants appear to not set the $_ environment variable for their processes, so the workaround isn't applicable there. UPDATE 2: I was using $_SERVER['_'], until I discovered that it doesn't do the right thing under xargs (which makes sense... zsh sets it to the command it ran, which is xargs, not php5, and xargs doesn't change the variable). Falling back to using: $version = explode('.', phpversion()); $phpcli = "php{$version[0]}";
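    If PHP 5.4 or newer can be assumed, the PHP_BINARY constant holds the path of the binary running the current script, which avoids relying on $_SERVER['_'] entirely. A small sketch with the version-based fallback (the inner script name is hypothetical):

        <?php
        if (defined('PHP_BINARY') && PHP_BINARY !== '') {
            $phpcli = PHP_BINARY;                     // e.g. /usr/bin/php5
        } else {
            $version = explode('.', phpversion());
            $phpcli  = 'php' . $version[0];           // fall back to a name like "php5"
        }

        passthru(escapeshellarg($phpcli) . ' ' . escapeshellarg('inner.php'));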

    Read the article

  • Working around "one executable per project" in Visual C# for many small test programs

    - by Kevin Ivarsen
    When working with Visual Studio in general (or Visual C# Express in my particular case), it looks like each project can be configured to produce only one output - e.g. a single executable or a library. I'm working on a project that consists of a shared library and a few applications, and I already have one project in my solution for each of those. However, during development I find it useful to write small example programs that run one small subsystem in isolation (at a level that doesn't belong in the unit tests). Is there a good way to handle this in Visual Studio? I'd like to avoid adding several dozen separate projects to my solution for each small test program I write, especially when these programs will typically be less than 100 lines of code. I'm hoping to find something that lets me continue to work in Visual Studio and use its build system (rather than moving to something like NAnt). I could foresee the answer being something like:
    - A way of setting this up in Visual Studio that I haven't found yet
    - A GUI like NUnit's graphical runner that searches an assembly for classes with defined Main() functions that you can select and run
    - A command line tool that lets you specify an assembly and a class with a Main function to run

    Read the article

  • Error when linking C executable to OpenCV

    - by Ghilas BELHADJ
    I'm compiling OpenCV under Ubuntu 13.10 using CMake. I've already compiled C++ programs and they work well. Now I'm trying to compile a C file using this CMakeLists.txt:

        cmake_minimum_required (VERSION 2.8)
        project (hello)
        find_package (OpenCV REQUIRED)
        add_executable (hello src/test.c)
        target_link_libraries (hello ${OpenCV_LIBS})

    Here is the test.c file:

        #include <stdio.h>
        #include <stdlib.h>
        #include <opencv/highgui.h>

        int main (int argc, char* argv[])
        {
            IplImage* img = NULL;
            const char* window_title = "Hello, OpenCV!";

            if (argc < 2) {
                fprintf (stderr, "usage: %s IMAGE\n", argv[0]);
                return EXIT_FAILURE;
            }

            img = cvLoadImage(argv[1], CV_LOAD_IMAGE_UNCHANGED);
            if (img == NULL) {
                fprintf (stderr, "couldn't open image file: %s\n", argv[1]);
                return EXIT_FAILURE;
            }

            cvNamedWindow (window_title, CV_WINDOW_AUTOSIZE);
            cvShowImage (window_title, img);
            cvWaitKey(0);
            cvDestroyAllWindows();
            cvReleaseImage(&img);

            return EXIT_SUCCESS;
        }

    It gives me this error when I run cmake . and then make on the project:

        Linking C executable hello
        /usr/bin/ld: CMakeFiles/hello.dir/src/test.c.o: undefined reference to symbol «lrint@@GLIBC_2.1»
        /lib/i386-linux-gnu/libm.so.6: error adding symbols: DSO missing from command line
        collect2: error: ld returned 1 exit status
        make[2]: *** [hello] Erreur 1
        make[1]: *** [CMakeFiles/hello.dir/all] Erreur 2
        make: *** [all] Erreur 2

    Read the article

  • Modifying resource contents of a running executable

    - by mrwoik
    All, I store my application settings in a resource. When my program first loads, I read the specified resource using WinAPI. I then parse the retrieved byte data. This works flawlessly for me. Now let's say that a user alters a setting in my application. He/she checks a checkbox control. I would like to save the updated setting to my resource. However, it seems that my call to UpdateResource will not work while my application is running. I can't modify my resource data even though it is the same size. First, is it possible to modify a running image's resource data? Second, if that is not possible, what alternatives do I have for storing settings internally within my application? NOTE: I must have the settings within my running executable. They cannot be on the hard drive or in the registry. Please don't even suggest that as an option.

    Read the article

  • creating executable jar file for my java application

    - by Manu
        public class createExcel {

            public void write() throws IOException, WriteException {
                WorkbookSettings wbSettings = new WorkbookSettings();
                wbSettings.setLocale(new Locale("en", "EN"));
                WritableWorkbook workbook1 = Workbook.createWorkbook(new File(file), wbSettings);
                workbook1.createSheet("Niru ", 0);
                WritableSheet excelSheet = workbook1.getSheet(0);
                createLabel(excelSheet);
                createContent(excelSheet, list);
                workbook1.write();
                workbook1.close();
            }

            public void createLabel(WritableSheet sheet) throws WriteException {
                WritableFont times10pt = new WritableFont(WritableFont.createFont("D:\font\trebuct"), 8);
                // Define the cell format
                times = new WritableCellFormat(times10pt);
                // Lets automatically wrap the cells
                times.setWrap(false);
                WritableFont times10ptBoldUnderline = new WritableFont(
                        WritableFont.createFont("D:\font\trebuct"), 9, WritableFont.BOLD, false,
                        UnderlineStyle.NO_UNDERLINE);
                timesBoldUnderline = new WritableCellFormat(times10ptBoldUnderline);
                sheet.setColumnView(0, 15);
                sheet.setColumnView(1, 13);
                // Write a few headers
                addCaption(sheet, 0, 0, "Business Date");
                addCaption(sheet, 1, 0, "Dealer ID");
            }

            private void createContent(WritableSheet sheet, ArrayList list)
                    throws WriteException, RowsExceededException {
                // Write a few numbers
                for (int i = 1; i < 11; i++) {
                    for (int j = 0; j < 11; j++) {
                        // First column
                        addNumber(sheet, i, j, 1);
                        // Second column
                        addNumber(sheet, 1, i, i * i);
                    }
                }
            }

            private void addCaption(WritableSheet sheet, int column, int row, String s)
                    throws RowsExceededException, WriteException {
                Label label;
                label = new Label(column, row, s, timesBoldUnderline);
                sheet.addCell(label);
            }

            private void addNumber(WritableSheet sheet, int row, int column, Integer integer)
                    throws WriteException, RowsExceededException {
                Number number;
                number = new Number(column, row, integer, times);
                sheet.addCell(number);
            }

            public static void main(String[] args) {
                JButton myButton0 = new JButton("Advice_Report");
                JButton myButton1 = new JButton("Position_Report");
                JPanel bottomPanel = new JPanel();
                bottomPanel.add(myButton0);
                bottomPanel.add(myButton1);
                myButton0.addActionListener(this);
                myButton1.addActionListener(this);
                createExcel obj = new createExcel();
                obj.setOutputFile("c;\\temp\\swings\\jack.xls");
                try {
                    obj.write();
                } catch (Exception e) {}
            }

    ... and so on. It works fine. I have jxl.jar and ojdbc14.jar (these jar files are needed for the Excel sheet creation and the DB connection) and createExcel.class (the .class file). How do I make this code into an executable jar file?

    Read the article

  • Mesa library vs Hardware accelerated OpenGL for my executable - it's just a linking problem?

    - by user827992
    Suppose I have a program that targets a specific OpenGL version, let's say 3.0. I want to produce one executable that will support software rendering with Mesa and another executable that will support a hardware-accelerated context. Can I use the same source code for both without expecting any issues? In other words, are the functions exposed by these libraries the same for my linking purposes?

    Read the article

  • Why is IIS7 not compressing my static files?

    - by Peter Evjan
    I am trying to get IIS to compress jquery.js (and all other static files, but using jquery as the example here) on my localhost, but something goes wrong. The funny part is that when I look in my %SystemDrive%\inetpub\temp\IIS Temporary Compressed Files\MySiteName folder, I see the jquery.js file there, and its size is 24 KB. But in the browser, according to the Net tab in Firebug, the size is 69 KB. I've tried the following:
    - Checked that my browser accepts compression. I found "Accept-Encoding gzip, deflate" in the request headers via Firebug.
    - Enabled Failed Request Tracing. Nothing turns up in the %SystemDrive%\inetpub\logs\FailedReqLogFiles folder after I make my request, though.

    Read the article

  • For large files, compress first then transfer, or rsync -z? Which would be fastest?

    I have a ton of relatively small data files, but they take up about 50 GB and I need them transferred to a different machine. I was trying to think of the most efficient way to do this. The options I considered were: gzip the whole thing, rsync it, and decompress it on the other side; rely on rsync -z for compression; or gzip first and then also use rsync -z. I am not sure which would be most efficient, since I am not sure how exactly rsync -z is implemented. Any ideas on which option would be the fastest?

    Read the article

  • What do the numbers 240 and 360 mean when downloading video? How can I tell which video is more compressed?

    - by DaMing
    I have downloaded some computer science lectures from YouTube recently. There is usually more than one choice of file size and file format to download. I noticed that for the same video, the FLV file labelled 240 is larger than the MPEG4 file labelled 360. What do the numbers (240 and 360) mean? And which file is more heavily compressed? That is to say, which one has discarded more of the original file's content?

    Read the article

  • Can Apache 2 be configured to start sending gzipped data early?

    - by rikh
    We have Apache set up to gzip compress HTML pages before they are sent to the client browser. However, some of our pages are slowish to generate, and it seems that Apache holds on until it has the complete page, compresses it, then sends it to the browser. There are big chunks of the page (the main important bits) that are actually generated and output fairly quickly. Is it possible to configure Apache to start compressing and sending data for the page as soon as the script starts outputting something? If it is, can you offer any help on how to do this? If not, can you suggest any other way to get gzip compression working for the server? The scripts that generate the pages are written in PHP. We are using Apache 2.0 on Linux.
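    For what it's worth, since the pages are generated by PHP, one sketch of a workaround is to do the incremental compression in the script itself. This assumes PHP 7+ with the zlib extension, output buffering off, and mod_deflate disabled for this script so the output isn't compressed twice; the render_* functions are placeholders:

        <?php
        header('Content-Encoding: gzip');

        $ctx = deflate_init(ZLIB_ENCODING_GZIP);

        function send_chunk($ctx, string $chunk, bool $last = false): void
        {
            // ZLIB_SYNC_FLUSH emits a decodable piece immediately; ZLIB_FINISH closes the stream.
            echo deflate_add($ctx, $chunk, $last ? ZLIB_FINISH : ZLIB_SYNC_FLUSH);
            flush();
        }

        send_chunk($ctx, render_header());      // the fast part of the page goes out right away
        send_chunk($ctx, render_slow_body());   // the slow part follows when ready
        send_chunk($ctx, render_footer(), true);

    With no Content-Length set, Apache will still send the response chunked, but the browser starts receiving compressed data as soon as the first chunk is written.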

    Read the article

  • Scanned JPEGs are large and slow to load - can they be optimized losslessly?

    - by Alistair Knock
    I have hundreds of JPEG photographs which were scanned about 5 years ago from negatives using a Konica Minolta DiMAGE Scan Dual IV. The dimensions are ~4500x3000, and the file size is around 12 MB, compared to shots from a DSLR with dimensions of 3000x2300 and a file size of 2-4 MB (actually, these are the output from a RAW converter). The file size is obviously quite a big difference, but the issue that's bothering me is that the (perceived) loading time is at least 10 times slower. Is this size/speed discrepancy likely to be because the scanner software saved the JPEGs inefficiently / using an old compression format, or is it simply that the scanned negatives contain much more "detail" (in the form of grain/noise) than the digital images? If the former, is there a way to losslessly optimize them? I've tried re-exporting the scanned files to full-size JPEG from my RAW software but the file size is pretty much the same. Both files will have been saved at 100 quality.

    Read the article

  • To decide where to crop an image, how can I highlight its most compressible areas?

    - by Umber Ferrule
    I'm looking to get the most compression out of each of the most popular image formats, such as JPEG, PNG, GIF, etc. Ideally, this would be a tool, or a series of transforms that could be performed (perhaps using a macro and then discarded) in popular image editors (Paint.NET/PaintShopPro/PhotoShop/GIMP), to highlight areas which will compress less well. Alternatively, what rules of thumb can be used other than reducing colours (for PNG/GIF), reducing image dimensions, and avoiding high-detail areas? I'm not asking for help deciding what format to use for a particular image type, as I think this is fairly common knowledge, i.e. diagrams and images with few colours = PNG/GIF, photographs = JPEG.
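    One possible transform, sketched here with PHP's GD extension (the file names and quality setting are arbitrary): re-save the image at a low JPEG quality, then render the per-pixel difference as a grayscale map. Bright regions are the high-detail areas that compress poorly; dark regions compress well.

        <?php
        $orig = imagecreatefromjpeg('photo.jpg');
        imagejpeg($orig, 'photo_q20.jpg', 20);          // deliberately heavy, lossy re-save
        $lossy = imagecreatefromjpeg('photo_q20.jpg');

        $w = imagesx($orig);
        $h = imagesy($orig);
        $map = imagecreatetruecolor($w, $h);

        for ($y = 0; $y < $h; $y++) {
            for ($x = 0; $x < $w; $x++) {
                $a = imagecolorat($orig, $x, $y);
                $b = imagecolorat($lossy, $x, $y);
                // Sum of absolute per-channel differences, scaled into 0..255.
                $diff = abs((($a >> 16) & 0xFF) - (($b >> 16) & 0xFF))
                      + abs((($a >> 8) & 0xFF) - (($b >> 8) & 0xFF))
                      + abs(($a & 0xFF) - ($b & 0xFF));
                $v = min(255, (int) ($diff * 2));
                imagesetpixel($map, $x, $y, imagecolorallocate($map, $v, $v, $v));
            }
        }

        imagepng($map, 'compressibility_map.png');

    The same idea can be reproduced directly in GIMP or Photoshop by exporting a low-quality JPEG copy and layering it over the original with a difference blend mode.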

    Read the article

  • Should you archive documents before backing up to the cloud?

    - by gabbsmo
    I'm planning to add cloud storage to my personal backup strategy, but now I wonder if it is really worth the trouble of compressing my documents and photos. The Open XML formats already use zip compression and JPEG is a lossy image format, so there really isn't much benefit in compressing: 20 MB of documents becomes about 17 MB at the ULTRA preset of 7-Zip. One benefit I can imagine is that archiving the folders can shorten upload time, since it minimizes the number of requests that need to be sent to the server during upload and download. So what are your thoughts and experience on this issue? Should you archive your documents before backing them up to the cloud?

    Read the article
