Search Results

Search found 6814 results on 273 pages for 'payment processing'.

  • Paint.NET equivalent for Linux?

    - by Macha
    On Windows, my favorite image editor is Paint.NET. However, on Linux, GIMP is as unfriendly as Photoshop despite having fewer features (i.e. it takes ages to load, and there's far too much stuff). Most of my image editing is simple things where Photoshop or GIMP would be overkill. Paint.NET does not run on Wine or Mono. So is there any similar fast and simple but powerful image editor available for Linux? EDIT: There is a Mono version available, but I don't want to have to deal with installing SVN versions of Mono and compiling the version myself.

  • Does anybody wish to help a poor researcher publish a paper?

    - by Mihai Todor
    I don't know if this is a good place to beg for help, but here it goes: basically, I need to run a secure recommender system simulation (C++ console application) in order to meet tonight's deadline, and the faculty's server grid decided to go offline. I could really use something like 10+ (actually, about 16 would be required to meet the deadline) virtual instances of some Linux that has GMP installed... Ideally, they should all have the same specs, because a part of the simulation will represent performance benchmarks. If my question is inappropriate in any way, I kindly ask the administrators to remove it.

  • How does Deep Zoom (Microsoft Seadragon) work?

    - by Norman Ramsey
    I saw an incredible demo of Photosynth and Seadragon, some "deep zoom" technology recently acquired by Microsoft. There's also a less responsive version on the web. I'd really like to know how the technology works. Blog entries and web pages are welcome, but the ideal answer will include one or more references to published papers in the professional literature.
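
    For context, the core idea is usually described as a multi-resolution tile pyramid: the image is repeatedly halved, each level is cut into fixed-size tiles, and the viewer fetches only the tiles that cover the current viewport at the nearest zoom level. Below is a minimal sketch of building such a pyramid; it is my own illustration of the general technique, not Seadragon's actual implementation, and it assumes Pillow is installed.

        # Sketch of a deep-zoom style tile pyramid (general idea only, not Seadragon's code).
        # Requires Pillow: pip install Pillow
        import os
        from PIL import Image

        def build_pyramid(path, out_dir, tile=256):
            img = Image.open(path).convert("RGB")
            level = 0
            while True:
                os.makedirs(os.path.join(out_dir, str(level)), exist_ok=True)
                w, h = img.size
                # Cut the current level into fixed-size tiles.
                for y in range(0, h, tile):
                    for x in range(0, w, tile):
                        img.crop((x, y, min(x + tile, w), min(y + tile, h))).save(
                            os.path.join(out_dir, str(level), "%d_%d.jpg" % (x // tile, y // tile)))
                if w <= tile and h <= tile:
                    break
                # Halve the image for the next, coarser level.
                img = img.resize((max(1, w // 2), max(1, h // 2)), Image.LANCZOS)
                level += 1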

  • Image Classification - Detecting an image is cartoon-like

    - by kingb
    I have a large number of JPEG thumbnail images ranging in size from 120x90 to 320x240, and I would like to classify them as either real-life-like or cartoon-like. Are there any applications with cartoon-classification capabilities? The application should work on Linux, and should take an image path on the command line and return either 0 or 1 (echo $?).
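
    I'm not aware of a ready-made tool, but for what it's worth, here is a crude heuristic sketch of the kind of thing that can work: cartoon art tends to collapse onto a few flat colours under heavy quantisation. This is my own assumption rather than a known classifier, it requires Pillow, and the threshold is a made-up value that would need tuning on real data.

        #!/usr/bin/env python
        # Crude cartoon-vs-photo heuristic (assumption: cartoons quantise to few colours).
        # Usage: python is_cartoon.py image.jpg ; exit status 0 = cartoon-like, 1 = photo-like.
        import sys
        from PIL import Image

        def looks_like_cartoon(path, max_colors=64, threshold=0.5):
            img = Image.open(path).convert("RGB").resize((128, 96))
            # Reduce to an adaptive palette; flat cartoon art collapses onto few entries.
            quantized = img.quantize(colors=max_colors)
            histogram = quantized.histogram()[:max_colors]
            total = float(sum(histogram))
            # Fraction of pixels covered by the 8 most common colours.
            top8 = sum(sorted(histogram, reverse=True)[:8]) / total
            return top8 > threshold  # made-up threshold; tune on real data

        if __name__ == "__main__":
            sys.exit(0 if looks_like_cartoon(sys.argv[1]) else 1)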

  • How to convert poor quality bitmap image to vector?

    - by Macha
    I'm designing a website for a group which has lost the original digital image for their logo. The only file they have of it is a JPEG which was embedded into a Word document. The image has everything possible wrong with it: it is anti-aliased onto a white background where it should be transparent, it has compression artefacts, it has been resized downwards poorly, and lines that should be straight and solid aren't. I've currently used the wand tool to get rid of the white background and stuck it on the website, but its poor quality makes it stick out like a sore thumb. I need a few different sizes of it to use, so how would I go about creating a vector image based on it?
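
    One common free route (a sketch only, assuming potrace is installed; the file names are placeholders) is to clean and threshold the bitmap first, hand the bilevel result to an automatic tracer such as potrace, and then tidy the resulting SVG by hand in Inkscape.

        # Threshold the logo to black/white, then auto-trace it to SVG with potrace.
        # Assumes potrace is installed; requires Pillow. File names are placeholders.
        import subprocess
        from PIL import Image

        img = Image.open("logo_from_word_doc.jpg").convert("L")        # greyscale
        bw = img.point(lambda p: 255 if p > 128 else 0).convert("1")   # hard threshold
        bw.save("logo_bw.bmp")                                         # potrace reads BMP/PBM input
        # -s selects SVG output, -o names the output file (see the potrace man page).
        subprocess.run(["potrace", "-s", "logo_bw.bmp", "-o", "logo.svg"], check=True)

    The traced curves will still need manual cleanup, but once the logo is an SVG, scaling it to the different sizes is lossless.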

  • MS Paint for Gnome

    - by flybywire
    I want an MS Paint-like program for GNOME. GIMP is too much for me; I find it very frustrating for the simple tasks I do (adding text, arrows and circles to screenshots to highlight different features of a program).

  • How to get a good clean newspaper scan?

    - by itsadok
    I tried a few times to scan newspaper articles, but the images I got were always blotchy, with bad colors (sort of like this). Sometimes I see some really good scans, like this. What is the trick to getting such good results? Do I need a high-quality scanner, or some good Photoshop filters? If there were something I could do using free tools, that would be awesome.
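
    Much of the blotchiness usually comes from the newsprint halftone screen interfering with the scanner grid (moiré); a scanner's "descreen" setting helps, as does scanning at a higher resolution, blurring slightly and then downscaling. A rough free-tools sketch of that post-processing step, assuming Pillow, with values that are only guesses to tune:

        # Rough descreen-style cleanup for a newspaper scan: blur away the halftone dots,
        # downscale, then spread the levels. Values and file names are placeholders to tune.
        from PIL import Image, ImageFilter, ImageOps

        scan = Image.open("newspaper_scan.png").convert("L")        # drop the colour cast
        scan = scan.filter(ImageFilter.GaussianBlur(radius=1.5))    # soften the halftone pattern
        scan = scan.resize((scan.width // 2, scan.height // 2), Image.LANCZOS)
        scan = ImageOps.autocontrast(scan, cutoff=2)                # whiten paper, darken ink
        scan.save("newspaper_clean.png")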

  • How to resize images without loss in quality ?

    - by qWolf
    Hi everybody. I have a JPEG image with a resolution of 2010 x 1080 pixels, and I want to resize it to 338 x 450 without losing image quality. How can I do that? (For example, after I zoom in on the resized image, its quality is not good.) Which software can I use? Let me know about a solution. Thanks for all responses ;) Update / Clarification: My goal is a web page containing an image whose original size is 2010x1080. First, the image is displayed on the page at 338x450. Then, to see the image more clearly, I want to zoom in, but when I do, the image quality is lost. I don't know how to solve this problem :(
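
    Downscaling to 338x450 necessarily throws pixels away, so zooming back in on the small file cannot recover detail; the usual approach is to keep the full-size original, generate the small version with a high-quality filter, and serve the original (or an intermediate size) for the zoomed view. A sketch with Pillow, with placeholder file names:

        # Generate a high-quality small version while keeping the original for zooming.
        # Requires Pillow; file names are placeholders.
        from PIL import Image

        original = Image.open("photo_2010x1080.jpg")
        small = original.resize((338, 450), Image.LANCZOS)   # Lanczos keeps edges reasonably crisp
        small.save("photo_338x450.jpg", quality=90)          # shown at normal page size
        # For the zoomed-in view, link to the original file instead of enlarging the small one.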

  • Batch movie frame extractor?

    - by yegor
    Can anyone suggest a Windows application that will extract a given number of frames from a list of movies imported into it? It needs to operate in batch mode. Image Grabber II .NET would be perfect, but it won't work for me under Vista or Windows 7 (64-bit), so I'm looking for an alternative.
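
    If no GUI tool turns up, ffmpeg (which does run on 64-bit Vista/7) can do the extraction in batch; here is a sketch that pulls one frame every 10 seconds from every movie in a folder, assuming ffmpeg is on the PATH, with placeholder paths and file extension:

        # Batch frame extraction with ffmpeg (assumes ffmpeg is installed and on PATH).
        import pathlib
        import subprocess

        MOVIE_DIR = pathlib.Path("movies")          # placeholder folder
        OUT_DIR = pathlib.Path("frames")
        OUT_DIR.mkdir(exist_ok=True)

        for movie in MOVIE_DIR.glob("*.avi"):       # adjust the pattern to your formats
            out_pattern = OUT_DIR / (movie.stem + "_%04d.jpg")
            # fps=1/10 keeps one frame every 10 seconds; adjust to taste.
            subprocess.run(["ffmpeg", "-i", str(movie), "-vf", "fps=1/10",
                            str(out_pattern)], check=True)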

  • Scanned JPEGs are large and slow to load - can they be optimized losslessly?

    - by Alistair Knock
    I have hundreds of JPEG photographs which were scanned about 5 years ago from negatives using a Konica Minolta DiMAGE Scan Dual IV. The dimensions are ~4500x3000 and the file size is around 12 MB, compared to shots from a DSLR with dimensions of 3000x2300 and a file size of 2-4 MB (actually, these are the output from a RAW convertor). The file size is obviously quite a big difference, but the issue that's bothering me is that the (perceived) loading time is at least 10 times slower. Is this size/speed discrepancy likely to be because the scanner software saved the JPEGs inefficiently / using an old compression format, or is it simply that the scanned negatives contain much more "detail" (in the form of grain/noise) than the digital images? If the former, is there a way to losslessly optimize them? I've tried re-exporting the scanned files to full-size JPEG from my RAW software but the file size is pretty much the same. Both files will have been saved at quality 100.
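
    Re-exporting from the RAW software re-encodes the image, so it won't shrink much; a genuinely lossless pass is what jpegtran (from libjpeg) does, rewriting the entropy coding without touching the pixels. Whether it helps here depends on how the scanner wrote the files, but it costs nothing to try; a batch sketch with a placeholder folder, assuming jpegtran is installed:

        # Lossless re-compression of scanned JPEGs with jpegtran (assumes jpegtran is installed).
        # -optimize rebuilds the Huffman tables, -progressive can help perceived loading,
        # -copy all keeps the metadata; the pixels are untouched.
        import pathlib
        import subprocess

        for jpg in pathlib.Path("scans").glob("*.jpg"):       # placeholder folder
            out = jpg.with_name(jpg.stem + "_opt.jpg")
            subprocess.run(["jpegtran", "-optimize", "-progressive", "-copy", "all",
                            "-outfile", str(out), str(jpg)], check=True)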

  • Can anyone recommend a decent tool for optimizing images other than Photoshop

    - by toomanyairmiles
    Can anyone recommend a decent tool for optimising images other than Adobe Photoshop, the GIMP etc.? I'm looking to optimise images for the web, preferably online and free. Basically I have a client who can't install additional software on their work PC but needs to optimise photographs and other images for their website and is presently uploading 1 or 2 MB files. On a personal level I'm interested to see what other people are using...

  • How can I boost the volume of my Video

    - by Sunny Shah.
    I have a video that I need to pass on to some of my friends but it has very low audio volume. How can I boost the audio volume in this video so that it has a similar level as my other videos? Is there a video converter that can boost the audio volume?
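
    Many converters can do this, but ffmpeg alone is enough: its volume audio filter rescales the audio while the video stream is copied untouched. A sketch assuming ffmpeg is installed; the factor 2.0 (double the volume) and the file names are just example values:

        # Boost the audio volume 2x without re-encoding the video (assumes ffmpeg is installed).
        import subprocess

        subprocess.run(["ffmpeg", "-i", "input.mp4",
                        "-c:v", "copy",             # keep the video stream as-is
                        "-af", "volume=2.0",        # double the audio volume
                        "output_louder.mp4"], check=True)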

  • netcat as a multithread server

    - by etuardu
    Hello, I use netcat to run a simple server like this:
        while true; do nc -l -p 2468 -e ./my_exe; done
    This way, anyone is able to connect to my host on port 2468 and talk with "my_exe". Unfortunately, if someone else tries to connect during an open session, they get a "Connection refused" error, because netcat is not listening again until the next iteration of the while loop. Is there a way to make netcat behave like a multithreaded server, i.e. always listening for incoming connections? If not, are there any workarounds for this? Thank you all!
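
    One workaround (besides tools built for this, such as socat or ncat's keep-open mode) is to replace the loop with a small server that accepts every connection and spawns ./my_exe per client. A minimal Python sketch of that idea, assuming ./my_exe talks over stdin/stdout just as it does with nc -e:

        # Minimal always-listening replacement for the nc loop: one ./my_exe per client.
        # Assumes ./my_exe reads stdin and writes stdout, as with nc -e (Linux only).
        import socketserver
        import subprocess

        class ExeHandler(socketserver.BaseRequestHandler):
            def handle(self):
                fd = self.request.fileno()              # the client socket
                # Wire the child's stdin/stdout straight to the socket.
                subprocess.run(["./my_exe"], stdin=fd, stdout=fd)

        class ThreadingServer(socketserver.ThreadingTCPServer):
            allow_reuse_address = True

        if __name__ == "__main__":
            ThreadingServer(("", 2468), ExeHandler).serve_forever()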

  • X11 for apache user

    - by fuenfundachtzig
    We are using inkscape to convert SVG images uploaded to our server via a web form. For this inkscape offers a batch mode via the -z option, but this batch mode has a flaw: when inkscape is run by the apache user, it breaks saying
        $ inkscape -z -W drawing.svg
        X11 connection rejected because of wrong authentication.
        The application 'inkscape' lost its connection to the display localhost:11.0; most likely the X server was shut down or you killed/destroyed the application.
    If you do the same as a normal user you also get errors:
        Xlib: connection to "localhost:11.0" refused by server
        Xlib: PuTTY X11 proxy: MIT-MAGIC-COOKIE-1 data did not match
        (inkscape:24050): Gdk-CRITICAL **: gdk_display_list_devices: assertion `GDK_IS_DISPLAY (display)' failed
        301.27942
    But at least inkscape gives the correct answer (here the number stating the width of the image). Does somebody know how to make this also work for the apache user? Does it make sense to authorize apache to use X (if so, how)? In any case it doesn't feel like the right solution...
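
    A common way around this (rather than authorizing apache against a real X display) is to give the process its own throwaway X server via Xvfb; the xvfb-run wrapper handles the setup. A sketch of shelling out to it from the web application, assuming xvfb-run and Xvfb are installed:

        # Run inkscape under a private virtual X server so the apache user needs no real display.
        # Assumes xvfb-run (and Xvfb) are installed; -a picks a free display number automatically.
        import subprocess

        result = subprocess.run(
            ["xvfb-run", "-a", "inkscape", "-z", "-W", "drawing.svg"],
            capture_output=True, text=True, check=True)
        width = float(result.stdout.strip())   # inkscape -z -W prints the drawing width
        print(width)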

  • How can I batch convert SVG files containing text to PDF files (specifically on CentOS 5.3 x86_64)?

    - by molecules
    I would like to programmatically convert SVG files to PDF files. However, the SVG files contain text that must be searchable in the generated PDF files. Also, it has to work on Red Hat Enterprise Linux 5.3 or CentOS 5.3 for the x86_64 architecture. It would be nice if it were Open Source or at least not very expensive. Here is what I've tried. All of these, except Batik, work fine on Debian Lenny.
    - Inkscape: I can get it installed using autopackages from http://inkscape.modevia.com/ap, but when I use it from the command line, the text is not searchable.
    - Batik rasterizer [sic]: when it converts SVG files to PDF files, the text is no longer searchable.
    - svg2pdf: the source for this and several of its dependencies is available to download. I have been trying to get it to compile on CentOS, but haven't had success yet. I found a precompiled version for Debian x86_64, but it doesn't work on CentOS.
    - rsvg-convert: the generated PDF isn't searchable on CentOS 5.3. Perhaps installing a newer version of cairo would help. Thanks to DaveParillo for mentioning rsvg-convert (on superuser).
    SOLUTION (but perhaps some of the above will still be useful to the reader):
    - princeXML: it works fine on CentOS when installed from source. For some reason it doesn't work when installed from the .rpm. Thanks Erik Dahlström! (provided the solution that worked for my case on stackoverflow)
    Cross-posted on stackoverflow.
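
    Whichever converter ends up producing searchable text on the target box, the batch part is just a loop over the files; a sketch of that driver, shown here with rsvg-convert's PDF output and placeholder paths (substitute the prince or inkscape command line as needed):

        # Batch SVG -> PDF driver; the converter command is interchangeable.
        # Shown with rsvg-convert (-f pdf); swap in prince/inkscape as needed.
        import pathlib
        import subprocess

        for svg in pathlib.Path("svgs").glob("*.svg"):        # placeholder input folder
            pdf = svg.with_suffix(".pdf")
            subprocess.run(["rsvg-convert", "-f", "pdf", "-o", str(pdf), str(svg)],
                           check=True)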

  • Automatic Excel Script

    - by Thomas
    I am a 6th-year medical student and I'm working on my thesis. I have no experience with programming whatsoever; a friend recommended that I post my question here. I am struggling with the following problem: I have data on 400 patients, stored in 400 different Excel files. Each file contains 34 columns in a specific order, let's say A to AH. The order is the same in each of these 400 files. Now I need to make a new Excel document that contains the first column of each patient. So I need all the first columns of my 400 different Excel files, lined up next to each other in a new document, preferably in the form of an automatic script. After that I want to do the exact same thing for the second column, then the third, and so on. This is probably a problem that has already been solved; if not, could someone help me out? You have my thanks!
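
    One free route is a short Python script using the openpyxl library; the sketch below is an illustration under the assumption that each patient file keeps its data on the first sheet, with placeholder paths, and it is simply re-run once per column:

        # Collect one column from each of the 400 patient workbooks, side by side, in a new workbook.
        # Requires openpyxl (pip install openpyxl); paths and sheet layout are assumptions.
        import glob
        import openpyxl

        def collect_column(col_index, pattern="patients/*.xlsx", out_file="column_1.xlsx"):
            out_wb = openpyxl.Workbook()
            out_ws = out_wb.active
            for out_col, path in enumerate(sorted(glob.glob(pattern)), start=1):
                ws = openpyxl.load_workbook(path, data_only=True).active
                # Copy the requested column of this patient into the next free column.
                for row in range(1, ws.max_row + 1):
                    out_ws.cell(row=row, column=out_col,
                                value=ws.cell(row=row, column=col_index).value)
            out_wb.save(out_file)

        collect_column(1)    # repeat with 2, 3, ... 34 for the other columns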

  • Optimum number of threads while multitasking

    - by Gun Deniz
    I know similar questions have been asked, but I think my case is a little bit different. Let's say I have a computer with 8 cores and infinite memory, running a Linux OS. I have a calculation software package called Gaussian that can take advantage of multithreading, so I set its thread count to 8 for a single calculation to get maximum speed. However, I really can't decide what to do when I need to run, for instance, 8 calculations simultaneously. In that case, should I set the thread count to 1 (a total of 8 threads spawned across 8 processes) or keep it at 8 (a total of 64 threads spawned across 8 processes) for each job? Does it really matter much? A related question: does the OS automatically spread the threads of each process over different cores?
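
    For what it's worth, oversubscribing 8 cores with 64 threads usually just adds scheduling and synchronisation overhead, so one thread per job is the common choice when 8 jobs run at once, and the scheduler will spread the processes over the cores on its own. If you want to be explicit about the placement, the sketch below pins each single-threaded job to its own core with taskset (util-linux); the "g09 jobN.com" command line is a placeholder for whatever your Gaussian setup actually uses:

        # Launch 8 single-threaded jobs, one pinned to each core with taskset (util-linux).
        # The "g09 jobN.com" invocation is a placeholder; adapt it to your Gaussian setup.
        import subprocess

        jobs = ["job%d.com" % i for i in range(8)]
        procs = [subprocess.Popen(["taskset", "-c", str(core), "g09", job])
                 for core, job in enumerate(jobs)]
        for p in procs:
            p.wait()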

  • Capture documents in bitonal, or grayscale then downsample

    - by Jason R. Coombs
    I'm about to embark on a document archival process. I'm going to spend a lot of good money to archive some paper (actually microfiche) to TIFF images. I have a choice of 300-dpi bitonal (1-bit, black/white) or 300-dpi grayscale (8-bit). Cost is the same for either format. Data volume (and thus image size) is not a factor. It seems to me that the grayscale, since it is scanned at the same resolution as the bitonal, would always contain more information and could always be downsampled to the equivalent bitonal image. Are there any downsides to selecting grayscale and then later downsampling to bitonal if desired? In other words, is it possible that the scanning software will produce a more accurate (or more legible) representation than a grayscale image converted to bitonal?
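
    For what it's worth, the conversion really is one-way: an 8-bit grayscale scan can be reduced to 1-bit later with a threshold (or dithering) chosen per document, whereas a bitonal scan has already baked in whatever threshold the scanner firmware picked. A sketch of that later conversion with Pillow, where the threshold is exactly the part you keep control of (file names and the threshold value are placeholders):

        # Convert an 8-bit grayscale TIFF to a 1-bit (bitonal) TIFF after the fact.
        # Requires Pillow; file names and the threshold value are placeholders to tune.
        from PIL import Image

        gray = Image.open("page_gray.tif").convert("L")
        THRESHOLD = 160                                            # tune per page or per batch
        bitonal = gray.point(lambda p: 255 if p > THRESHOLD else 0).convert("1")
        bitonal.save("page_bitonal.tif", compression="group4")     # CCITT G4, common for bitonal TIFF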
