Search Results

Search found 1246 results on 50 pages for 'backupup compression'.


  • Criteria for selecting software for an embedded device

    - by Suresh Kumar
    We are currently evaluating Web servers for an embedded device. We have laid down evaluation criteria for things like HTTP version, security, compression, etc. On the embeddable side, we have identified the following criteria:

        - Memory footprint
        - Memory management (support for plugging in a custom memory manager)
        - CPU usage
        - Thread usage (support for a thread pool)
        - Portability

    What I want input on is: Are there any other criteria that embeddable software should meet? And what exactly does it mean when someone says that a piece of software is designed for embeddable use? We have currently zeroed in on two Web servers: AppWeb and Lighttpd (lighty). Feature-wise, the two seem to be on par. However, it is claimed that AppWeb is designed for embedded use while Lighttpd is not. To choose between the above two Web servers, what criteria should I be looking at?

  • What's the best way to convert a .eps (CMYK) to a .jpg (RGB) with ImageMagick

    - by Slinky
    Hi All, I have a bunch of .eps files (CMYK) that I need to convert to .jpg (RGB) files. The following command sometimes gives me under- or over-saturated .jpg images when compared to the source EPS file:

        $cmd = "convert -density 300 -quality 100% -colorspace RGB ".$epsURL." -flatten -strip ".$convertedURL;

    Is there a smarter way to do this, such that the converted image has the same qualities as the source EPS file? Here is an example of the source file info:

        Image: rejm.eps
        Format: PS (PostScript)
        Class: DirectClass
        Geometry: 537x471
        Base geometry: 1074x941
        Type: ColorSeparation
        Endianess: Undefined
        Colorspace: CMYK
        Channel depth:
          Cyan: 8-bit
          Magenta: 8-bit
          Yellow: 8-bit
          Black: 8-bit
        Channel statistics:
          Cyan:    Min: 0 (0)  Max: 255 (1)         Mean: 161.913 (0.634955)  Standard deviation: 72.8257 (0.285591)
          Magenta: Min: 0 (0)  Max: 255 (1)         Mean: 184.261 (0.722591)  Standard deviation: 75.7933 (0.297229)
          Yellow:  Min: 0 (0)  Max: 255 (1)         Mean: 70.6607 (0.277101)  Standard deviation: 39.8677 (0.156344)
          Black:   Min: 0 (0)  Max: 195 (0.764706)  Mean: 34.4382 (0.135052)  Standard deviation: 38.1863 (0.14975)
        Total ink density: 292%
        Colors: 210489
        Rendering intent: Undefined
        Resolution: 28.35x28.35
        Units: PixelsPerCentimeter
        Filesize: 997.727kb
        Interlace: None
        Background color: white
        Border color: #DFDFDFDFDFDF
        Matte color: grey74
        Page geometry: 537x471+0+0
        Dispose: Undefined
        Iterations: 0
        Compression: Undefined
        Orientation: Undefined
        Signature: 8ea00688cb5ae496812125e8a5aea40b0f0e69c9b49b2dc4eb028b22f76f2964
        Profile-iptc: 19738 bytes

    Thanks
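
    One approach worth sketching: let ImageMagick convert between color spaces via ICC profiles rather than the bare -colorspace switch, which is what usually causes the saturation shifts. The profile file names below are assumptions; substitute whatever CMYK and RGB profiles are installed on the system:

        # A minimal sketch: assign a CMYK source profile, then convert into sRGB.
        # USWebCoatedSWOP.icc and sRGB.icc are assumed to be available locally.
        convert -density 300 input.eps -profile USWebCoatedSWOP.icc -profile sRGB.icc -flatten -quality 95 output.jpg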

  • Looking for a good Dynamic Imaging Solution

    - by user151289
    I work for a small e-commerce shop and we are looking for a process that will handle resizing our product images dynamically. Currently our designers take high-resolution photos, either provided by the manufacturers or created in house, and alter them to fit various pages on our site. The designers are constantly resizing, cropping, altering compression levels, etc., of each product photo to fit the needs of the business. Given that our product line is updated frequently, this becomes a monotonous task. Adobe Scene7 does exactly what we are looking to do, and the images are served from a CDN; unfortunately we found it to be too expensive. I'm curious to learn how others handle this process at their organizations. Does anyone know of any good third-party tools or other SaaS providers that can perform some basic image manipulation and serve the results on the fly?
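
    For scale, the core of a self-hosted on-the-fly resizer is quite small; here is a minimal PHP/GD sketch (the URL parameters, paths, and quality value are illustrative, and production use would need caching and stricter input validation):

        <?php
        // Minimal dynamic-resize sketch: /resize.php?src=photo.jpg&w=300
        $src = basename($_GET['src']);           // crude sanitization, for the sketch only
        $w   = max(1, (int)$_GET['w']);
        $img = imagecreatefromjpeg("products/$src");
        $h   = (int)(imagesy($img) * $w / imagesx($img));   // preserve aspect ratio
        $out = imagecreatetruecolor($w, $h);
        imagecopyresampled($out, $img, 0, 0, 0, 0, $w, $h, imagesx($img), imagesy($img));
        header('Content-Type: image/jpeg');
        imagejpeg($out, null, 85);               // emit at quality 85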

  • Python Daemon Subprocess not working at boot

    - by Adam Richardson
    I am attempting to write a Python daemon that will launch at boot. The goal of the script is to receive a job from our Gearman load-balancing server and complete the job. I am using the python-daemon module from PyPI (http://pypi.python.org/pypi/python-daemon/). The job it completes is converting images from ORF (Olympus raw image format) to JPEG; to accomplish this, an outside program is used, ufraw in this case. The problem comes in when I start the daemon at boot: if I launch it from the shell it runs perfectly and completes the work, but when it starts at boot it is unable to launch the subprocess command.

        commandString = '/usr/bin/ufraw-batch --interpolation=four-color --wb=camera --compression=100 --output="' + outfile + '" --out-type=jpg --overwrite "' + infile + '"'
        args = shlex.split(commandString)
        process = subprocess.Popen(args).wait()

    I am not sure what I am doing wrong. Thanks for any help.
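
    One common cause, offered as an assumption since the failure itself isn't shown: a process started at boot inherits almost no environment, so anything ufraw-batch resolves through $PATH, $HOME, or the working directory can fail even though the identical command works from an interactive shell. A sketch that pins those down and captures the error text (the paths are illustrative):

        import subprocess

        infile = '/data/in.orf'        # placeholder paths
        outfile = '/data/out.jpg'

        args = ['/usr/bin/ufraw-batch',
                '--interpolation=four-color', '--wb=camera', '--compression=100',
                '--output=' + outfile, '--out-type=jpg', '--overwrite', infile]
        # Explicit environment and working directory, so the boot-time context
        # matches the interactive shell as closely as possible.
        env = {'PATH': '/usr/bin:/bin', 'HOME': '/var/lib/mydaemon'}
        process = subprocess.Popen(args, env=env, cwd='/tmp', stderr=subprocess.PIPE)
        _, err = process.communicate()
        if process.returncode != 0:
            open('/tmp/ufraw-error.log', 'ab').write(err)   # keep the reason somewhere readable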

  • Starting a new process in an ASP.NET web service

    - by Deumber
    I have the following code:

        public void BeginConvert(object data)
        {
            ConverterData cObject = (ConverterData)data;
            string argument = string.Format("-i \"{0}\" -b {1} \"{2}\"", cObject.Source, compression, cObject.Destiny);
            Process converterProcess = new Process();
            converterProcess.StartInfo.FileName = ffPath;
            converterProcess.StartInfo.Arguments = argument;
            converterProcess.StartInfo.WindowStyle = ProcessWindowStyle.Hidden;
            converterProcess.Start();
            converterProcess.WaitForExit();
        }

    I use it in a web service, starting it in a new thread, and it returns exit code 1 (an error; I'm trying to do a video conversion with the ffmpeg library). I impersonate ASP.NET to use a local account with permissions to read and write files. When I run it on my machine, running or debugging, it works, but now that the web service is running in IIS it doesn't. Could someone help me?
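
    A useful first step, sketched here as diagnostic scaffolding rather than a fix: redirect ffmpeg's standard error so the actual reason behind exit code 1 lands somewhere readable, since under IIS there is no console to see it (the log path is illustrative; requires System.Diagnostics and System.IO):

        Process converterProcess = new Process();
        converterProcess.StartInfo.FileName = ffPath;
        converterProcess.StartInfo.Arguments = argument;
        converterProcess.StartInfo.UseShellExecute = false;         // required for stream redirection
        converterProcess.StartInfo.RedirectStandardError = true;
        converterProcess.StartInfo.CreateNoWindow = true;
        converterProcess.Start();
        string errors = converterProcess.StandardError.ReadToEnd(); // read before WaitForExit to avoid deadlock
        converterProcess.WaitForExit();
        File.AppendAllText(@"C:\temp\ffmpeg.log", errors);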

  • PHP gzip of an XML file (53 MB) causes an Out of memory error

    - by ntan
    Hi, I have a 53 MB XML file that I want to gzip. The code below gzips it:

        $gzFile = "my.gz";
        $data = implode("", file($filename));
        $gzdata = gzencode($data, 9);
        // open gz -- 'w9' is highest compression
        $fp = gzopen($gzFile, 'w9');
        // write the compressed data into the file
        gzwrite($fp, $gzdata);
        // close the file
        gzclose($fp);

    This causes:

        PHP Fatal error: Out of memory (allocated 70516736) (tried to allocate 24 bytes)

    Does anyone have any suggestions? I have already increased the memory limit in php.ini.
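
    A sketch of a streaming alternative: the code above holds the entire file, plus a compressed copy of it, in memory at once (and compresses twice, since the gzencode() output is written back through gzopen()'s 'w9' filter). Reading and writing in small chunks keeps memory use flat regardless of file size:

        <?php
        // Stream-compress $filename into my.gz in 512 KB chunks.
        $in  = fopen($filename, 'rb');
        $out = gzopen('my.gz', 'w9');      // 'w9' already applies maximum compression
        while (!feof($in)) {
            gzwrite($out, fread($in, 524288));
        }
        fclose($in);
        gzclose($out);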

  • How do I get Cabal to bypass my Windows proxy settings?

    - by Brent.Longborough
    When retrieving packages with Cabal, I frequently get errors with this message:

        user error (Codec.Compression.Zlib: premature end of compressed stream)

    It looks like Cabal is using my Windows networking proxy settings (for Privoxy). From digging around Google, Cabal or its libraries appear to have (had) a problem in this area. Possible solutions I can see are:

        - Turn off proxying while using Cabal (not very keen on this one);
        - Get a patch and start hacking (I'm hesitant to go down this path, as I'm a complete Haskell noob and I'm not yet comfortable with Darcs); or
        - Give it the magic "can I haz no proxy" parameter.

    Hence the question.
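
    One low-effort thing to try, on the assumption that this Cabal build honors the http_proxy environment variable: clear it for a single cmd.exe session rather than turning the proxy off system-wide:

        rem Clears the proxy for this console session only, then retries the fetch
        set http_proxy=
        cabal update
        cabal install <package-name>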

  • How do I prevent an ASP.NET MVC deployment on IIS 6.0, using wildcard mapping, from attempting to handle requests for hidden shares?

    - by Rob
    As noted by the title, what is the best way to configure an IIS 6.0 deployment of an ASP.NET MVC application so that connections to hidden shares are ignored? The application in question is using wildcard mapping to allow for clean URLs, since we are planning to upgrade to IIS 7.0 in the near future, and we are also handling the caching and compression issues with a custom library, so we would like to avoid turning wildcard mapping off unless absolutely necessary. Below is one of the errors from the application, to give you an example of what we are seeing:

        System.Web.HttpException
        Time Stamp  - 03 Mar 2010, 08:11:44
        Path        - N/A, Internal Server Operation
        Message     - The controller for path '/C$' could not be found or it does not implement IController.
        Target Site - System.Web.Mvc.IController GetControllerInstance(System.Type)
        Stack Trace -
            at System.Web.Mvc.DefaultControllerFactory.GetControllerInstance(Type controllerType)
            at System.Web.Mvc.DefaultControllerFactory.CreateController(RequestContext requestContext, String controllerName)
            at System.Web.Mvc.MvcHandler.ProcessRequest(HttpContextBase httpContext)
            at System.Web.Mvc.MvcHandler.ProcessRequest(HttpContext httpContext)
            at System.Web.Mvc.MvcHandler.System.Web.IHttpHandler.ProcessRequest(HttpContext httpContext)
            at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
            at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
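
    One way to keep such requests out of the MVC pipeline, sketched with an illustrative route name and pattern: tell routing to ignore any URL containing a '$', which covers hidden-share probes like /C$:

        // In RegisterRoutes (Global.asax.cs): a sketch, not a verified fix.
        // Skip any path containing '$' before MVC tries to resolve a controller.
        routes.IgnoreRoute("{*dollarpath}", new { dollarpath = @".*\$.*" });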

  • Compressing a hex string in Ruby/Rails

    - by PreciousBodilyFluids
    I'm using MongoDB as a backend for a Rails app I'm building. Mongo, by default, generates 24-character hexadecimal ids for its records to make sharding easier, so my URLs wind up looking like:

        example.com/companies/4b3fc1400de0690bf2000001/employees/4b3ea6e30de0691552000001

    which is not very pretty. I'd like to stick to the Rails URL conventions, but also leave these ids as they are in the database. I think a happy compromise would be to compress these hex ids into shorter strings using more characters, so they'd look something like:

        example.com/companies/3ewqkvr5nj/employees/9srbsjlb2r

    Then in my controller I'd reverse the compression, get the original hex id, and use that to look up the record. My question is: what's the best way to convert these ids back and forth? I'd of course want them to be as short as possible, but also URL-safe and simple to convert. Thanks!
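
    Since the ids are plain hexadecimal, one dependency-free sketch is to treat each id as a number and re-encode it in base 36 (the highest base Ruby's Integer#to_s accepts), which shortens 24 hex digits to roughly 19 URL-safe characters:

        # Sketch: reversible hex <-> base-36 conversion for prettier URLs
        def shorten(hex_id)
          hex_id.to_i(16).to_s(36)                   # 24 hex digits -> a ~19-character base-36 string
        end

        def restore(short_id)
          short_id.to_i(36).to_s(16).rjust(24, '0')  # pad leading zeros back to 24 digits
        end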

  • getAssetFileDescriptor from ZipResourceFile merges all mp3s in MediaPlayer (SOLVED)

    - by Jordi
    I have a program with an expansion file that stores 4 mp3s in an .obb file (a zip without compression). I can retrieve the data, but instead of getting the audio file I asked for, it merged ALL audio files into the same AssetFileDescriptor. ---SOLVED--- with the fixes below.

    Support class:

        public AssetFileDescriptor getAudio() {
            ZipResourceFile expansionFile = APKExpansionSupport.getAPKExpansionZipFile(c, 21, 21);
            AssetFileDescriptor afd = null;
            if (take == 1) {
                afd = expansionFile.getAssetFileDescriptor("file01.mp3");
            } else if (take == 2) {
                afd = expansionFile.getAssetFileDescriptor("file02.mp3");
            }
            // more else-ifs ............
            return afd;
        }

    In the MediaPlayer class:

        AssetFileDescriptor fd = Llistat.getInstance().getAudio();
        mPlayer.setDataSource(fd.getFileDescriptor(), fd.getStartOffset(), fd.getLength());
        mPlayer.prepare();
        fd.close();

    My problem was that I was directly returning and using a FileDescriptor, when I needed the AssetFileDescriptor so I could take its StartOffset and Length.

  • Serving GZipped files from S3 using the Asset Pipeline

    - by kmurph79
    I have a Rails 3.2.3 app on Heroku, and I'm using the asset_sync gem to serve my assets from S3, via these instructions. It works great, except S3 is not serving the gzipped CSS/JS files (just the uncompressed versions). I've enabled gzip compression, to no avail:

        config.gzip_compression = true

    According to "Using GZIP with html pages served from Amazon S3", I need to add metadata to the S3 object when uploading. How would I do this in concert with the asset pipeline? Thank you for any help.
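
    For reference, the metadata in question is a Content-Encoding header stored on the object itself, which S3 then replays to every client. As a sketch of what has to end up on each gzipped asset (shown here with the AWS CLI, which is an assumption and not part of asset_sync):

        # Upload a pre-gzipped asset together with the headers S3 must serve back
        aws s3 cp public/assets/application.css.gz s3://my-bucket/assets/application.css \
            --content-encoding gzip --content-type text/css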

  • Drive-space-hungry NoSQL databases

    - by forum_inquisitor
    I've tested NoSQL databases like CouchDB, MongoDB and Cassandra, and observed a tendency for them to absorb a very large amount of drive space relative to the inserted key-value pairs. Comparing CouchDB and MySQL as schemaless stores, CouchDB consumes much more drive space than MySQL. I know that key-value DBs do versioning by default, have long UUIDs, and need key optimization (the comparison was between about 15 million rows in MySQL and 1-5 million documents in the listed NoSQL DBs). My question is: is there any NoSQL database with good compaction/compression of data, so that I can have a NoSQL database with a size closer to 5 GB than 50 GB?

  • Making a "Babbelbox" you can speak to at parties

    - by Spidfire
    I've got a project to make for a party; it's called a "Babbelbox" in Holland. It's a computer with a webcam and microphone that can be used to make a kind of video log of everyone who wants to say something about the party. But the problem is that I don't know where to start. I've made a kind of video capture system in C, but I can't save the data in a good format, so it would jam my hard disk full in one hour. Requirements:

        - Record video + audio
        - Recording has to start after pressing a button
        - Good compression of the recorded videos (would be even better if they can be read by Final Cut Pro or Premiere Pro)
        - A lightweight program would be nice, but I could scale up the computer power
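
    As a sketch of the capture-and-compress side, under the assumption of a Linux box with a V4L2 webcam and ALSA audio (device names are illustrative), ffmpeg can be triggered per button press and produces H.264/AAC files that Final Cut Pro and Premiere can read:

        # Record webcam + microphone to a compressed .mp4 until the process is stopped.
        # /dev/video0 and "default" are assumed device names.
        ffmpeg -f v4l2 -i /dev/video0 -f alsa -i default \
               -c:v libx264 -preset fast -crf 23 -c:a aac \
               "take_$(date +%s).mp4"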

  • What's the requests/second standard for scraping websites?

    - by feydr
    This was the closest question to mine, and it wasn't really answered very well, IMO: http://stackoverflow.com/questions/2022030/web-scraping-etiquette. I'm looking for the answer to #1: how many requests per second should you be making when you scrape? Right now I pull from a queue of links. Every site that gets scraped has its own thread and sleeps for 1 second between requests. I ask for gzip compression to save bandwidth. Are there standards for this? Surely all the big search engines have some set of guidelines they follow in this regard.
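
    For concreteness, the loop being described boils down to something like this sketch (using the requests library; the 1-second delay constant is exactly the value in question):

        import time
        import requests

        DELAY = 1.0   # seconds between requests to one site: the number being asked about

        def scrape(urls):
            last = 0.0
            for url in urls:
                wait = DELAY - (time.time() - last)
                if wait > 0:
                    time.sleep(wait)                 # throttle per site
                last = time.time()
                resp = requests.get(url, headers={'Accept-Encoding': 'gzip'})
                yield url, resp.text                 # requests decompresses transparently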

  • Expression Encoder - Limitations for file Dimension - min size of 64 * 64 and must be a multiple of 4

    - by PortageMonkey
    I receive error messages when attempting to encode files in Expression Encoder when the file width or height is not a multiple of four, or is smaller than 64. I have been able to find very little about this in the documentation / web searches, and nothing that explains what settings may cause or alleviate these limitations. I assume it has something to do with the underlying data type. Error message:

        Invalid Width Specified. The value must be an integer between 64 and 4096 and be a multiple of 4.

    Can anyone provide further details on why this happens and which settings can be manipulated to change this behavior, i.e. quality, compression, etc.?
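
    If the limits turn out to be fixed, the usual workaround is to snap the requested dimensions onto them before encoding; a small sketch of the arithmetic (the method name is illustrative):

        // Clamp to the encoder's stated range, then round up to the next multiple of 4.
        static int SnapDimension(int requested)
        {
            int clamped = Math.Max(64, Math.Min(4096, requested));
            return (clamped + 3) / 4 * 4;   // integer round-up; 4096 is already a multiple of 4
        }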

  • Restoring using SyncBack without profiles

    - by Thomas Matthews
    I backed up my internal hard drive (C:) using SyncBack onto an external (USB) hard drive with maximum compression. I then performed a clean install of Windows Vista on the computer. I forgot to copy the SyncBack logs before the clean install, and now whenever I try to restore a directory, the RAR/ZIP files are copied to the system hard drive instead of their contents being extracted there. Also, SyncBack does not traverse the folders during the restore process. How can I tell SyncBack to expand the compressed files? I am running the freeware version of SyncBack, and I would have to create new log files (unless SyncBack put them somewhere on the external drive). My alternative is to write a program that traverses the folders on the external drive and extracts the files from the RAR/ZIP archives. I am using Windows Vista, Service Pack 2, and the data size prior to backup was about 200 GB. (The backup process took over 72 hours due to "hiccups".)
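
    Should it come down to the do-it-yourself fallback, the ZIP half is only a few lines of Python (RAR archives would need an external tool; the drive letters are illustrative):

        import os
        import zipfile

        SRC = r'E:\backup'       # external drive
        DST = r'C:\restore'

        # Walk the backup tree and expand every .zip into a mirrored folder layout.
        for root, _dirs, files in os.walk(SRC):
            for name in files:
                if name.lower().endswith('.zip'):
                    target = os.path.join(DST, os.path.relpath(root, SRC))
                    os.makedirs(target, exist_ok=True)
                    with zipfile.ZipFile(os.path.join(root, name)) as zf:
                        zf.extractall(target)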

  • Apache server-side file caching via .htaccess?

    - by purpler
    Hi, I'm starting a new website that will include several JS libs, and I would like to know what an .htaccess file template should look like with caching of media and JS files turned on. Which is better for compression, gzip or Deflate? Is it a better/faster solution to serve those JS libs off the Google CDN, or locally? I'm asking the CDN question because some of the scripts served off the Google CDN will potentially update and could eventually break the website layout, so I thought it would be better for me to host them locally and cache them via the webserver, if that works at the same or nearly the same speed.
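
    As a starting-point sketch (mod_deflate and mod_expires must be enabled; the lifetimes are illustrative). On the compression question: in Apache 2.x the choice is mostly moot, since mod_deflate is the module that produces gzip-encoded responses:

        # Sketch of a caching + compression .htaccess
        <IfModule mod_deflate.c>
            AddOutputFilterByType DEFLATE text/html text/css application/javascript
        </IfModule>
        <IfModule mod_expires.c>
            ExpiresActive On
            ExpiresByType text/css "access plus 1 week"
            ExpiresByType application/javascript "access plus 1 week"
            ExpiresByType image/png "access plus 1 month"
        </IfModule>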

  • Error pushing app to Heroku

    - by Ryan Max
    Hello, I am getting the following error when I try to push my app to Heroku. I saw a similar thread on here, but the issues there seemed related to OS X, and I am running Windows 7.

        $ git push heroku master
        Counting objects: 1652, done.
        Delta compression using up to 4 threads.
        fatal: object 91f5d3ee9e2edcd42e961ed2eb254d5181cbc734 inconsistent object length (476 vs 8985)
        error: pack-objects died with strange error
        error: failed to push some refs to '[email protected]:floating-stone-94.git'

    I'm not sure what this means, and I can't find any consistent answers on the Internet. I tried re-creating my SSH public key, but still get the same result.
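
    The message points to a corrupt object in the local repository rather than to anything Heroku-specific; a sketch of the usual triage with plain git commands:

        # List any corrupt or missing objects in the local repository.
        git fsck --full
        # If fsck comes back clean, repack and retry the push.
        git gc --aggressive
        # If an object is reported corrupt and another intact clone exists,
        # the matching file under .git/objects/ can be copied across from it.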

  • Traffic consumed by Team Foundation Server 2010

    - by micha12
    We are currently selecting source control and issue-tracking software, and are looking at Team Foundation Server 2010. Some participants in our project often have a slow Internet connection (for example, during travel), so it is important for us to have a source control system that does not consume too much traffic. I was unable to find information on traffic consumption when using TFS 2010. Does anyone have such info? Does TFS 2010 support traffic compression? Do other source control systems (like SVN, for example) produce less or more traffic than TFS 2010?

  • GZIP .htaccess and PHP session problem

    - by Suresh
    Hi, I am trying to implement gzip compression for my website. I copied the code below into my .htaccess file:

        ExpiresActive On
        ExpiresDefault A604800
        Header append Cache-Control "public"
        <IfModule mod_deflate.c>
            <FilesMatch "\.(js|css)$">
                SetOutputFilter DEFLATE
            </FilesMatch>
        </IfModule>

    What happens now is that when I type in a username and password, the page reloads but the login form is still displayed, even though the session is set. When I refresh the page using Ctrl+R, the login form goes away and the username is displayed. What could the problem be? Waiting for your reply.
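
    A likely culprit, inferred from the symptoms rather than confirmed: ExpiresDefault A604800 marks every response, including the PHP login page itself, as cacheable for a week, so the browser redisplays its stale copy of the form until a forced refresh. A sketch that scopes the caching away from PHP output:

        # Keep long expiry for static assets, but never cache PHP responses
        <FilesMatch "\.php$">
            ExpiresActive Off
            Header set Cache-Control "no-cache, must-revalidate"
        </FilesMatch>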

  • Any way to chunk gzip with Apache and PHP

    - by donatJ
    I have a web application on a site that takes a while (~10 seconds) to complete a portion of the page near the bottom; it has been optimized as much as it can be, and caching is not an option. We have compression enabled on the server via an .htaccess directive:

        SetOutputFilter DEFLATE

    The problem is that this causes the whole page to be held back until completion before it starts outputting to the user, which is not optimal, as the user sees nothing until the page completes. I have also tried it via the PHP ob_start("ob_gzhandler") method. Currently I have a <FilesMatch> rule in my .htaccess exempting this specific script from compression. Basically, my question is this: is there a way to chunk gzip or deflate output so that the user gets it in pieces and can see that the page has begun loading?
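
    gzip streams can be flushed mid-stream, so piecewise delivery is possible in principle; whether the pieces actually reach the browser early depends on every buffer in between (mod_deflate's DeflateBufferSize, proxies, the browser itself). A sketch of the PHP side under that assumption, with a hypothetical slow-section function:

        <?php
        // Emit the page in pieces and push each piece toward Apache.
        while (ob_get_level()) {
            ob_end_flush();            // drop PHP's own output buffers first
        }
        echo $topOfPage;               // the fast majority of the page, assumed built already
        flush();                       // hand it to Apache/mod_deflate immediately
        echo computeSlowSection();     // hypothetical ~10-second portion
        flush();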

  • Android steganography

    - by poo123
    I'm doing steganography on Android. My code is as below:

        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            mBitmap = BitmapFactory.decodeResource(getResources(), R.drawable.src);
            picw = mBitmap.getWidth();
            pich = mBitmap.getHeight();
            pix = new int[picw * pich];
            mBitmap.getPixels(pix, 0, picw, 0, 0, picw, pich);
            try {
                FileOutputStream fos = super.openFileOutput("dest.png", MODE_WORLD_READABLE);
                mBitmap.compress(CompressFormat.PNG, 100, fos);
                fos.flush();
                fos.close();
            } catch (Exception e) {
                tv.setText(e.getMessage());
            }
        }

    But whenever I save the source image with the Bitmap.compress() method, the pix[0] value before and after compression is changed, so I'm unable to extract the original data. Please help me.
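
    One plausible cause, offered as an assumption since PNG compression itself is lossless: BitmapFactory.decodeResource() scales the image for the device's screen density, resampling pixel values before compress() is ever reached. A sketch that loads the resource unscaled:

        // Disable density scaling so pixel values survive a lossless round-trip.
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inScaled = false;     // keep original pixel values
        Bitmap bmp = BitmapFactory.decodeResource(getResources(), R.drawable.src, opts);
        // Placing the image in res/drawable-nodpi/ has a similar effect.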

  • Where does Subversion physically store its database?

    - by Mika Jacobi
    After reading many introductions, getting-started guides, and documentation on SVN, I still cannot figure out where my versioning data is stored. I mean physically. I have over 3 GB of code checked in, and the repo is just a few MB large. This is still voodoo for me, and as a coder, I don't really believe in magic. EDIT: A contributor stated that not all the code is stored in the repo; is that true? I mean, if I delete my local working copy, I can still get my source code back from the repository... If so, I still can't understand how such compression can occur on my code...
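
    For reference, with the default FSFS back end everything lives under the repository directory itself, one delta-compressed file per revision, which is why the checked-in size and the repository size differ so much. A quick look, with an assumed repository path (the revs/0/ shard layout applies to Subversion 1.5 and later):

        ls /path/to/repo/db/revs/0/     # one file per revision: 0, 1, 2, ...
        svnadmin verify /path/to/repo   # confirms every revision is fully stored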

  • Is node.js ready for production use?

    - by Simon Wentley
    Starting a new project. It's basically a blogging/commenting system. We're considering node.js as the back-end server. Is node.js ready for this sort of thing, or is it too early and experimental? We need HTTPS and gzip compression; perhaps a front-end nginx server could provide these? What's missing from node.js that would make developing a web app difficult? From a production-readiness perspective, we're wondering whether it is stable enough to build a commercial app on top of. Thanks
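
    The nginx-in-front idea comes down to a small config; a sketch with illustrative ports, paths, and certificate names:

        # nginx terminates HTTPS and gzips; node.js listens on localhost:3000
        server {
            listen 443 ssl;
            ssl_certificate     /etc/nginx/ssl/site.crt;
            ssl_certificate_key /etc/nginx/ssl/site.key;
            gzip on;
            gzip_types text/css application/javascript application/json;
            location / {
                proxy_pass http://127.0.0.1:3000;   # the node.js app
                proxy_set_header Host $host;
            }
        }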

  • Why is C++ fwrite() producing larger output in release?

    - by waffleShirt
    I recently wrote an implementation of the canonical Huffman compression algorithm. I have a 500 KB test file that can be compressed to about 250 KB when running the debug and release builds from within Visual Studio 2008. However, when I run the release build straight from the executable, the test file only compresses to about 330 KB. I am assuming that something is going wrong when the file is written using fwrite(). I have tested the program and confirmed that uncompressing the files always produces the correct uncompressed file. Does anyone know why this could possibly be? How could the same executable file produce different-sized outputs depending on where it is launched from?
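
    Two mundane explanations worth ruling out before suspecting fwrite() itself, both assumptions from the symptoms: the working directory differs when launching from Explorer rather than Visual Studio (so a different input or output file may be involved), and on Windows a FILE* opened in text mode expands every 0x0A byte of compressed output into 0x0D 0x0A. A sketch of the binary-mode point:

        #include <cstdio>

        int main() {
            // 0x0A bytes would be rewritten as 0x0D 0x0A if the file were opened in text mode.
            unsigned char buffer[] = {0x41, 0x0A, 0x42};
            FILE* out = std::fopen("data.huff", "wb");   // "wb" (binary), not "w" (text)
            std::fwrite(buffer, 1, sizeof buffer, out);
            std::fclose(out);
            return 0;
        }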
