Search Results

Search found 1285 results on 52 pages for 'lossless compression'.

Page 39/52 | < Previous Page | 35 36 37 38 39 40 41 42 43 44 45 46  | Next Page >

  • Fixing warning from git

    - by japancheese
    I've been using a workflow of creating a git repository on a remote central server, cloning that repo on my local dev machine, doing some work, and then pushing the changes back to the same repo on the remote server. However - and I believe this started after a recent update I did to git - after pushing up a change, I'm getting the following warning: Counting objects: 2724, done. Delta compression using up to 2 threads. Compressing objects: 100% (2666/2666), done. Writing objects: 100% (2723/2723), 5.90 MiB | 313 KiB/s, done. Total 2723 (delta 219), reused 0 (delta 0) warning: updating the currently checked out branch; this may cause confusion, as the index and working tree do not reflect changes that are now in HEAD. Can someone explain exactly what this warning means, and what I'm doing wrong in my workflow that triggers it? (See the sketch after this entry.)

    Read the article
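
    The warning above is git reporting that the push went into a branch that is currently checked out in a non-bare repository, so that repository's working tree no longer matches its HEAD. A common remedy, sketched below, is to make the central repository bare, or to set an explicit policy for pushes to the checked-out branch (the server path is a placeholder):

        # On the central server: host the shared repository as a bare repo (no working tree)
        git init --bare /srv/git/project.git

        # Or, if the existing non-bare repository must stay as it is, choose how git
        # should treat pushes to the currently checked-out branch
        # (values include ignore, warn and refuse):
        git config receive.denyCurrentBranch ignore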

  • New to AVL tree implementation.

    - by nn
    I am writing a sliding window compression algorithm (LZ77) that searches for phrases in a "moving" dictionary. So far I have written a BST where each node is stored in an array, and its index in the array is also the value of the starting position in the window itself. I am now looking at transforming the BST into an AVL tree. I am a little confused by the sample implementations I have seen: some only appear to store the balance factors, whereas others store the height of each subtree. Are there any performance advantages/disadvantages to storing the height and/or the balance factor for each node? Apologies if this is a very simple question, but I'm still not visualizing how I want to restructure my BST to implement height balancing. Thanks. (See the sketch after this entry.)

    Read the article
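
    On the height-versus-balance-factor question, one common layout stores the height in each node and derives the balance factor on demand; storing only a balance factor saves a little memory per node but makes the bookkeeping after rotations fiddlier. A minimal sketch in Python, illustrative only and not tied to the array-indexed design described above:

        class AVLNode:
            """AVL node that stores its subtree height; the balance factor is derived."""
            def __init__(self, key):
                self.key = key
                self.left = None
                self.right = None
                self.height = 1  # height of a leaf

        def height(node):
            return node.height if node else 0

        def balance_factor(node):
            # > +1 means left-heavy, < -1 means right-heavy; either triggers a rotation.
            return height(node.left) - height(node.right)

        def update_height(node):
            # Call on the way back up the tree after an insert or delete.
            node.height = 1 + max(height(node.left), height(node.right))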

  • Own data format for the iPhone

    - by Stefan
    Hi, I would like to create my own data format for an iPhone app. The files should be structured similarly to, e.g., Apple's iWork files (.pages). That means I have a folder with some files in it: a folder "Fruits" containing Apple.xml, Banana.xml, Pear.xml and PreviewPicture.png. This folder "Fruits" should be packed into one handy file, 'Juicy.fruit'. Compression isn't necessary. How could I achieve this? I've discovered some open source ZIP libraries; however, I would like to build my own data format with the iPhone's built-in libs (if possible). Best regards, Stefan (See the sketch after this entry.)

    Read the article
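
    Document packages of this kind are typically just a folder stored inside a single archive, so one way to picture the format is an archive with compression switched off. A sketch of that idea in Python, for illustration only; it is not iPhone code, and the built-in iOS libraries would need their own equivalent:

        import os
        import zipfile

        def pack_fruit(folder, outfile):
            """Pack every file in `folder` into one archive, stored without compression."""
            with zipfile.ZipFile(outfile, "w", compression=zipfile.ZIP_STORED) as zf:
                for name in sorted(os.listdir(folder)):
                    zf.write(os.path.join(folder, name), arcname=name)

        def unpack_fruit(infile, folder):
            with zipfile.ZipFile(infile) as zf:
                zf.extractall(folder)

        # pack_fruit("Fruits", "Juicy.fruit")
        # unpack_fruit("Juicy.fruit", "Fruits-restored")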

  • IE 8 dialog windows not decompressing files

    - by Mike
    Hi, I've got a website where we have pre-compressed all of our HTML files. In general this works fine, but since IE 8 came out some people are finding that they cannot use some parts of the website. We use the showModalDialog command to open a dialog window pointing at one of our pre-compressed files, but the page just shows up as strange characters (i.e. not decompressed). It only happens in the dialog. I'm pretty sure our compression is fine, because the page they are viewing to open the dialog is also compressed. Has anyone else come across this, or does anyone have suggestions? I'm stumped. Thanks, Mike

    Read the article

  • Extremely slow insert from Delphi to Remote MySQL Database

    - by MarkRobinson
    Having a major hair-pulling issue with extremely slow inserts from Delphi 2010 to a remote MySQL 5.09 server. So far, I have tried: ADO using the MySQL ODBC driver, and Zeoslib v7 Alpha. I have used batching and direct insert with ADO (using table access), and with Zeos I have used SQL insertion with a Query, then Table direct mode, and also cached updates in Table mode using applyupdates and commit. With both technologies I have tried compression on and off. So far I have seen pretty much the same rate across the board: 7.5 records per second! From this I would assume that the remote server is just slow, but MySQL Workbench is amazingly fast, and the Migration Toolkit managed the initial migration very quickly (to be honest, I don't recall how quickly, which kind of means that it was quick). I'm just about to try the MyDAC components, as we already use SDAC (wish there was a multi-buy discount, or that we'd chosen UniDAC instead now!). Any ideas?

    Read the article

  • Untar, ungz, gz, tar - how do you remember all the useful options?

    - by deadprogrammer
    I am pretty sure I am not the only one with the following problem: every time I need to uncompress a file in *nix I can't remember all the switches and end up googling it, which is surprising considering how often I need to do this. Do you have a good compression cheat sheet? Or how about a mnemonic for all those nasty switches in tar? I am making this article a wiki so that we can create a nice cheat sheet here (one is started after this entry). Oh, and about man pages: if there's one thing they are not helpful for, it's figuring out how to uncompress a file.

    Read the article
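
    A starting point for the requested cheat sheet, covering the day-to-day cases with GNU tar (a common mnemonic: x = eXtract, c = Create, z = gZip, j = bzip2, f = File, v = Verbose, t = lisT):

        # Extract
        tar xzf archive.tar.gz        # gzip-compressed tarball
        tar xjf archive.tar.bz2       # bzip2-compressed tarball
        tar xf  archive.tar           # plain tarball

        # Create
        tar czf archive.tar.gz dir/   # gzip-compressed tarball of dir/

        # List contents without extracting
        tar tzf archive.tar.gz

        # Standalone gzip / gunzip
        gzip file                     # produces file.gz and removes the original
        gunzip file.gz                # restores file

        # Add v for a verbose file listing, e.g. tar xzvf archive.tar.gz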

  • Encode/compress sequence of repeating integers

    - by Alex
    Hey there! I have very long integer sequences that look like this (arbitrary length!): 0000000001110002220033333 Now I need some algorithm to convert this string into something compressed like a9b3a3c3a2d5, which means "a 9 times, then b 3 times, then a 3 times" and so on, where "a" stands for 0, "b" for 1, "c" for 2 and "d" for 3. How would you do that? So far nothing suitable has come to mind, and I had no luck with Google because I didn't really know what to search for. What is this kind of encoding / compression called? (See the sketch after this entry.) PS: I am going to do the encoding with PHP and the decoding in JavaScript.

    Read the article
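
    What the question describes is run-length encoding. A minimal sketch of both directions, written in Python for illustration (the asker plans to encode in PHP and decode in JavaScript; the same loops port directly to either):

        import re

        ALPHABET = "abcd"  # 'a' stands for 0, 'b' for 1, 'c' for 2, 'd' for 3

        def encode(digits):
            """Run-length encode a digit string: '0000000001110002220033333' -> 'a9b3a3c3a2d5'."""
            out = []
            i = 0
            while i < len(digits):
                j = i
                while j < len(digits) and digits[j] == digits[i]:
                    j += 1
                out.append(ALPHABET[int(digits[i])] + str(j - i))
                i = j
            return "".join(out)

        def decode(encoded):
            """Reverse the encoding: 'a9b3a3c3a2d5' -> the original digit string."""
            out = []
            for letter, count in re.findall(r"([a-z])(\d+)", encoded):
                out.append(str(ALPHABET.index(letter)) * int(count))
            return "".join(out)

        # decode(encode("0000000001110002220033333")) round-trips to the input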

  • Sending binary data via POST on Android

    - by wo_shi_ni_ba_ba
    Android supports a limited version of Apache's HTTP client (v4). Typically, if I want to send binary data with content type application/octet-stream via POST, I do the following: HttpClient client = getHttpClient(); HttpPost method=new HttpPost("http://192.168.0.1:8080/xxx"); System.err.println("send to server "+s); if(compression){ byte[]compressed =compress(s); RequestEntity entity = new ByteArrayRequestEntity(compressed); method.setEntity(entity); } HttpResponse resp=client.execute(method); However, ByteArrayRequestEntity is not supported on Android. What can I do?

    Read the article

  • Are these jobs for developers, for designers, or for the client himself, on a web-site project? [closed]

    - by jitendra
    Are these jobs for the developer, for the designers, or for the client himself, on a web-site project? The client is asking the XHTML/CSS/PHP coder to do all of the following: spell checking; grammar checking; descriptive alt text for big chart, graph and technical images; writing table summaries and captions; descriptive link text; color-contrast checking; deciding which content should be H2, H3, H4... and what should be <strong> or <span class="boldtext">; meta description and keywords for each page; image compression; deciding filenames for images, PDFs etc.; deciding the <title> for each page.

    Read the article

  • Criteria for selecting software for embedded device

    - by Suresh Kumar
    We are currently evaluating Web servers for an embedded device. We have laid down the evaluation criteria for things like HTTP version, security, compression etc. On the embeddable side, we have identified the following criteria: memory footprint, memory management (support for plugging in a custom memory manager), CPU usage, thread usage (support for a thread pool), and portability. What I want input on is: Are there any other criteria that embeddable software should meet? What exactly does it mean when someone says that a piece of software is designed for embedded use? We have currently zeroed in on two Web servers: AppWeb and Lighttpd (lighty). Feature-wise, both Web servers seem to be on par. However, it is claimed that AppWeb is designed for embedded use while Lighttpd is not. To choose between the two, what criteria should I be looking at?

    Read the article

  • What's the best way to convert a .eps (CMYK) to a .jpg (RGB) with ImageMagick?

    - by Slinky
    Hi All, I have a bunch of .eps files (CMYK) that I need to convert to .jpg (RGB) files. The following command sometimes gives me under- or over-saturated .jpg images when compared to the source EPS file: $cmd = "convert -density 300 -quality 100% -colorspace RGB ".$epsURL." -flatten -strip ".$convertedURL; Is there a smarter way to do this, such that the converted image has the same qualities as the source EPS file? (See the sketch after this entry.) Here is an example of the source file info: Image: rejm.eps Format: PS (PostScript) Class: DirectClass Geometry: 537x471 Base geometry: 1074x941 Type: ColorSeparation Endianess: Undefined Colorspace: CMYK Channel depth: Cyan: 8-bit Magenta: 8-bit Yellow: 8-bit Black: 8-bit Channel statistics: Cyan: Min: 0 (0) Max: 255 (1) Mean: 161.913 (0.634955) Standard deviation: 72.8257 (0.285591) Magenta: Min: 0 (0) Max: 255 (1) Mean: 184.261 (0.722591) Standard deviation: 75.7933 (0.297229) Yellow: Min: 0 (0) Max: 255 (1) Mean: 70.6607 (0.277101) Standard deviation: 39.8677 (0.156344) Black: Min: 0 (0) Max: 195 (0.764706) Mean: 34.4382 (0.135052) Standard deviation: 38.1863 (0.14975) Total ink density: 292% Colors: 210489 Rendering intent: Undefined Resolution: 28.35x28.35 Units: PixelsPerCentimeter Filesize: 997.727kb Interlace: None Background color: white Border color: #DFDFDFDFDFDF Matte color: grey74 Page geometry: 537x471+0+0 Dispose: Undefined Iterations: 0 Compression: Undefined Orientation: Undefined Signature: 8ea00688cb5ae496812125e8a5aea40b0f0e69c9b49b2dc4eb028b22f76f2964 Profile-iptc: 19738 bytes Thanks

    Read the article
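
    A bare -colorspace RGB switch leaves ImageMagick guessing at how the CMYK separation was made, which is a common cause of the saturation shifts described above. One approach that often tracks the source more closely is to convert through ICC profiles; a hedged sketch, where USWebCoatedSWOP.icc and sRGB.icc are placeholders for whatever CMYK and sRGB profiles are installed locally:

        # Interpret the EPS through a CMYK press profile, then convert to sRGB for JPEG output.
        convert -density 300 rejm.eps \
            -profile USWebCoatedSWOP.icc \
            -profile sRGB.icc \
            -flatten -quality 95 rejm.jpg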

  • Looking for a good Dynamic Imaging Solution

    - by user151289
    I work for a small e-commerce shop and we are looking for a process that will handle resizing our product images dynamically. Currently our designers take high-resolution photos, either provided by the manufacturers or created in-house, and alter them to fit various pages on our site. The designers are constantly resizing, cropping, altering compression levels, etc., of each product photo to fit the needs of the business. Since our product line is updated frequently, this becomes a monotonous task. Adobe Scene7 does exactly what we are looking for, and the images are served from a CDN; unfortunately we found it to be too expensive. I'm curious to learn how others handle this process at their organizations. Does anyone know of any good third-party tools or other SaaS providers that can perform some basic image manipulation and serve the images on the fly?

    Read the article

  • Python Daemon Subprocess not working at boot

    - by Adam Richardson
    I am attempting to write a Python daemon that will launch at boot. The goal of the script is to receive a job from our Gearman load-balancing server and complete the job. I am using the python-daemon module from PyPI (http://pypi.python.org/pypi/python-daemon/). The job it completes is converting images from ORF (Olympus raw image format) to JPEG; to accomplish this an outside program is used, ufraw in this case. The problem comes in when I start the daemon at boot: if I launch it from the shell it runs perfectly and completes the work, but when it starts at boot it is unable to launch the subprocess command. commandString = '/usr/bin/ufraw-batch --interpolation=four-color --wb=camera --compression=100 --output="' + outfile + '" --out-type=jpg --overwrite "' + infile + '"' args = shlex.split(commandString) process = subprocess.Popen(args).wait() I am not sure what I am doing wrong. (See the sketch after this entry.) Thanks for any help.

    Read the article
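
    A daemonized process normally starts with a stripped-down environment, '/' as its working directory, and its standard streams closed, so boot-time failures like this are often about PATH, permissions, or invisible error output rather than the command itself. A hedged sketch of one way to pin those down and capture the subprocess's real error messages (the log path and environment values are placeholders):

        import shlex
        import subprocess

        def convert_orf(infile, outfile, log_path="/var/log/orf-convert.log"):
            """Run ufraw-batch with an explicit environment and log its stdout/stderr."""
            command = (
                '/usr/bin/ufraw-batch --interpolation=four-color --wb=camera '
                '--compression=100 --output="{0}" --out-type=jpg --overwrite "{1}"'
            ).format(outfile, infile)
            args = shlex.split(command)
            with open(log_path, "a") as log:
                # A daemon inherits neither the login shell's working directory nor its
                # environment, so both are pinned explicitly here.
                process = subprocess.Popen(args, stdout=log, stderr=log,
                                           cwd="/", env={"PATH": "/usr/bin:/bin"})
                return process.wait()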

  • Starting a new process in an ASP.NET web service

    - by Deumber
    I have the following code: public void BeginConvert(object data) { ConverterData cObject = (ConverterData)data; string argument = string.Format("-i \"{0}\" -b {1} \"{2}\"", cObject.Source, compression, cObject.Destiny); Process converterProcess = new Process(); converterProcess.StartInfo.FileName = ffPath; converterProcess.StartInfo.Arguments = argument; converterProcess.StartInfo.WindowStyle = ProcessWindowStyle.Hidden; converterProcess.Start(); converterProcess.WaitForExit(); } I use it in a web service: I start it on a new thread and it returns exit code 1 (an error; I'm trying to do a video conversion with the ffmpeg library). I impersonate ASP.NET to use a local account with permissions to read and write files. When I run it on my machine, running or debugging, it works, but now that the web service is running in IIS it doesn't. Could someone help me?

    Read the article

  • PHP gzip of a 53 MB XML file causes an Out of memory error

    - by ntan
    Hi, I have a 53 MB XML file that I want to gzip. The code below gzips it: $gzFile = "my.gz"; $data = IMPLODE("", FILE($filename)); $gzdata = GZENCODE($data, 9); //open gz -- 'w9' is highest compression $fp = gzopen ($gzFile, 'w9'); //loop through array and write each line into the compressed file gzwrite ($fp, $gzdata); //close the file gzclose ($fp); This causes PHP Fatal error: Out of memory (allocated 70516736) (tried to allocate 24 bytes) Does anyone have any suggestions? (See the sketch after this entry.) I have already increased the memory limit in php.ini.

    Read the article
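
    The posted code reads the whole 53 MB file into memory and then compresses it twice (the gzencode output is written through a gzopen'd handle, which compresses again). The usual fix is to stream the file in chunks and let the gzip handle do the only compression. A sketch of the chunked approach, written in Python for illustration; the equivalent in PHP would be an fopen/fread loop feeding gzwrite, with the gzencode step dropped:

        import gzip

        def gzip_file(src_path, dst_path, chunk_size=1 << 20):
            """Compress src_path into dst_path without holding the whole file in memory."""
            with open(src_path, "rb") as src, gzip.open(dst_path, "wb", compresslevel=9) as dst:
                while True:
                    chunk = src.read(chunk_size)
                    if not chunk:
                        break
                    dst.write(chunk)

        # gzip_file("big.xml", "my.gz")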

  • How do I get Cabal to bypass my Windows proxy settings?

    - by Brent.Longborough
    When retrieving packages with Cabal, I frequently get errors with this message: user error (Codec.Compression.Zlib: premature end of compressed stream) It looks like Cabal is using my Windows Networking proxy settings (for Privoxy). From digging around Google, Cabal or its libraries appear to have (had) a problem in this area. Possible solutions I can see are: Turn off proxying while using Cabal (not very keen on this one); or Get a patch and start hacking. I'm hesitant to go down this path, as I'm a complete Haskell noob and I'm not yet comfortable with Darcs; or Give it the magic "can I haz no proxy" parameter. Hence the question.

    Read the article

  • How do I prevent an ASP.NET MVC deployment on IIS 6.0, using wildcard mapping, from attempting to handle requests for hidden shares?

    - by Rob
    As noted in the title, what is the best way to configure an IIS 6.0 deployment of an ASP.NET MVC application so that connections to hidden shares are ignored? The application in question uses wildcard mapping to allow for clean URLs, since we are planning to upgrade to IIS 7.0 in the near future, and we are also handling the caching and compression issues with a custom library, so we would like to avoid turning wildcard mapping off unless absolutely necessary. Below is one of the errors from the application, to give you an example of what we are seeing. -------------------------------------------------------------------------------- System.Web.HttpException -------------------------------------------------------------------------------- Time Stamp - 03 Mar 2010, 08:11:44 Path - N/A, Internal Server Operation Message - The controller for path '/C$' could not be found or it does not implement IController. Target Site - System.Web.Mvc.IController GetControllerInstance(System.Type) Stack Trace - at System.Web.Mvc.DefaultControllerFactory.GetControllerInstance(Type controllerType) at System.Web.Mvc.DefaultControllerFactory.CreateController(RequestContext requestContext, String controllerName) at System.Web.Mvc.MvcHandler.ProcessRequest(HttpContextBase httpContext) at System.Web.Mvc.MvcHandler.ProcessRequest(HttpContext httpContext) at System.Web.Mvc.MvcHandler.System.Web.IHttpHandler.ProcessRequest(HttpContext httpContext) at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) --------------------------------------------------------------------------------

    Read the article

  • Compressing a hex string in Ruby/Rails

    - by PreciousBodilyFluids
    I'm using MongoDB as a backend for a Rails app I'm building. Mongo, by default, generates 24-character hexadecimal ids for its records to make sharding easier, so my URLs wind up looking like: example.com/companies/4b3fc1400de0690bf2000001/employees/4b3ea6e30de0691552000001 Which is not very pretty. I'd like to stick to the Rails URL conventions, but also leave these ids as they are in the database. I think a happy compromise would be to compress these hex ids into shorter strings using more characters, so they'd look something like: example.com/companies/3ewqkvr5nj/employees/9srbsjlb2r Then in my controller I'd reverse the compression, get the original hex id and use that to look up the record. My question is: what's the best way to convert these ids back and forth? (See the sketch after this entry.) I'd of course want them to be as short as possible, but also URL-safe and simple to convert. Thanks!

    Read the article
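
    The back-and-forth conversion is a change of numeric base: treat the 24 hex characters as one big integer and re-express it in a larger URL-safe alphabet. A sketch in Python for illustration (the app above is Ruby, but the same divmod loop ports directly):

        import string

        ALPHABET = string.digits + string.ascii_lowercase + string.ascii_uppercase  # base 62

        def shorten(hex_id):
            """24-character hex id -> shorter base-62 string."""
            n = int(hex_id, 16)
            out = []
            while n:
                n, r = divmod(n, 62)
                out.append(ALPHABET[r])
            return "".join(reversed(out)) or ALPHABET[0]

        def expand(short_id):
            """Base-62 string -> the original 24-character hex id (zero-padded)."""
            n = 0
            for ch in short_id:
                n = n * 62 + ALPHABET.index(ch)
            return format(n, "024x")

        # expand(shorten("4b3fc1400de0690bf2000001")) == "4b3fc1400de0690bf2000001"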

  • Serving GZipped files from s3 using the Asset Pipeline

    - by kmurph79
    I have a Rails 3.2.3 app on Heroku and I'm using the asset_sync gem to serve my assets from s3, via these instructions. It works great, except s3 is not serving up the gzipped css/js files (just the uncompressed version). I've enabled gzip compression, to no avail: config.gzip_compression = true According to Using GZIP with html pages served from Amazon S3 I need to add meta-data to the s3 object for uploading. How would I do this in concert with the Asset Pipeline? Thank you for any help.

    Read the article

  • getAssetFileDescriptor from ZipResourceFile merges all mp3s in MediaPlayer [SOLVED]

    - by Jordi
    I have a program with an expansion file that stores 4 mp3s in an obb file (a zip without compression). I can retrieve the data, but instead of getting the audio file I asked for, it merges ALL the audio files into the same AssetFileDescriptor. ---SOLVED--- with the fixes below. Support class: public AssetFileDescriptor getaudio(){ ZipResourceFile expansionFile = APKExpansionSupport.getAPKExpansionZipFile(c,21,21); AssetFileDescriptor afd=null; if(take==1) { afd = expansionFile.getAssetFileDescriptor("file01.mp3"); }else if(take==2) { afd = expansionFile.getAssetFileDescriptor("file02.mp3"); } //more else if ............ return afd; } In the MediaPlayer class: AssetFileDescriptor fd = Llistat.getInstance().getAudio(); mPlayer.setDataSource(fd.getFileDescriptor(), fd.getStartOffset(),fd.getLength()); mPlayer.prepare(); fd.close(); My problem was that I was directly returning and using a FileDescriptor, when I needed the AssetFileDescriptor so that I could use its StartOffset and Length.

    Read the article

  • Drive-space-hungry NoSQL databases

    - by forum_inquisitor
    I've tested NoSQL databases like CouchDB, MongoDB and Cassandra and observed a tendency to absorb a very large amount of drive space relative to the inserted key-value pairs. When comparing CouchDB against a schemaless setup in MySQL, CouchDB consumes much more drive space than MySQL. I know that key-value DBs version documents by default, use long UUIDs, and need key optimisation - the comparison was between about 15 million rows in MySQL and 1-5 million documents in the listed NoSQL DBs. My question is: is there any NoSQL database with good compaction / compression of data, so that I can have a NoSQL database with a size closer to 5 GB than 50 GB?

    Read the article

  • Making a "Babbelbox" you can speak to at parties

    - by Spidfire
    I've got a project to make for a party; it's called a "Babbelbox" in Holland. It's a computer with a webcam and microphone that can be used to make a kind of video log of everyone who wants to say something about the party. But the problem is that I don't know where to start. I've made a kind of video system in C, but I can't save the data in a good format that won't fill my hard disk within an hour. Requirements: record video + audio; recording has to start after pressing a button; good compression of the recorded videos (even better if they can be read by Final Cut Pro or Premiere Pro); a lightweight program would be nice, but I could scale up the computer's power.

    Read the article

  • What's the requests/second standard for scraping websites?

    - by feydr
    This was the closest question to mine and it wasn't really answered very well, imo: http://stackoverflow.com/questions/2022030/web-scraping-etiquette I'm looking for the answer to #1: How many requests/second should you be making when scraping? Right now I pull from a queue of links. Every site that gets scraped has its own thread and sleeps for 1 second between requests. I ask for gzip compression to save bandwidth. Are there standards for this? Surely all the big search engines have some set of guidelines they follow in this regard. (See the sketch after this entry.)

    Read the article
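
    There is no universal requests-per-second figure, but one concrete signal a crawler can honour is a site's own Crawl-delay directive in robots.txt, when it is present. A small sketch using Python's standard library (crawl_delay support assumes Python 3.6 or newer; the names are illustrative):

        import urllib.robotparser

        def polite_delay(site, user_agent="MyScraper", default_delay=1.0):
            """Return the per-request delay (seconds) a site asks for, else a default."""
            rp = urllib.robotparser.RobotFileParser()
            rp.set_url(site.rstrip("/") + "/robots.txt")
            rp.read()
            return rp.crawl_delay(user_agent) or default_delay

        # delay = polite_delay("http://example.com")
        # ...then time.sleep(delay) between consecutive requests to that host.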

  • Expression Encoder - limitations on file dimensions: minimum size of 64 x 64, and each dimension must be a multiple of 4

    - by PortageMonkey
    I receive error messages when attempting to encode files in Expression Encoder when the file width or height is not a multiple of four, or is smaller than 64. I have been able to find very little in the documentation or in web searches on this, and nothing that explains what settings may cause or alleviate these limitations. I assume it has something to do with the underlying data type. Error message: Invalid Width Specified. The value must be an integer between 64 and 4096 and be a multiple of 4. Can anyone provide further details on why this is, or on what settings can be manipulated to change this behavior (e.g. quality, compression, etc.)?

    Read the article

  • Restoring using SyncBack without profiles

    - by Thomas Matthews
    I backed up my internal hard drive (C:) using SyncBack onto an external (USB) hard drive with maximum compression. I then performed a clean install of Windows Vista onto the computer. I forgot to copy the SyncBack logs before the clean install, and now whenever I try to restore a directory, the RAR/ZIP files are copied to the system hard drive instead of their contents being extracted. Also, SyncBack is not traversing the folders during the restore process. How can I tell SyncBack to expand the compressed files? I am running the freeware version of SyncBack, and I have to create new log files (unless SyncBack put them somewhere on the external drive). My alternative is to write a program that traverses the folders on the external drive and extracts files from the RAR/ZIP archives (a sketch of that fallback follows this entry). I am using Windows Vista, Service Pack 2, and the data size prior to backup was about 200 GB. (The backup process took over 72 hours due to "hiccups".)

    Read the article
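
    For the fallback mentioned above, a minimal sketch of the traverse-and-extract program in Python (ZIP archives only; RAR files would need an external extractor, and the paths shown are placeholders):

        import os
        import zipfile

        def restore_archives(backup_root, restore_root):
            """Walk the backup tree and extract every .zip into the matching restore path."""
            for dirpath, _dirnames, filenames in os.walk(backup_root):
                for name in filenames:
                    if not name.lower().endswith(".zip"):
                        continue  # .rar files are skipped; they need an external tool
                    relative = os.path.relpath(dirpath, backup_root)
                    target = os.path.join(restore_root, relative)
                    os.makedirs(target, exist_ok=True)
                    with zipfile.ZipFile(os.path.join(dirpath, name)) as zf:
                        zf.extractall(target)

        # restore_archives("E:/SyncBackBackup", "C:/Restored")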
