Search Results

Search found 4432 results on 178 pages for 'amazon mp3 downloader'.

  • configuring lighttpd for large downloads

    - by ahmedre
    I run a web site that hosts general script pages (PHP, etc.) and MP3 downloads, some of which are fairly large (up to 200 MB). I am running lighttpd on the servers, on Linux (Ubuntu, 64-bit). Everything is fine, but under high load the server becomes inaccessible (or very slow; even SSHing in takes a while), and I am guessing this is due to a huge number of simultaneous MP3 downloads. Consequently, DNS sees the server as down and redirects all the traffic to the other servers; after a while it comes back up and things work again. So what's the best way to fix this? Ideally, I want the server to continue running (and the web pages to always work; downloads don't always have to work). Should I just run two web servers (one for the downloads and one for the PHP pages), or is this perhaps something I can fix in my lighttpd configuration? Here are the relevant snippets from my configuration:

        server.max-worker = 4
        server.max-fds = 2048
        server.max-keep-alive-requests = 4
        server.max-keep-alive-idle = 4
        server.stat-cache-engine = "fam"

        fastcgi.server = ( ".php" => ((
            "bin-path" => "/usr/bin/php-cgi",
            "socket" => "/tmp/php.socket",
            "max-procs" => 1,
            "idle-timeout" => 20,
            "bin-environment" => (
                "PHP_FCGI_CHILDREN" => "64",
                "PHP_FCGI_MAX_REQUESTS" => "1000"
            ),
            "bin-copy-environment" => ( "PATH", "SHELL", "USER" ),
            "broken-scriptfilename" => "enable"
        )) )

        # normal php site
        $HTTP["host"] =~ "bar.com" {
            server.document-root = "/usr/local/www/sites/bar.com/"
            accesslog.filename = "|/usr/sbin/cronolog /var/log/lighttpd/%m/%d/%H/bar.log"
        }

        # download site
        $HTTP["host"] =~ "(download|stream).foo.com" {
            server.document-root = "/home/audio/"
            dir-listing.activate = "enable"
            dir-listing.hide-dotfiles = "enable"
            evasive.max-conns-per-ip = 1
            evasive.silent = "enable"
            # connection.kbytes-per-second = 256
            accesslog.filename = "|/usr/sbin/cronolog /var/log/lighttpd/%m/%d/%H/download.log"
        }
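    A cheaper first step than a second server may be traffic shaping, which lighttpd supports and which the config above already hints at in the commented-out connection.kbytes-per-second line. A hedged, untested sketch (the numbers are guesses to tune):

        # sketch: throttle only the download vhost
        $HTTP["host"] =~ "(download|stream).foo.com" {
            connection.kbytes-per-second = 256   # cap each client
            server.kbytes-per-second = 8192      # cap the vhost as a whole
        }

    Capping downloads keeps the MP3 traffic from saturating the uplink, so the PHP pages and SSH stay responsive; if the load is CPU or disk rather than bandwidth, splitting downloads onto a second lighttpd instance or server is the more robust isolation.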

    Read the article

  • Efficient mirroring of directories using hard links

    - by zoqaeski
    I'm backing up my music collection onto a number of NTFS-formatted external hard drives; however, as I store my main collection in FLAC and keep the library on my laptop as MP3s to save space, I want to be able to back up both sets, because mass conversion between formats is time-consuming. The "music" directory can contain any format; the "mp3s" directory contains only MP3s converted from files in the "music" directory. The music collection on the laptop contains only MP3s, but they come from both sources. When I back up my laptop's library to the "mp3s" directory, I want to copy across only the MP3 files that don't exist in the "music" directory; those that do should be hard-linked to the "music" directory. All directories have an identical hierarchy, sorted by artist, album, date, disc number if applicable, etc., and I use a tagging editor to ensure consistency across all these locations. I'm also using a Linux computer, but keeping the music collections on NTFS-formatted partitions so that they are readable by both Linux and Windows. At the moment I use the following command to perform the backups, but it is time-consuming due to the expensive nature of finding hard links:

        rsync -avu --progress --relative --ignore-existing --link-dest=../music/ **/*.mp3 /media/ntfspocket/mp3s

    Is there a way to perform this backup more efficiently, taking advantage of the directory hierarchy?
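    One way to avoid rsync's expensive link discovery, sketched below (untested; the paths are assumptions): because the hierarchies are identical, a plain existence test can decide copy-versus-link per file.

        #!/bin/sh
        # Mirror the laptop's MP3s into mp3s/, hard-linking tracks that already
        # exist in music/. Hard links require both directories to be on the same
        # NTFS volume, and an ntfs-3g build that supports hard links.
        SRC="$HOME/Music"                 # laptop MP3 library (assumed)
        MUSIC="/media/ntfspocket/music"   # master collection
        DEST="/media/ntfspocket/mp3s"     # MP3-only mirror

        cd "$SRC" || exit 1
        find . -name '*.mp3' | while read -r f; do
            mkdir -p "$DEST/$(dirname "$f")"
            if [ -e "$MUSIC/$f" ]; then
                ln -f "$MUSIC/$f" "$DEST/$f"   # same relative path: link, don't copy
            else
                cp -u "$f" "$DEST/$f"          # laptop-only track: copy it
            fi
        done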

    Read the article

  • Double-byte characters in querystring using PHP

    - by Jeffrey Berthiaume
    I'm trying to figure out how to create personalized URLs for double-byte languages. For example, this URL from Amazon Japan has Japanese characters in its path: http://www.amazon.co.jp/????????-DVD-???/dp/B00005R5J3/ref=sr_1_3?ie=UTF8&s=dvd&qid=1269891925&sr=8-3 What I would like to do is have http://www.mysite.com/???????? or even http://www.mysite.com/index.php?name=???????? be able to properly decode the $_GET['name'] string. I think I have tried all of the urldecode and utf8_decode possibilities, but I just get gibberish in response. This all works fine in a form $_POST, but I need these URLs to be emailable...
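    For what it's worth, PHP percent-decodes $_GET automatically, so the bytes in $_GET['name'] should already be raw UTF-8; decoding them again (urldecode, utf8_decode) is a common source of exactly this gibberish. A minimal sketch, assuming the links are generated as percent-encoded UTF-8:

        <?php
        // Declare the output encoding so the browser interprets the bytes as UTF-8.
        header('Content-Type: text/html; charset=utf-8');

        $name = $_GET['name'];  // already urldecoded by PHP; raw UTF-8 bytes
        echo htmlspecialchars($name, ENT_QUOTES, 'UTF-8');

        // When building an emailable URL, percent-encode the UTF-8 value:
        $url = 'http://www.mysite.com/index.php?name=' . rawurlencode($name);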

    Read the article

  • using Java interfaces

    - by mike_hornbeck
    I need to create an interface, MultiLingual, that allows an object's data to be displayed in different languages (not the data itself, but the captions such as "Author", "Title", etc.). The printed data looks like this:

        3 grudnia 1998
        10th of June 1924
        Autor: Tolkien
        Tytul: LoTR
        Wydawnictwo: Amazon 2010
        Author: Mitch Albom
        Title: Tuesdays with Morrie
        Publishing House: Time Warner Books 2003
        37 360,45 PLN
        5,850.70 GBP

        3rd of December 1998
        10th of June 1924
        Author: Tolkien
        Title: LoTR
        Publishing House: Amazon 2010
        Author: Mitch Albom
        Title: Tuesdays with Morrie
        Publishing House: Time Warner Books 2003
        37,360.45 GBP
        5,850.70 GBP

    The test code looks like this:

        public class Main {
            public static void main(String[] args) {
                MultiLingual gatecrasher[] = {
                    new Data(3, 12, 1998),
                    new Data(10, 6, 1924, MultiLingual.ENG),
                    new Book("LoTR", "Tolkien", "Amazon", 2010),
                    new Book("Tuesdays with Morrie", "Mitch Albom", "Time Warner Books", 2003, MultiLingual.ENG),
                    new Money(1232895 / 33.0, MultiLingual.PL),
                    new Money(134566 / 23.0, MultiLingual.ENG),
                };
                for (int i = 0; i < gatecrasher.length; i++)
                    System.out.println(gatecrasher[i] + "\n");
                for (int i = 0; i < gatecrasher.length; i++)
                    System.out.println(gatecrasher[i].get(MultiLingual.ENG) + "\n");
            }
        }

    So I need to introduce the constants ENG and PL in the MultiLingual interface, as well as the method get(int language):

        public interface MultiLingual {
            int ENG = 0;
            int PL = 1;
            String get(int lang);
        }

    Then I have the class Book. The problem starts with the constructors. One of them needs to take MultiLingual.ENG as an argument, but how do I achieve that? Is this the proper way?

        class Book implements MultiLingual {
            private String title;
            private String publisher;
            private String author;

            public Book(String t, String a, String p, int y, MultiLingual lang) {
            }

    Or should I just treat MultiLingual.ENG as an int variable that automatically maps onto the constants in the interface? The second constructor for Book doesn't take MultiLingual as an argument, but the following implementation is somehow wrong:

        public Book(String t, String a, String p, int y) {
            Book someBook = new Book(t, a, p, y, MultiLingual m);
        }

    I could just pass an int m in place of MultiLingual m, but then I would have no control over whether the language is set to PL or ENG. And finally the get() method for Book, but I think at least this should be working fine:

        public String get(int lang) {
            String data;
            if (lang == ENG) {
                data = "Author: " + this.author + "\n" +
                       "Title: " + this.title + "\n" +
                       "Publisher: " + this.publisher + "\n";
            } else {
                data = "Autor: " + this.author + "\n" +
                       "Tytul: " + this.title + "\n" +
                       "Wydawca: " + this.publisher + "\n";
            }
            return data;
        }

        @Override
        public String toString() {
            return "";
        }
        }
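    One possible fix, sketched below (untested): since ENG and PL are plain int constants on the interface, the language parameter can simply be an int, and the no-language constructor can delegate via this(...) instead of creating a throwaway Book. Defaulting to PL matches the first Book in the expected output above.

        // Sketch: the language is an int constant from the interface.
        class Book implements MultiLingual {
            private final String title, author, publisher;
            private final int year;
            private final int lang;  // MultiLingual.ENG or MultiLingual.PL

            public Book(String t, String a, String p, int y, int lang) {
                this.title = t;
                this.author = a;
                this.publisher = p;
                this.year = y;
                this.lang = lang;
            }

            // The "no language" constructor chains to the main one with a default.
            public Book(String t, String a, String p, int y) {
                this(t, a, p, y, MultiLingual.PL);
            }

            public String get(int lang) {
                if (lang == ENG) {
                    return "Author: " + author + "\n"
                         + "Title: " + title + "\n"
                         + "Publishing House: " + publisher + " " + year + "\n";
                }
                return "Autor: " + author + "\n"
                     + "Tytul: " + title + "\n"
                     + "Wydawnictwo: " + publisher + " " + year + "\n";
            }

            @Override
            public String toString() {
                return get(lang);  // print in the language chosen at construction
            }
        }

    Calling new Book("LoTR", "Tolkien", "Amazon", 2010) then prints the Polish captions, while get(MultiLingual.ENG) on the same object prints the English ones, matching the two loops in Main.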

    Read the article

  • How can I use Spring Security without sessions?

    - by Jarrod
    I am building a web application with Spring Security that will live on Amazon EC2 and use Amazon's Elastic Load Balancers. Unfortunately, ELB does not support sticky sessions, so I need to ensure my application works properly without sessions. So far, I have set up RememberMeServices to assign a token via a cookie, and this works fine, but I want the cookie to expire with the browser session (e.g. when the browser closes). I have to imagine I'm not the first one to want to use Spring Security without sessions... any suggestions?
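    A minimal sketch of the namespace configuration, assuming Spring Security 3.x: create-session="never" tells the filter chain not to create an HttpSession itself (it will still use one if something else created it), which pairs with cookie-based remember-me tokens like yours.

        <!-- sketch only; the bean name is an assumption -->
        <http create-session="never" auto-config="true">
            <intercept-url pattern="/**" access="ROLE_USER"/>
            <remember-me services-ref="rememberMeServices"/>
        </http>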

    Read the article

  • Good book(s) for MMORPG design & implementation?

    - by mawg
    I am a long-time professional C/C++ programmer (mostly embedded systems) and a hobbyist Windows and PHP hacker. Can anyone recommend a book (or books) specifically aimed at designing and, hopefully, implementing an MMORPG? I don't need general how-to-design or how-to-code books. Maybe a really good generic games book, but I am not interested in first-person shooters; I want to know what it takes to implement an MMORPG. Good books, maybe also good URLs. Thanks. Just searching eBay and Amazon turned up a whole slew of books; Amazon's customer reviews give me an idea of how good they are, and the overviews tell me what areas they cover.

    Read the article

  • Creating a browsed history menu

    - by pundit
    Hi guys, I'm sure many of you have visited amazon.com. When you do, Amazon creates a list of browsed menu items at the very bottom of the home page. I am currently doing a project that applies personalisation and customisation and wanted to implement something similar. My prototype is based on an institution, so I want to display a list of, say, the last 5 viewed programmes or courses on the home page. I am using PHP, and so far I have thought of using $_SERVER["HTTP_REFERER"], but this returns only the last URL, which is not what I want. Does anyone have any suggestions to help me with this? Thanks.
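    Since the pages are your own, a session is the usual substitute for HTTP_REFERER: record the ID on every programme/course page, then read the list back on the home page. A rough sketch (the function name, IDs and course.php URL are made up for illustration):

        <?php
        // Sketch only: keep the last 5 viewed course/programme IDs in the session.
        session_start();

        function rememberView($courseId) {
            if (!isset($_SESSION['history'])) {
                $_SESSION['history'] = array();
            }
            // Drop any earlier occurrence, then push the newest view to the front.
            $_SESSION['history'] = array_diff($_SESSION['history'], array($courseId));
            array_unshift($_SESSION['history'], $courseId);
            $_SESSION['history'] = array_slice($_SESSION['history'], 0, 5);
        }

        // On each course page:
        rememberView($_GET['id']);

        // On the home page:
        $history = isset($_SESSION['history']) ? $_SESSION['history'] : array();
        foreach ($history as $id) {
            echo '<a href="course.php?id=' . urlencode($id) . '">'
               . htmlspecialchars($id) . '</a><br/>';
        }

    A cookie holding the same list works too if the history should survive the browser session.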

    Read the article

  • Can't find Peopleware anywhere?

    - by ooo
    Many folks say that Peopleware is one of the best books for software professionals and managers, and I see a lot of people recommending it in the "have to read" list. The strange thing is that I can't find a bookstore anywhere that actually has it. I found it on Amazon, but Borders, Barnes & Noble, etc. don't have it and keep telling me it's out of print. Can anyone shed some light on what's going on here? Amazon doesn't stock it directly; it says it's available from a few third-party sellers, but I tried two of them and both eventually refunded me and cancelled the order after stalling for a month.

    Read the article

  • Online payment service recommendation?

    - by Shadowman
    We're currently in the process of looking for an online payment service that will allow us to accept credit cards, etc. However, our business model also involves revenue sharing, in a model similar to that of iTunes: content creators will be able to sell content through our site and we take a small percentage of the revenue. Can anyone recommend an online payment service that supports this model? We're also interested in:

    - accepting all major credit cards
    - being able to do international transactions in the appropriate local currency
    - recurring transactions (monthly, yearly, etc.)

    Additionally, if the service provided a Java API for integration, or the ability to broker PayPal transactions, that would be an added bonus. I know Amazon provides a hosted payment service, but I'd prefer not to require all of our customers to have an Amazon account; that's an additional barrier to entry that we'd prefer to avoid. I'd appreciate any recommendations you can provide!

    Read the article

  • Can you detect if an Excel find-and-replace is active during Worksheet_Change()?

    - by John Griffiths
    Hi, I've just crashed Excel using an Amazon "spreadsheet to update feed". When doing a find and replace (Replace All) with two cells selected, after the first replacement the Worksheet_Change() handler finished with the whole spreadsheet selected. This meant that the subsequent replacements took place outside of the original area. Unfortunately the replacement text included the find text, and each replacement re-selected the entire area, so Excel ran until it ran out of space and then crashed. Pressing Ctrl+Break brings up the VBA dialog (Stop/Continue/Debug). Debug is greyed out, as Amazon had protected the sheet. Stop would stop one run, but it would then continue to crash. Continue would switch back to the current change and continue to crash. Is there any way to detect whether a find-and-replace operation is in progress while executing Excel VBA? Regards, John
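    As far as I know, Excel exposes no flag saying "a Replace All is in progress", but the usual defence against this kind of cascade is to turn events off for the duration of the handler, so each replacement cannot re-trigger Worksheet_Change. A sketch (untested, and the sheet protection may complicate installing it):

        ' Sketch: disable events while the handler runs so a Replace All
        ' cascade cannot re-enter Worksheet_Change for every cell it touches.
        Private Sub Worksheet_Change(ByVal Target As Range)
            If Application.EnableEvents = False Then Exit Sub
            Application.EnableEvents = False
            On Error GoTo Cleanup
            ' ... react to the change here ...
        Cleanup:
            Application.EnableEvents = True
        End Sub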

    Read the article

  • Why call an iframe from JavaScript

    - by sammville
    I want to know why some ad or embed codes don't directly give you iframe code to embed on your site; instead they give you a JavaScript snippet that links to another JavaScript file on their server, and that file writes out the iframe which serves the content. Why is this done, and what are the benefits of this method? Example: this is the code issued by Amazon:

        <script type="text/javascript" src="http://www.assoc-amazon.co.uk/s/ads.js"></script>

    which loads another JavaScript file that creates the iframe.
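    The indirection means the server, not your page, decides what the iframe looks like: the network can change the creative, its size, or its target URL for every publisher at once without anyone re-pasting embed code, and the script can gather page context before building the frame. A sketch of roughly what such a loader does (the URL and sizes here are made up):

        // Because this runs while the page is still parsing, document.write
        // inserts the iframe exactly where the <script> tag sits.
        var src = 'http://www.assoc-amazon.co.uk/e/ir?t=EXAMPLE';  // hypothetical
        document.write('<iframe src="' + src + '" width="120" height="240"'
            + ' frameborder="0" scrolling="no"></iframe>');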

    Read the article

  • Should I create separate Work and Personal Github accounts?

    - by Almost Surely
    I'm fairly new to programming, and I've been working on many personal projects, which I'm concerned can come across as silly or unprofessional. The kind of projects I have are a Reddit image downloader and a tool for GMs to use in role-playing games. I want to start building up a GitHub presence for projects in my chosen field of data analytics, but I'm not sure how to organize projects on my GitHub account. Should I create a "professional" GitHub account, mainly containing different analytical scripts, and have a separate "personal" account for fun little projects of mine? Or am I just overthinking this, and should I just maintain one account?

    Read the article

  • What GUIs are there for Axel or for other such downloaders that use multiple connections?

    - by cipricus
    In order to enjoy my maximum download speed, I use and like Axel very much, but from time to time I download multiple files, and having so many windows open has its disadvantages. I use Axel with FlashGot in Firefox (SeaMonkey, etc.), but I would like to add a GUI for it, and possibly have multiple downloads in a nice list as in any civil downloader. I am not aware of a GUI for Axel that works. Axel-kapt crashes (a question on how to use it properly in Ubuntu got only one somewhat dismissive answer, by yours truly...). Gaxel just opens a window with empty fields that I have to fill in manually, which defeats the purpose. I would like to know how to install something like Gwget, which is described in an old answer as an alternative (though Gwget itself might be too old too). Help!

    Read the article

  • Is there any way to add a new location to the list of places where nltk looks for the wordnet corpus?

    - by Programming Noob
    I can't use the NLTK WordNet lemmatizer because I can't download the WordNet corpus on my university computer due to access-rights issues. I get the following error when I try to do so:

        **********************************************************************
          Resource 'corpora/wordnet' not found.  Please use the NLTK
          Downloader to obtain the resource:  >>> nltk.download()
          Searched in:
            - '/home/XX/nltk_data'
            - '/usr/share/nltk_data'
            - '/usr/local/share/nltk_data'
            - '/usr/lib/nltk_data'
            - '/usr/local/lib/nltk_data'
        **********************************************************************

    When I had the same issue at home, I could resolve it in two ways: using nltk.download(), the standard way, or creating a new folder at /home/XX/nltk_data and just pasting the corpus directory inside it. Now at the university I only have access to /home/XX/bin and not /home/XX directly. So is there any way I could paste the WordNet corpus into /home/XX/bin and then somehow make NLTK look for the corpus in that folder?
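    Yes: nltk.data.path, the list printed in the "Searched in:" section of that error, is an ordinary Python list, so a sketch like the following should work, assuming you copy the corpus into a folder you can actually write to:

        # Assumes /home/XX/bin/nltk_data contains the corpora/wordnet folder.
        import nltk.data
        nltk.data.path.append('/home/XX/bin/nltk_data')

        # Shell alternative (e.g. in ~/.bashrc): NLTK also honours this variable.
        #   export NLTK_DATA=/home/XX/bin/nltk_data

        from nltk.stem import WordNetLemmatizer
        print(WordNetLemmatizer().lemmatize('churches'))  # -> church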

    Read the article

  • problem connecting to magento connect

    - by amir
    Hi, I'm using Magento 1.4.0, and when I try to get to Magento Connect and download a plugin, the page says: "Error: Please check for sufficient write file permissions. Your Magento folder does not have sufficient write permissions, which this web based downloader requires. If you wish to proceed downloading Magento packages online, please set all Magento folders to have writable permission for the web server user (example: apache) and press the 'Refresh' button to try again." Does anyone know how I can fix this problem? Thanks. Update: the plugin I'm trying to use is MagentoPycho light box, so I unpacked the folder into app/code/local, but it still doesn't show in the admin area.
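    The error is literal: the web server user cannot write into the Magento folders the downloader uses. A hedged shell sketch of the usual fix (the user/group and path are assumptions; Debian/Ubuntu Apache shown, your host may use "apache" instead of "www-data"):

        sudo chown -R www-data:www-data /path/to/magento
        find /path/to/magento -type d -exec chmod 775 {} \;
        find /path/to/magento -type f -exec chmod 664 {} \;

    After that, press "Refresh" in the downloader and retry the install.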

    Read the article

  • Is it possible to prioritize which folders get synced first when using Ubuntu One?

    - by Philippe
    I face the problem that U1 syncs my files in a given order, and I'd like to change that order. Consider this: on a weekend I work, and I may also copy the contents of my photo SD card onto my notebook. The next time I boot my work computer, I might be sitting there waiting for hours until U1 has synced/downloaded all the photos to my workstation, while the files I need for work are the last in the '--waiting' list. I don't mind if Ubuntu One is a slow downloader; I would just be happy if I could specify that all files in a certain folder (and all of its subfolders) always need to be downloaded first. I'm aware that there was once the possibility to move some files to the beginning of the sync list, but that was a very clumsy method involving providing the folder ID etc., and in the current version of U1 I can't even find it any more. Any suggestions on how to always prioritize the same folder?

    Read the article

  • Maximum file size for iFrame in IE7

    - by Peter Turner
    I've got a "super secure" javascript downloader* that I wrote, and it usually works alright. But I noticed, while trying to download a 90 meg file with it on a client's machine that on IE7, it's getting hung up about 1/3rd of the way through. I've never tried to send a file that large through the iFrame and it works fine in other browsers. Is there a size restriction on files that IE7 can read in an iFrame? * It's really just a PHP line that sets header("location: http://someplace/downloadbigthing.exe"); after it does some logging and verification.

    Read the article

  • Software to automatically download Youtube videos

    - by Joren
    I'm looking for (free) Ubuntu YouTube software that can perform two tasks (could be separate programs):

    - display an on-screen notification (like Transmission does when a download has finished) when a new video has been uploaded to your personal subscription box
    - download videos in maximum quality (preferably automatically, once a new video has been uploaded from specific channels/series)

    What I've found so far:

    - All Video Downloader: only downloads manually; can't select quality
    - MiniTube: doesn't associate with your personal account; doesn't notify when a new video has been uploaded from your subscriptions; annoying GUI; quite buggy

    If this software doesn't exist yet, I'll try to make it myself.
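    Until then, a cron-driven sketch covering both tasks with youtube-dl and notify-send (untested; the channel URL is made up, and --download-archive assumes a reasonably recent youtube-dl):

        #!/bin/sh
        # Poll a channel, fetch anything new at best quality, and pop a
        # desktop notification if the download count grew.
        URL="http://www.youtube.com/user/SomeChannel"   # hypothetical channel
        DIR="$HOME/Videos/youtube"
        mkdir -p "$DIR"
        cd "$DIR" || exit 1

        before=$(ls | wc -l)
        youtube-dl -f best --download-archive archive.txt \
            -o '%(title)s.%(ext)s' "$URL"
        after=$(ls | wc -l)
        [ "$after" -gt "$before" ] && notify-send "YouTube" "New videos downloaded"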

    Read the article

  • How to download streaming video from a Flash player (Arte)?

    - by wim
    There are a couple of concerts that I would very much like to view from Arte (I think it's a French TV channel?), but my connection is not good enough to stream the video. How can I download the file to play it locally? Here is an example link: http://www.my-jazzlive.tv/?p=1862 I have tried popular browser plugins such as DownloadHelper and Flash Video Downloader; these work fine for me on sites such as YouTube, but they don't seem to recognise any stream from the Arte video player. I also looked through /tmp for something that looks like a partially downloaded video, but no luck.

    Read the article

  • I am unable to use the Wubi installer; I get the message "ERROR TaskList: Cannot download the metalink and therefore the ISO"

    - by pat
    I used Wubi a few months ago on both XP and Windows 7 systems with no problem, but I have been unable to install on either for the last two weeks. From the log:

        09-05 11:36 DEBUG CommonBackend: Could not find any ISO or CD, downloading one now
        09-05 11:36 DEBUG TaskList: New task get_metalink
        09-05 11:36 DEBUG TaskList: ### Running get_metalink...
        09-05 11:36 DEBUG downloader: downloading http://cdimage.ubuntu.com/xubuntu/releases/12.04/release/xubuntu-12.04-desktop-amd64.metalink > C:\ubuntu\install
        09-05 11:36 ERROR CommonBackend: Cannot download metalink file http://cdimage.ubuntu.com/xubuntu/releases/12.04/release/xubuntu-12.04-desktop-amd64.metalink err=[Errno 14] HTTP Error 404: Not Found

    Read the article

  • How does the Cloud compare to Colocation? And development too

    - by David
    Currently I/we run a SaaS web application where each subscriber has their own physical instance of the application, in addition to their own database. The setup has each web application instance deployed on two different IIS boxes, both for load balancing and redundancy (the machines have their Windows Update install times 12 hours apart, for example). Databases are mirrored on two different SQL Server 2012 machines with AlwaysOn for uptime. I don't make use of SQL Server clustering, as it doesn't provide storage-level failover: we don't have a shared storage box. Because it's a Windows setup there are also two Domain Controllers (we cheat: they're both Mac Minis, 17 W each, which keeps our colo power costs low). Finally, there's an Exchange server (Mailbox, Hub Transport and Client Access); one of the SQL Servers also doubles up as an Exchange Hub Transport.

    Running costs are about $700 a month for our quarter-rack colocation (which includes power and peering/transfer), plus about $150 a month for SPLA licensing, so $850 a month in total. Then there's the hard-to-quantify cost of administration, but I reckon I spend a couple of hours a week checking in on the servers: reviewing event logs, etc.

    I keep getting bombarded by ads and manufactured news stories about how great "the cloud" is. Back in 2008, when the cloud was taking off, I was reading up on the proper "cloud" services like Google App Engine, where you write in Python against Google's API and that's how they scale your application across servers, using their database provider for scaling storage. Simple enough to understand. Then along came Amazon, and I understand how Amazon storage works, but I'm not sure how Amazon compute works: web application pages don't take much CPU time to compute, so how do you even quantify usage anyway?

    Finally, Rackspace gets in on the act, and now I'm really confused. Rackspace advertises "Cloud" SQL Server 2012 for about "$0.70 per hour". Going by how they advertise it, I thought the "hour" meant the sum of CPU time, IO blocking time, and maybe time spent transferring data, so for a low-intensity application that works out pretty cheap, then? Nope. I went to a sales chat window and spoke to one of their advisors, who told me the $0.70/hour was actually for every hour the SQL Server is running... but who wants a SQL Server for only a few hours? You're going to need it available 24 hours a day for months on end. $0.70 * 24 * 31 works out at $520 a month, which is ridiculously expensive for SQL Server; an SPLA license for SQL Server is only $50 a month or so. That $520 a month does not include "fanatical support", and you also need to stack the cost of the host Windows server instance on top.

    From what I can tell, Rackspace's "Cloud" products seem like a cynical rebranding of an overpriced VPS service, but priced by the hour. I have the same confusion about Windows Azure, which uses similar terms to describe its products, but I think that's because Azure offers both traditional shared web hosting in addition to its own APIs you can target for scalable applications.

    Read the article
