Search Results

Search found 20016 results on 801 pages for 'download manager'.

Page 56 of 801.

  • Download all images or create a zip file of all the uploads contained in a gallery

    - by Arpit Vaishnav
    I'm working on a photo-sharing site and want to add functionality to download all the images in a gallery. The gallery is set up as an association, so I can get all the images with @gallery.uploads. What I want now is to download all of those files or, if possible, to create a zip file so that the user can download one file containing all of the gallery's uploads. Thanks.
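
    Not part of the original question, just an illustration of the usual approach it describes: write every upload into a single archive on the server, then serve that one file. The sketch below uses Python's zipfile purely for illustration (a Rails app would do the same with a Ruby zip library); the file list and paths are hypothetical stand-ins for whatever @gallery.uploads yields.

        # Minimal sketch: pack a gallery's uploads into one zip, then serve that file.
        # The upload paths are made-up examples, not values from the question.
        import os
        import zipfile

        upload_paths = ["/uploads/img1.jpg", "/uploads/img2.jpg"]   # hypothetical
        archive_path = "/tmp/gallery.zip"

        with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as archive:
            for path in upload_paths:
                # Store each file under its basename so the archive has a flat layout.
                archive.write(path, arcname=os.path.basename(path))

        # The web layer then streams archive_path with a Content-Disposition header
        # so the visitor downloads a single "gallery.zip".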

    Read the article

  • Slow download speeds on MacBook Pro

    - by Austin
    Just as the title says, I am getting very low download speeds on my MacBook Pro. I did a speed test at speedtest.net and am getting 7 Mbps down, 0.5 Mbps up. However, I can only seem to reach 270 KB/s at most (averaging around 100 KB/s), whether on my school's network or on my home network, wired or wireless. I am on Mac OS X 10.5.8, with Google Chrome. My Ethernet settings (under System Preferences - Network - Ethernet Connection - Advanced - Ethernet) are set to "Configure Automatically", "Speed: 100TX", "Duplex: full-duplex, flow-control", and "MTU: Standard (1500)". As far as I can tell, there are no throttles or anything between here and the ISP, so... any ideas on why I'm getting such low download speeds?

    Read the article

  • A decent S3 bucket manager for Ubuntu

    - by Luke
    I'm looking for a decent S3 bucket manager for Ubuntu (GNOME). I'd prefer it to integrate with Nautilus so it looks like just another drive (à la WebDAV), but so far I haven't been able to find anything that I'd like to use on a daily basis. What bucket managers do you use on Ubuntu, or what bucket manager would you recommend? UPDATE: s3fs seems to be what I really want, since it lets me integrate my buckets directly into my file system. However, having tried s3fs, I don't get the impression that it's ready for prime time. I'm stunned that there are no decent bucket managers out there for Ubuntu/GNOME; I guess I'll have to build one myself...

    Read the article

  • Job queue manager with RPC interface

    - by admr
    I need a job queue manager that I can control over the Internet. It should be able to execute and stop processes, check on their status (ideally notice when a process exits and execute some code), respond to commands, and also be able to report back to a server.

    Background: I have a GWT application that lets users create jobs to execute on a cloud instance (currently EC2). I want to push a "job packet" (data for a process to operate on, etc.) to S3, start a Linux EC2 instance (or use one that's already running), and tell a job manager on the instance to execute that job (possibly in parallel with other jobs). The manager should then pull the "job packet" from S3, run a process that operates on that data, and report back to the server running the server part of my GWT application with some information (e.g. exit code, stdout, stderr). If I have to write stdout/stderr to a file from the process and read that file, that's OK too.

    I would really like the manager to be "close" to the processes it runs, meaning I want to avoid using something like Runtime.exec from the JDK; it seems I would have to do that if I used Quartz, for example. I'm fine with the calls in both directions being asynchronous, and with any reasonable technology for the calls as long as I can easily build an interface for it on my GWT server side (e.g. HTTP requests to a servlet over SSL would be nice and trivial).

    The job manager does not need a very sophisticated queueing system; running several processes either sequentially or in parallel would be fine. Determining how much compute time a process received during its lifetime would be nice (AFAIK, this might be challenging). I did not yet find any existing software that does this, including http://java-source.net/open-source/job-schedulers. I suspect I might have to build an RPC interface (with authentication etc., of course) around a job manager, maybe using something like Apache Commons Exec; in that case I would prefer Java or Python for the job manager part. I would be happy to hear suggestions for either scenario!
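
    The question does not name an implementation, but the loop it describes (pull a job, run it as a child process, report exit code/stdout/stderr back over HTTP) is small enough to sketch. Everything below is illustrative: the endpoint URL, JSON field names, and the hard-coded demo job are assumptions, and the S3 pull is left out.

        # Sketch of the job-manager loop described above (hypothetical endpoint and
        # field names; fetching the "job packet" from S3 is omitted for brevity).
        import json
        import subprocess
        import urllib.request

        SERVER = "https://example.com/jobs"   # assumed server-side servlet of the GWT app

        def run_job(job):
            """Run one job as a child process and capture its outcome."""
            proc = subprocess.run(job["command"], capture_output=True, text=True)
            return {
                "job_id": job["id"],
                "exit_code": proc.returncode,
                "stdout": proc.stdout,
                "stderr": proc.stderr,
            }

        def report(result):
            """POST the result back to the server."""
            req = urllib.request.Request(
                SERVER + "/report",
                data=json.dumps(result).encode("utf-8"),
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(req)

        if __name__ == "__main__":
            # A real manager would poll S3 or an RPC endpoint for work; a hard-coded
            # job stands in for that here.
            job = {"id": "demo", "command": ["echo", "hello from the job manager"]}
            report(run_job(job))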

    Read the article

  • Can't download YouTube video

    - by dsaccount1
    I'm having trouble retrieving the YouTube video automatically; here's the code. The problem is the last part: download = urllib.request.urlopen(download_url).read()

        # Youtube video download script
        # 10n1z3d[at]w[dot]cn

        import urllib.request
        import sys

        print("\n--------------------------")
        print(" Youtube Video Downloader")
        print("--------------------------\n")

        try:
            video_url = sys.argv[1]
        except:
            video_url = input('[+] Enter video URL: ')

        print("[+] Connecting...")

        try:
            if video_url.endswith('&feature=related'):
                video_id = video_url.split('www.youtube.com/watch?v=')[1].split('&feature=related')[0]
            elif video_url.endswith('&feature=dir'):
                video_id = video_url.split('www.youtube.com/watch?v=')[1].split('&feature=dir')[0]
            elif video_url.endswith('&feature=fvst'):
                video_id = video_url.split('www.youtube.com/watch?v=')[1].split('&feature=fvst')[0]
            elif video_url.endswith('&feature=channel_page'):
                video_id = video_url.split('www.youtube.com/watch?v=')[1].split('&feature=channel_page')[0]
            else:
                video_id = video_url.split('www.youtube.com/watch?v=')[1]
        except:
            print("[-] Invalid URL.")
            exit(1)

        print("[+] Parsing token...")

        try:
            url = str(urllib.request.urlopen('http://www.youtube.com/get_video_info?&video_id=' + video_id).read())
            token_value = url.split('video_id=' + video_id + '&token=')[1].split('&thumbnail_url')[0]
            download_url = "http://www.youtube.com/get_video?video_id=" + video_id + "&t=" + token_value + "&fmt=18"
        except:
            url = str(urllib.request.urlopen('www.youtube.com/watch?v=' + video_id))
            exit(1)

        v_url = str(urllib.request.urlopen('http://' + video_url).read())
        video_title = v_url.split('"rv.2.title": "')[1].split('", "rv.4.rating"')[0]
        if '&quot;' in video_title:
            video_title = video_title.replace('&quot;', '"')
        elif '&amp;' in video_title:
            video_title = video_title.replace('&amp;', '&')

        print("[+] Downloading " + '"' + video_title + '"...')

        try:
            print(download_url)
            file = open(video_title + '.mp4', 'wb')
            download = urllib.request.urlopen(download_url).read()
            print(download)
            for line in download:
                file.write(line)
            file.close()
        except:
            print("[-] Error downloading. Quitting.")
            exit(1)

        print("\n[+] Done. The video is saved to the current working directory (cwd).\n")
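
    A hedged guess at why that last block fails, based only on the code above: in Python 3, urlopen(...).read() returns a bytes object, so iterating over it yields integers and file.write(line) raises a TypeError; reading the whole video into memory first is also fragile. A minimal sketch of the final block rewritten to stream the response instead (it reuses download_url and video_title from the script):

        # Stream the response in chunks and write bytes directly, instead of
        # iterating over a bytes object (which yields ints in Python 3).
        import shutil
        import urllib.request

        with urllib.request.urlopen(download_url) as response, \
                open(video_title + '.mp4', 'wb') as out_file:
            shutil.copyfileobj(response, out_file)   # copies in fixed-size chunks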

    Read the article

  • Recover open but deleted file on Linux using ln instead of cp

    - by Yang
    Say I have a file that's downloading (from a source that's hard to re-download from), but accidentally deleted from the filesystem namespace (/tmp/blah), and I'd like to recover this file. Normally I could just cp /proc/$PID/fd/$FD /tmp/blah, but in this case that would only get me a partial snapshot, since the file is still downloading. Furthermore, once the download completes, the downloading process (e.g. Chrome) will close the FD. Any way to recover by inode/create a hard link? Any other solutions? If it makes any difference, I'm mainly concerned with ext4. Thanks in advance.
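
    Not an answer from the original thread, but one workaround can be sketched under stated assumptions: you know the downloader's $PID and $FD, the kernel still exposes /proc/$PID/fd/$FD, and opening that path gives you your own descriptor to the same inode, which keeps the data reachable even after the writer closes. The sketch keeps copying while the file grows and drains the remainder once the writer's descriptor disappears.

        # Sketch: follow a still-growing deleted file through /proc and copy it out.
        # Assumes the PID/FD are known and that opening /proc/<pid>/fd/<fd> is
        # permitted (same user or root).
        import os
        import sys
        import time

        pid, fd, dest = sys.argv[1], sys.argv[2], sys.argv[3]   # e.g. 1234 53 /tmp/blah.recovered
        proc_path = "/proc/%s/fd/%s" % (pid, fd)

        src = open(proc_path, "rb")            # our own descriptor keeps the inode alive
        with open(dest, "wb") as out:
            while True:
                chunk = src.read(1 << 20)
                if chunk:
                    out.write(chunk)
                    continue
                if not os.path.exists(proc_path):
                    out.write(src.read())      # writer closed: drain what's left and stop
                    break
                time.sleep(1)                  # still downloading: wait for more data
        src.close()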

    Read the article

  • I can't find Terminal Services Manager

    - by nickfranceschina
    I am obviously stupid... because I'm logged into this Windows 2003 Server box as administrator, yet when I go under Administrative Tools I can't find "Terminal Services Manager". Terminal Services is installed (I'm using it right now), and I know that's where the manager is supposed to be, yet it isn't there. What am I missing? Why isn't it showing? Where else can I find it?

    Read the article

  • Looking for a web based, embeddable file manager

    - by Kristi H.
    I'm looking for a web based, embeddable file manager. I haven't found anything suitable through Google or the Stack Overflow archives. Does anyone know of a file manager that meets the following criteria?

    The file manager must…
      …be embeddable (e.g. Flash or a Java applet)
      …run from my server (no storing uploads in a remote location)
      …be licensed for business use (fees are okay)
      …let me disable uploads. Users shouldn't be able to upload files, only download them
      …allow me to tag files and/or place them in folders
      …let me disable authentication or handle it from the backend. My app already has a login system and I don't want to make users log in twice
      …allow users to select and download multiple files
      …have an attractive interface

    The file manager should…
      …have an external config file
      …display thumbnail previews for files
      …be skinnable
      …be actively developed or at least actively maintained

    Thanks for your help. I need a sophisticated file manager and I'd like to avoid writing it from scratch.

    Read the article

  • Why can't I download the JDK from the Oracle web site directly, without AuthParam?

    - by hugemeow
    That is, downloading with the following command fails. Why can't it fetch the file?

        wget http://download.oracle.com/otn-pub/java/jdk/6u35-b10/jdk-6u35-linux-i586.bin

    The following command works, but the AuthParam value may stop working after a while. Why?

        wget http://download.oracle.com/otn-pub/java/jdk/6u35-b10/jdk-6u35-linux-i586.bin?AuthParam=1346955572_27e44512fe8ef5cb920c4c329e5f0fd8

    How is this AuthParam option implemented? Why can't I download without this parameter, and why can I only obtain it using a browser? Is a rewrite rule used on the Oracle server when dealing with the wget request? Why does the same command not work an hour later? Has the value of AuthParam expired, and how does the server check whether an AuthParam value is expired?

        wget http://download.oracle.com/otn-pub/java/jdk/6u35-b10/jdk-6u35-linux-i586.bin?AuthParam=1346955572_27e44512fe8ef5cb920c4c329e5f0fd8
        --2012-09-07 03:51:01--  http://download.oracle.com/otn-pub/java/jdk/6u35-b10/jdk-6u35-linux-i586.bin?AuthParam=1346955572_27e44512fe8ef5cb920c4c329e5f0fd8
        Resolving download.oracle.com... 23.67.251.50, 23.67.251.57
        Connecting to download.oracle.com|23.67.251.50|:80... connected.
        HTTP request sent, awaiting response... 403 Forbidden
        2012-09-07 03:51:01 ERROR 403: Forbidden.

    @KJ-SRS: is that the kind of CGI program that is used to judge whether the AuthParam is right? Is it possible to download the JDK package purely with the wget command, without having to get the AuthParam from a browser first?
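
    Oracle has not documented the scheme, so the following is only an educated guess: the AuthParam value looks like a standard signed, expiring download token (a Unix timestamp, an underscore, and a hex digest) that the license-acceptance page generates and the download servers verify, returning 403 once the timestamp is too old. Purely as an illustration of how such a check typically works (the secret, lifetime, and hash construction below are invented, not Oracle's):

        # Illustration of a generic "timestamp_signature" token check; none of the
        # constants here are Oracle's real values.
        import hashlib
        import hmac
        import time

        SECRET = b"server-side-secret"   # assumption: known only to the download servers
        LIFETIME = 1800                  # assumption: token valid for 30 minutes

        def auth_param_valid(path, auth_param):
            issued, signature = auth_param.split("_", 1)
            if time.time() - int(issued) > LIFETIME:
                return False             # expired token -> 403 Forbidden
            expected = hmac.new(SECRET, ("%s:%s" % (issued, path)).encode(),
                                hashlib.md5).hexdigest()
            return hmac.compare_digest(expected, signature)

        print(auth_param_valid("/otn-pub/java/jdk/6u35-b10/jdk-6u35-linux-i586.bin",
                               "1346955572_27e44512fe8ef5cb920c4c329e5f0fd8"))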

    Read the article

  • Apache/PHP serving file multiple times

    - by easement
    I have a system with a download.php page. The page takes an id, loads a file based on the DB record, and then serves it up. I've noticed a couple of instances where files are requested multiple times in short time spans (20 ms), times that are too quick for human input. There are plenty of instances where the downloader functions fine. However, in taking a closer look at the downloader's usage, I did see some interesting behavior.

    For instance, the IP address xxx.xxx.xxx.xxx (which is one in a range owned by xxxxxx.de in Germany) came to the site through Google. They browsed around and then came to the page http://site.com/xxxx/press+125.php There they issued a request for /download.php?id=/ZZ/n+aH55Y= (a PDF) at 9:04:23 AM. That alone is not a big deal. However, what is interesting is that the server seems to have been quite preoccupied with serving that request. In the logs the request first completes between 9:09:48 and 9:10:00. It looks like the user must have gotten tired of waiting during that time and requested the document two more times. Between 09:14:47 and 09:15:00 the same request appears again, except it is from 9:04:43 AM, 20 seconds later than the first request. Then it pops up a third time, with a request that started at 09:05:06 completing between 09:19:55 and 09:19:58!

    I'm suspicious of that document. In looking through the logs I see other instances where it takes the server a little while to handle that specific file. Check out this list of requests from zzz.zzz.zzz.zzz (different from above) for the file /download.php?id=/ZZ/n+aH55Y= (the same document as before):

        Request time    Complete time
        04:32:43        04:33:36
        04:32:50        04:33:36
        04:32:51        04:33:38
        04:33:05        04:33:38
        04:33:34        04:33:42
        04:33:05        04:33:42

    So something is definitely going on. Whether it has to do with this specific document tripping up the server, the download.php page's code, or whether we're just seeing the evidence of some server-level overload as it plays out in real time, I'm not yet sure. In fairness, there are other instances of people downloading /download.php?id=/ZZ/n+aH55Y= (the same PDF) without error. However, it is interesting that the multiple processes only seem to happen with this one file, and then only when it is accessed through the page http://site.com/press+125.php . It bears further investigation whether there's something amiss inside the code that causes the system to fire off multiple download requests that occupy the server. I don't know if this press+125.php is a rabbit hole, but it is a weird coincidence. Any ideas? I'm totally out of ideas. Apache maxed out? Things like that.

        // DOWNLOAD.php
        $file = new files();
        $file->comparison_filter("id", "=", $id); // sql to load
        if ($file->load()) {
            $file->serve();
        }

        // FILES
        function serve() {
            if ($this->is_loaded) {
                if (file_exists($this->get_value("filename"))) {
                    if ($this->get_value("content_type") != "") {
                        header("Content-Type: " . $this->get_value("content_type"));
                    }
                    header("Content-Length: " . filesize($this->get_value("filename")));
                    if ($this->get_value("flag_image") == 0 || $this->get_value("flag_image") == false) {
                        header("Cache-Control: private");
                        header("Content-Disposition: attachment; filename=" . urlencode($this->get_value("original_filename")));
                    }
                    set_time_limit(0);
                    @readfile($this->get_value("filename"));
                    exit;
                }
            }
        }

    Read the article

  • Why do Cisco IOS routers hang in the middle of large downloads?

    - by cjavapro
    After a few years in use, we have seen Cisco 871 and 851 routers hang if a single download is larger than 100 MB. It is intermittent: sometimes the problem goes away, and sometimes it happens on very small downloads (just a 10 KB web page). Just about all of the downloads eventually finish, but the bigger the download, the longer the hang. Is there a way to resolve this, short of router replacement (which is what we have been doing)?

    Read the article

  • Joomla 1.5 Media Manager sets incorrect file permissions when uploading

    - by Scott Mayfield
    Howdy all, I have a Joomla 1.5 installation running on Windows Server 2008, installed via the Web Platform Installer. When uploading images with the Media Manager (the native uploader, not the Flash bulk uploader), the files arrive on the server correctly but are given incorrect permissions. Specifically, the IIS_IUSRS group is not given access to the file.

    I might be incorrect about which group/user is SUPPOSED to get access to the files, but so far I've found that unless I give IIS_IUSRS access to the uploaded files, they won't appear on the site or in the Media Manager (they appear as broken images). Once I give IIS_IUSRS permission to the files, they work fine.

    So far, all the research I've done has led me to Linux-specific fixes that involve either changing the umask on the server or directly modifying the Joomla codebase to add an appropriate chmod command to the upload process, but I really don't want to modify Joomla directly. I have to believe there's a setting here somewhere that will do the job, either on the Joomla or the Windows side of the equation. Any thoughts? Scott

    Read the article

  • Downloaded CHM is blocked: is there a solution?

    - by David Rutten
    CHM files that are downloaded are often tagged as potentially malicious by Windows, which effectively blocks all the HTML pages inside them. There's an easy fix (just unblock the file after you download it), but I was wondering if there's a better way to provide unblocked CHM files. What if I were to download the CHM file (as a byte stream) from our server inside the application, then write all the data to a file on the disk? Would it still be blocked? Is there another/better way still? Edit: Yes, downloading the file using a System.Net.WebClient does solve the problem. But is there still a better way?
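
    Some background that is not stated in the question but is the usual explanation: browsers mark downloads with an NTFS Zone.Identifier alternate data stream, and that mark-of-the-web is what makes HTML Help block the pages; a file whose bytes you write yourself never receives the stream, which is presumably why the WebClient approach works. A minimal sketch of both ideas, shown in Python for illustration (the URL and paths are hypothetical):

        # Windows/NTFS assumed. 1) A file written from raw bytes carries no
        # Zone.Identifier stream, so it is not blocked. 2) An already-blocked file
        # can be unblocked by deleting that stream.
        import os
        import urllib.request

        url = "https://example.com/help.chm"      # hypothetical server URL
        dest = r"C:\Temp\help.chm"

        with urllib.request.urlopen(url) as resp, open(dest, "wb") as f:
            f.write(resp.read())                  # no mark-of-the-web is added

        blocked = r"C:\Temp\downloaded.chm"
        try:
            os.remove(blocked + ":Zone.Identifier")   # strip the NTFS alternate data stream
        except FileNotFoundError:
            pass                                      # the file was not blocked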

    Read the article

  • WebLogic / EJBGen: work manager configuration

    - by Guillaume
    I want to declare a work manager to perform some work in a managed thread. The WebLogic documentation says we can declare a global work manager using the admin console or declare it in an ejb-jar.xml config file. I want to use the second option, but my ejb-jar.xml is generated by the EJBGen tool, and there is no tag in EJBGen that would allow me to declare a work manager. So how should I create a local work manager declaration?

    Read the article

  • KDE on Windows won't download/install

    - by endolith
    No matter which mirror I use, I get this error. I've tried a few times over the last several weeks.

        Download failed
        ---------------------------
        The download of ftp://kde.mirrors.tds.net/pub/kde/stable/4.5.4/win32/libopensp-vc100-1.5.2-bin.tar.bz2 failed with error:
        archive downloaded from ftp://kde.mirrors.tds.net/pub/kde/stable/4.5.4/win32/libopensp-vc100-1.5.2-bin.tar.bz2 checksum error
        ---------------------------
        Retry   Ignore   Cancel

    Should I just ignore this and let it continue?

    Update: I ran as administrator, changed the install directory to C:\KDE, and ignored this error, and it seemed to install, but then it gave me a different error for the same file:

        Error
        ---------------------------
        Internal Error - File C:/Temp/KDE/libopensp-vc100-1.5.2-bin.tar.bz2 does not exist
        ---------------------------
        Cancel

    But now the programs seem to work! Should I just ignore this error? I can't even find a plain-English explanation of what libopensp is.

    Read the article

  • BITS HTTP download job fails to connect when owned by the Local SYSTEM account

    - by MikeT
    A service I have written that uses BITS (Background Intelligent Transfer Service) to auto-update itself is having a problem on some machines (Windows 7 so far). I have been investigating and have discovered that some of the jobs my service adds to the BITS queue fail immediately with the error code 0x80072efd (a connection with this server could not be established). There is no problem connecting to the server for the download: it works fine on the same machine using IE (or any other web browser), and other clients can connect and update from the same server. I tried using the BITSADMIN.exe tool to add the jobs manually and they worked OK. I then changed the account my service runs under to the Network Service account, so the BITS jobs would be created with a different owner, and the jobs completed successfully. My question is: I don't want to run my service under that account, as it won't have the required local permissions, so how do I change the permissions of the Local SYSTEM user to allow it to download from the HTTP source? I'm not aware of any way this could be restricted for that account, but it obviously is.

    Read the article
