Search Results

Search found 82718 results on 3309 pages for 'large file download'.


  • Is there a BitTorrent client that can download files "in sequence"?

    - by Bob M
    Is there a BitTorrent client that can download files "in sequence"? For example, download:

    clip 01.avi (high priority)
    clip 02.avi (normal priority)
    clip 03.avi (low priority)
    clip 04.avi (low priority)
    clip 05.avi (low priority)

    then when clip 01.avi is done, it will automatically make it:

    clip 01.avi (high priority)
    clip 02.avi (high priority)
    clip 03.avi (normal priority)
    clip 04.avi (low priority)
    clip 05.avi (low priority)

    This can be useful when downloading *.rar archives as well, since downloading clip.rar, then clip.r00, r01, r02, in sequence allows previewing the file by using RAR to recompose it (even though the file is incomplete, a preview is possible). Update: is making all files active still considered a bad use of BitTorrent?

    Read the article

  • Error splicing file: Input/output error with a USB SD/HC card reader [closed]

    - by PirateRussell
    I recently got a new Droid Bionic, and it has the SD/HC card. Today, I got a new USB card reader that reads the HC format. When I plug it into my Linux Mint 11 (Katya), GNOME 32-bit computer, I get this error every time I try to copy or move any file off of the card onto my desktop: Error splicing file: Input/output error. I don't have the problem on a Windows Vista computer. Any ideas? Thanks in advance.

    Read the article

  • VMWare Fusion Folder Sharing Not Working with Server

    - by Dave Long
    I have an Ubuntu Server running in VMware Fusion 3.1.2 on my MacBook Pro for Java development, and all my projects sit on my Mac in ~/Workspace/ColdFusion. I had ColdFusion/ shared with my VM through the VMware Tools, and it was working perfectly up until Friday, when the folder sharing just stopped. There have been no updates on either the Mac or Linux side besides an iTunes update. I tried uninstalling the VMware Tools and reinstalling them, but I get an error at the end of the install. It appears that when I reinstall the tools there are files left over from the old installation. Is there a way to force the uninstall script to completely uninstall and remove all files for VMware Tools? I know the shared folder used to mount at /mnt/hgfs/ColdFusion.

    Read the article

  • How does I/O work for large graph databases?

    - by tjb1982
    I should preface this by saying that I'm mostly a front-end web developer, trained as a musician, but over the past few years I've been getting more and more into computer science. One idea I have as a fun toy project, to learn about data structures and C programming, is to design and implement my own very simple database that would manage an adjacency list of posts. I don't want SQL (maybe I'll do my own query language? I'm just having fun). It should support ACID. It should be capable of storing, let's say, 1 TB.

    With that, I was trying to think of how a database even stores data, without regard to data structures necessarily. I'm working on Linux, and I've read that in that world "everything is a file," including hardware (like /dev/*), so I think that obviously has to apply to a database too, and it clearly does: whether it's MySQL or PostgreSQL or Neo4j, the database itself is a collection of files you can see in the filesystem.

    That said, there would come a point of scale where loading the entire database into primary memory just wouldn't work, so it doesn't make sense to design it with that mindset (I assume). However, reading from secondary memory would be much slower, and regardless, some portion of the database has to be in primary memory in order for you to be able to do anything with it. I read this post: Why use a database instead of just saving your data to disk? And I found it difficult to understand how other databases, like SQLite or Neo4j, read and write from secondary memory and are still very fast (faster, it would seem, than simply writing files to the filesystem, as the above question suggests). It seems the key is indexing. But even indexes need to be stored in secondary memory. They are inherently smaller than the database itself, but indexes in a very large database might be prohibitively large, too.

    So my question is: how is I/O generally done with large databases like the one I described above, at least 1 TB, storing a big adjacency list? If indexing is more or less the answer, how exactly does indexing work, and what data structures should be involved?
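
    The sketch below illustrates the core idea the question is circling, in Python rather than C: keep a compact in-memory index that maps each key to a byte offset in a data file, and seek() to that offset so a lookup reads only the record it needs. All names are invented for the example; real engines add B-trees or LSM-trees plus a write-ahead log to get ACID, which this toy deliberately ignores.

    # Toy append-only record store: an in-memory dict maps key -> byte offset,
    # so reads touch only the record they need instead of the whole file.
    import struct

    HEADER = struct.Struct(">II")  # key length, value length (big-endian uint32s)

    class TinyStore:
        def __init__(self, path):
            self.path = path
            self.index = {}           # key -> byte offset of the record header
            open(path, "ab").close()  # make sure the data file exists
            self._rebuild_index()

        def _rebuild_index(self):
            # One sequential scan at startup; afterwards reads are random-access.
            with open(self.path, "rb") as f:
                offset = 0
                while True:
                    header = f.read(HEADER.size)
                    if len(header) < HEADER.size:
                        break
                    klen, vlen = HEADER.unpack(header)
                    key = f.read(klen)
                    f.seek(vlen, 1)               # skip the value bytes
                    self.index[key] = offset
                    offset += HEADER.size + klen + vlen

        def put(self, key, value):
            with open(self.path, "ab") as f:
                offset = f.tell()
                f.write(HEADER.pack(len(key), len(value)))
                f.write(key)
                f.write(value)
            self.index[key] = offset              # last write wins

        def get(self, key):
            offset = self.index.get(key)
            if offset is None:
                return None
            with open(self.path, "rb") as f:
                f.seek(offset)                    # jump straight to the record
                klen, vlen = HEADER.unpack(f.read(HEADER.size))
                f.seek(klen, 1)
                return f.read(vlen)

    store = TinyStore("adjacency.db")
    store.put(b"post:42", b"43,57,99")            # a tiny adjacency list
    print(store.get(b"post:42"))

    A real engine keeps the index itself on disk in fixed-size pages (typically a B-tree), so a lookup touches only a logarithmic number of pages even when the index is far larger than RAM; that is the part this sketch replaces with a plain in-memory dict.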

    Read the article

  • Save Files Directly from Your Browser to the Cloud in Chrome and Iron

    - by Asian Angel
    Are you looking for a quicker, easier way to upload files you find while browsing to your favorite cloud services? Skip saving files to your hard-drive and transfer them directly from your browser to your accounts using the Cloud Save extension. You can see the cloud services currently supported in the screenshot above and more are being added all the time. So if your favorite is not listed yet just keep checking in at the extension’s homepage. Cloud Save [Google Chrome Extensions]

    Read the article

  • Convert filenames to their checksum before saving to prevent duplicates. Is it a smart thing to do?

    - by Xananax
    TL;DR: what the title says. I am developing some sort of image board in PHP. I was thinking of changing each image's filename to its checksum prior to saving it. This way, I might be able to prevent duplicates. I know this wouldn't work for two images that are the same but differ in size or level of compression or whatnot, but this method would allow for an early check. What bugs me is that I have never seen this method implemented anywhere, so I was wondering if there is a catch to it. Maybe it is just more efficient to keep the original filename and store the hash in the DB? Maybe the whole method is just not useful and my question is moot? What do you think? On a side note, I don't really get how hashes are calculated, so I was wondering, if my first question checks out, whether it would be possible to estimate the likelihood that two images are similar by comparing hashes (Levenshtein distance or something of the sort).
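
    A minimal sketch of the idea, shown in Python rather than PHP: hash the file's contents (not its name), use the digest as the stored filename, and treat an existing file with the same digest as a duplicate. The upload directory and file name are made up for the example.

    # Name stored files after a SHA-256 of their contents, so a byte-identical
    # upload maps to the same name and is detected as a duplicate.
    import hashlib
    import os
    import shutil

    UPLOAD_DIR = "uploads"

    def store_image(src_path):
        sha = hashlib.sha256()
        with open(src_path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                sha.update(chunk)                  # hash the file contents
        ext = os.path.splitext(src_path)[1].lower()
        dest = os.path.join(UPLOAD_DIR, sha.hexdigest() + ext)
        if os.path.exists(dest):
            return dest, True                      # duplicate: already stored
        os.makedirs(UPLOAD_DIR, exist_ok=True)
        shutil.copy2(src_path, dest)
        return dest, False

    path, was_duplicate = store_image("cat.jpg")
    print(path, "duplicate" if was_duplicate else "new")

    Note that a content hash only catches byte-identical files; detecting visually similar images needs a perceptual hash (aHash/pHash and the like), not a Levenshtein comparison of cryptographic digests, whose bits carry no similarity information.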

    Read the article

  • Is there a rational reason to wait for the release date to download, install or update to the next version of Ubuntu?

    - by badp
    Today, October 6th 2010, Ubuntu 10.10 is in Feature Definition Freeze, Debian Import Freeze, Feature Freeze, User Interface Freeze, Beta Freeze, Documentation String Freeze, Final Freeze, Kernel Freeze, and past the Translation Deadlines in both the non-language-pack and language-pack editions, as the release schedule details. Basically, except for last-minute bugfixes, the version of Ubuntu 10.10 you can download today is identical to the version of Ubuntu 10.10 you can download on the 10th when it gets released. If you downloaded and installed Ubuntu 10.10 today, you would:

    help find glaring issues for last-minute fixing
    help defray the network load on October 10th
    see Ubuntu 10.10 in action without waiting

    Those sound like pretty strong arguments... to me, and indeed I've been using Ubuntu 10.10 for roughly a month now. However, most people prefer to make the jump with everybody else on release day. What are the rational reasons for that?

    Read the article

  • How to install pip/easy_install on debian 6 for python3.2

    - by atomAltera
    I'm trying to install pip or setuptools for Python 3.2 on Debian 6.

    First case:

    apt-get install python3-pip   ...OK
    python3 easy_install.py webob
    Searching for webob
    Reading http://pypi.python.org/simple/webob/
    Reading http://webob.org/
    Reading http://pythonpaste.org/webob/
    Best match: WebOb 1.2.2
    Downloading http://pypi.python.org/packages/source/W/WebOb/WebOb-1.2.2.zip#md5=de0f371b46554709ce5b93c088a11cae
    Processing WebOb-1.2.2.zip
    Traceback (most recent call last):
      File "easy_install.py", line 5, in <module>
        main()
      File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 1931, in main
        with_ei_usage(lambda:
      File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 1912, in with_ei_usage
        return f()
      File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 1935, in <lambda>
        distclass=DistributionWithoutHelpCommands, **kw
      File "/usr/local/lib/python3.2/distutils/core.py", line 148, in setup
        dist.run_commands()
      File "/usr/local/lib/python3.2/distutils/dist.py", line 917, in run_commands
        self.run_command(cmd)
      File "/usr/local/lib/python3.2/distutils/dist.py", line 936, in run_command
        cmd_obj.run()
      File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 368, in run
        self.easy_install(spec, not self.no_deps)
      File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 608, in easy_install
        return self.install_item(spec, dist.location, tmpdir, deps)
      File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 638, in install_item
        dists = self.install_eggs(spec, download, tmpdir)
      File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 799, in install_eggs
        unpack_archive(dist_filename, tmpdir, self.unpack_progress)
      File "/usr/lib/python3/dist-packages/setuptools/archive_util.py", line 67, in unpack_archive
        driver(filename, extract_dir, progress_filter)
      File "/usr/lib/python3/dist-packages/setuptools/archive_util.py", line 154, in unpack_zipfile
        data = z.read(info.filename)
      File "/usr/local/lib/python3.2/zipfile.py", line 891, in read
        with self.open(name, "r", pwd) as fp:
      File "/usr/local/lib/python3.2/zipfile.py", line 980, in open
        close_fileobj=not self._filePassed)
      File "/usr/local/lib/python3.2/zipfile.py", line 489, in __init__
        self._decompressor = zlib.decompressobj(-15)
    AttributeError: 'NoneType' object has no attribute 'decompressobj'

    Second case: from http://pypi.python.org/pypi/distribute#installation-instructions

    python3 distribute_setup.py
    Downloading http://pypi.python.org/packages/source/d/distribute/distribute-0.6.28.tar.gz
    Extracting in /tmp/tmpv6iei2
    Traceback (most recent call last):
      File "distribute_setup.py", line 515, in <module>
        main(sys.argv[1:])
      File "distribute_setup.py", line 511, in main
        _install(tarball, _build_install_args(argv))
      File "distribute_setup.py", line 73, in _install
        tar = tarfile.open(tarball)
      File "/usr/local/lib/python3.2/tarfile.py", line 1746, in open
        raise ReadError("file could not be opened successfully")
    tarfile.ReadError: file could not be opened successfully

    Third case: from http://pypi.python.org/pypi/distribute#installation-instructions

    tar -xzvf distribute-0.6.28.tar.gz
    cd distribute-0.6.28
    python3 setup.py install
    Before install bootstrap.
    Scanning installed packages
    No setuptools distribution found
    running install
    running bdist_egg
    running egg_info
    writing distribute.egg-info/PKG-INFO
    writing top-level names to distribute.egg-info/top_level.txt
    writing dependency_links to distribute.egg-info/dependency_links.txt
    writing entry points to distribute.egg-info/entry_points.txt
    reading manifest file 'distribute.egg-info/SOURCES.txt'
    reading manifest template 'MANIFEST.in'
    writing manifest file 'distribute.egg-info/SOURCES.txt'
    installing library code to build/bdist.linux-x86_64/egg
    running install_lib
    running build_py
    copying distribute.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO
    copying distribute.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
    copying distribute.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
    copying distribute.egg-info/entry_points.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
    copying distribute.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
    creating 'dist/distribute-0.6.28-py3.2.egg' and adding 'build/bdist.linux-x86_64/egg' to it
    Traceback (most recent call last):
      File "setup.py", line 220, in <module>
        scripts = scripts,
      File "/usr/local/lib/python3.2/distutils/core.py", line 148, in setup
        dist.run_commands()
      File "/usr/local/lib/python3.2/distutils/dist.py", line 917, in run_commands
        self.run_command(cmd)
      File "/usr/local/lib/python3.2/distutils/dist.py", line 936, in run_command
        cmd_obj.run()
      File "build/src/setuptools/command/install.py", line 73, in run
        self.do_egg_install()
      File "build/src/setuptools/command/install.py", line 93, in do_egg_install
        self.run_command('bdist_egg')
      File "/usr/local/lib/python3.2/distutils/cmd.py", line 313, in run_command
        self.distribution.run_command(command)
      File "/usr/local/lib/python3.2/distutils/dist.py", line 936, in run_command
        cmd_obj.run()
      File "build/src/setuptools/command/bdist_egg.py", line 241, in run
        dry_run=self.dry_run, mode=self.gen_header())
      File "build/src/setuptools/command/bdist_egg.py", line 542, in make_zipfile
        z = zipfile.ZipFile(zip_filename, mode, compression=compression)
      File "/usr/local/lib/python3.2/zipfile.py", line 689, in __init__
        "Compression requires the (missing) zlib module")
    RuntimeError: Compression requires the (missing) zlib module

    zlib1g-dev is installed. Help me please.
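
    Both failures point at zlib: zipfile falls back to zlib = None when the module cannot be imported (hence the 'NoneType' error in the first case), and the third case says so outright. That is consistent with the Python 3.2 under /usr/local having been compiled before the zlib headers were present; installing zlib1g-dev afterwards does not change an already-built interpreter, so treat this as an assumption worth checking first:

    # Quick check: raises ImportError if this Python was built without zlib.
    import zlib
    print(zlib.ZLIB_VERSION)

    If the import fails, rebuilding Python 3.2 with zlib1g-dev already installed (or using the distribution's python3 packages instead) should let both easy_install and distribute_setup.py get past the archive-extraction step.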

    Read the article

  • Slow transfer with memory stick (819 kb/s)

    - by Nrew
    What can I do to optimize the file transfer rate of a Memory Stick Duo? The transfer was not like this when the stick was still new. Can reformatting give new life to a memory stick? It takes about 20 minutes just to transfer 1 GB of files from the computer to the memory stick. The computer is decent enough: a 2.50 GHz processor and 2 GB of RAM.

    Read the article

  • Titanium SDK gives a "file too short" error

    - by Dananjaya
    I'm using Ubuntu 11.04 and recently installed Appcelerator Titanium Studio version 1.7. When I load up a demo project to run, I get an error like this: Couldn't load file:/home/dananjaya/.titanium/runtime/linux/1.1.0/libkhost.so, error: /home/dananjaya/.titanium/runtime/linux/1.1.0/libwebkittitanium-1.0.so.2: file too short. Am I missing some dependencies here, or is it a bug in the application? Thanks in advance.

    Read the article

  • What virus renames all images to EXE?

    - by user29373
    I have a virus that renames all JPG files to EXE files and hides the original files in the same folder! I can see the hidden files with FAR Manager, but I cannot see them in Windows Explorer (even with the "show hidden files" option enabled). How can I restore the files to their original extensions? Is there a tool to scan the converted files and restore them? What is the virus called, and how can I remove it manually?

    Read the article

  • I'm using OpenAL, trying to load a .ogg file and having .dll troubles

    - by Brendan Webster
    I'm using OpenAL for my game's music, and it loads .wav files by default, but to load Ogg files I had to download and set up a few DLLs and lib files. I have fixed all the DLL errors except for this one: I need vorbis.dll, and it says it's missing vorbis_window. I just can't find a copy of the DLL anywhere online that includes vorbis_window. Does anyone have suggestions on how I should fix this problem with my DLL?

    Read the article

  • Why do large IT projects tend to fail or have big cost/schedule overruns?

    - by Pratik
    I always read about large-scale transformation or integration projects that are total or near-total disasters. Even if they somehow manage to succeed, the cost and schedule blowout is enormous. What is the real reason behind large projects being more prone to failure? Can agile be used in these sorts of projects, or is the traditional approach still the best? One example from Australia is the Queensland Payroll project, where they changed the test success criteria to deliver the project. See some more failed projects in this SO question. Have you got any personal experience to share?

    Read the article

  • Modify game using external file

    - by Veehmot
    In Flash, for example, I can place an XML file alongside the binary; then if I modify some variable, the game will change for everyone. How can I achieve something like that on Android? I know that for every change I make to the game, the player would need to download a new update. But the main goal is to modify the game's stats without needing to recompile the entire APK. I'm working with Haxe+OpenFL.
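
    A language-agnostic illustration of the usual pattern, sketched in Python here rather than Haxe/OpenFL, purely to show the idea: bundle default values with the game, try to fetch a small JSON file from a server at startup, and fall back to the bundled defaults when offline. The URL and stat names are invented for the example.

    # Load tunable game stats from a remote JSON file, falling back to
    # bundled defaults when the fetch fails (offline, bad file, timeout).
    import json
    import urllib.request

    DEFAULT_STATS = {"player_speed": 4.0, "enemy_hp": 100, "coin_value": 10}
    CONFIG_URL = "https://example.com/mygame/stats.json"   # hypothetical endpoint

    def load_stats():
        try:
            with urllib.request.urlopen(CONFIG_URL, timeout=3) as resp:
                remote = json.load(resp)
            # Only accept keys we already know about, so a bad file can't
            # inject unexpected settings.
            return {k: remote.get(k, v) for k, v in DEFAULT_STATS.items()}
        except Exception:
            return dict(DEFAULT_STATS)          # offline or malformed: use defaults

    stats = load_stats()
    print(stats["enemy_hp"])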

    Read the article

  • How to save a GtkTextBuffer's content to a file

    - by user1565593
    I tried to save a GtkTextBuffer's content to a file. My code seems to work, but I have a problem with the resulting file: some characters are unreadable in the output file. My code:

    def on_save_clicked(self, widget, data=None):
        start = self.textbuffer.get_start_iter()
        end = self.textbuffer.get_end_iter()
        this = self.textbuffer.get_text(start, end, False)
        format = self.textbuffer.register_serialize_tagset(this)
        data = self.textbuffer.serialize(self.textbuffer, format, start, end)
        outfile = open("/home/christophe/toto.txt", "w")
        outfile.write(data)
        outfile.close()

    What is wrong in my code? Thanks for your help.
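
    The unreadable characters are expected with this approach: serialize() returns GTK's internal rich-text format, a binary blob that embeds tag information, not plain text, so writing it into a .txt file produces garbage. A minimal sketch, assuming the goal is just to save the visible text (otherwise, open the file in binary mode and write the serialized data as-is):

    def on_save_clicked(self, widget, data=None):
        start = self.textbuffer.get_start_iter()
        end = self.textbuffer.get_end_iter()
        text = self.textbuffer.get_text(start, end, False)   # plain text only
        with open("/home/christophe/toto.txt", "w") as outfile:
            outfile.write(text)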

    Read the article

  • How to grep the output of youtube-dl?

    - by mohtaw
    The normal output of youtube-dl looks like this:

    [download] Downloading video #3 of 33
    [youtube] WbWb0u8bJrU: Downloading webpage
    [youtube] WbWb0u8bJrU: Downloading video info webpage
    [youtube] WbWb0u8bJrU: Extracting video information
    [download] Resuming download at byte 107919109
    [download] Destination: Lec 6.mp4
    [download] 86.2% of 137.18MiB at 48.80KiB/s ETA 06:37

    I need to show the first and the last lines to monitor the download, so I use the command:

    youtube-dl -cit -f 18 URL | grep -e ETA -e "Downloading video #"

    It's not working: only the first line shows up, while the last one doesn't, yet I can see the download is running because the file size grows.
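
    A likely explanation (an assumption about youtube-dl's behaviour, not verified against this exact version): the progress line is redrawn in place with carriage returns rather than newlines, and output is block-buffered when piped, so grep never receives a complete, newline-terminated line containing "ETA". youtube-dl's --newline option emits each progress update as its own line; the sketch below does the filtering from Python instead of a shell pipe, with the URL as a placeholder.

    # Run youtube-dl with --newline and print only the lines of interest.
    import subprocess

    URL = "https://www.youtube.com/playlist?list=EXAMPLE"    # placeholder URL

    proc = subprocess.Popen(
        ["youtube-dl", "-cit", "-f", "18", "--newline", URL],
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
    )
    for line in proc.stdout:
        line = line.strip()
        if "ETA" in line or "Downloading video #" in line:
            print(line, flush=True)
    proc.wait()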

    Read the article

  • thunar-archive-plugin not working

    - by Sergio
    After I experienced serious, not yet resolved performance issues with Nautilus, I decided to move to Xubuntu, so I installed its metapackage from Ubuntu and started using it. It turns out that the archive plugin for Thunar (which provides the "Extract here" option in the context menu when right-clicking a compressed archive) is not working, even after I apt-get purged and reinstalled it. It simply doesn't show its options in the context menu. What should I do to make it work?

    Read the article

  • How can I easily retab html files according to some sane default?

    - by James
    I have some html files that I'd like to retab that look like this: <header> <div class="wrapper"> <img src="images/logo.png"> <div class="userbox"> <div class="welcome">Welcome Andy!</div> <div class="blackbox"> <ul> <li><a href="#">Invite Friends</a></li> <li><a href="#">My Account</a></li> <li><a href="#">Cart</a></li> <li><a href="#">Sign Out</a></li> </ul> </div> </div> </div> </header> And I want them to look something like this: <header> <div class="wrapper"> <img src="images/logo.png"> <div class="userbox"> <div class="welcome">Welcome Andy!</div> <div class="blackbox"> <ul> <li><a href="#">Invite Friends</a></li> <li><a href="#">My Account</a></li> <li><a href="#">Cart</a></li> <li><a href="#">Sign Out</a></li> </ul> </div> </div> </div> </header> Or some sane default. What's the easiest way to go about doing this from the terminal in ubuntu for all of the html files in the current directory?
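
    One possible approach, sketched below under the assumption that a parser round-trip is acceptable: let an HTML parser rebuild the tree and pretty-print it. This uses BeautifulSoup (pip install beautifulsoup4, or the python3-bs4 package), whose prettify() indents one space per nesting level by default; it can also normalise the markup slightly, so run it on files that are backed up or under version control.

    # Re-indent every .html file in the current directory in place.
    import glob

    from bs4 import BeautifulSoup

    for path in glob.glob("*.html"):
        with open(path, encoding="utf-8") as f:
            soup = BeautifulSoup(f.read(), "html.parser")
        with open(path, "w", encoding="utf-8") as f:
            f.write(soup.prettify())    # rebuilt markup, one space per level

    HTML Tidy (the tidy command with its indent option) is another common way to do the same thing from the terminal.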

    Read the article

  • SWF file not playing after being published

    - by rsquare
    I'm trying to run the "connector" example that comes bundled with the SmartFoxServer 2X downloads. There it connects to the server and loads the correct configuration file. When I run it in Adobe Flash Professional 5, it runs correctly and connects to the server, but after being published as an SWF movie it doesn't work. It loads the configuration file but can't connect, and gives a connection failure error: ERROR 2048. This is the example I'm talking about.

    Read the article

  • Basic IIS7 permissions question

    - by Tom Gullen
    We have a website with a file: www.example.com/apis/httpapi.asp. This file is used by the site internally to make requests joining two systems on the website together (one is Classic ASP, the other ASP.NET). However, we do not want the public to be able to access the file. In IIS 7.5, is there a setting I can use to make this file internal-only? I've tried rewriting the URL for it, but the rewrite is also applied internally, so the scripts stop working because they fetch the rewritten URL. Thanks for any help!

    Read the article

  • You Couldn't Write it - Houston we have a problem!

    - by GrumpyOldDBA
    Note: identities changed to protect the innocent (sic). In a datacentre I have an iSCSI SAN which provides storage for a SQL cluster. It developed a fault and required replacement of a few parts, all hot-swappable. Although we had support/warranty, this did not include onsite service, so we arranged to have the parts delivered. The datacentre did not want to carry out the work, so we had to arrange for the manufacturer to send an engineer. Times were arranged and interested/concerned parties put on standby...(read more)

    Read the article

  • Is there a taskbar applet to show the status of a remote host?

    - by Mathew
    At the end of the day I would like to be able to copy files to my home PC, just in case I feel inspired to work on them in the evening. But I only want to do this if the PC is already on. (I can remotely wake-on-LAN the PC, but I don't want to always be doing that.) I would like a taskbar applet that shows the status of the PC and whether I can SSH into it or not. Obviously it would also be interesting to have an idea of how long it has been on while I am at work, as that gives a good indication of whether anyone is in or not. However, being able to unobtrusively copy files to the remote machine is the main objective. Perhaps another approach is to run rsync on cron, and if the remote host is not up then I guess it will fail. Is that correct? If anyone else has ideas on how to best sync a work and home PC then please do tell.
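
    A sketch of the "rsync from cron, but only when the home PC is up" idea mentioned above. The hostname, port and paths are placeholders; run from cron, it exits quietly when the machine is off instead of letting rsync sit through a connection timeout (and yes, rsync on its own would simply fail in that case, which is harmless but noisier).

    # Check that the home machine accepts SSH connections before running rsync.
    import socket
    import subprocess
    import sys

    HOST = "home-pc.example.org"       # hypothetical home machine
    PORT = 22

    def ssh_reachable(host, port, timeout=3):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if not ssh_reachable(HOST, PORT):
        sys.exit(0)                    # PC is off or asleep: try again next run

    subprocess.run(
        ["rsync", "-az", "--partial", "/home/mathew/work/", f"{HOST}:backup/work/"],
        check=False,
    )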

    Read the article

  • Unable to change user password in Ubuntu 12.10

    - by Laphanga
    The user password is not changing for some reason. In the terminal it says the password was updated successfully:

    $ sudo passwd
    [sudo] password for zaigham:
    Enter new UNIX password:
    Retype new UNIX password:
    passwd: password updated successfully

    But when I try to log in using the new password, it doesn't work. I have changed my password 2 or 3 times now, but it's still the same. Is this some kind of bug?

    Read the article

  • Can't access Ubuntu's shared folders from Windows 7

    - by endolith
    In Ubuntu Maverick, I've shared some folders using the Nautilus "Sharing Options" GUI. I can see them from Windows 7, but when I try to access them (from Windows) it asks for a username and password. No matter what I enter, it won't let me in. How do I configure this to share normally? Update: I've found that some of the shared folders let me in, but others don't. Of the ones that do, some of their subfolders do, others don't, etc. How can I investigate what's causing this?

    Read the article
