Search Results

Search found 68301 results on 2733 pages for 'file shredding'.

Page 202/2733 | < Previous Page | 198 199 200 201 202 203 204 205 206 207 208 209  | Next Page >

  • Sharing files between Mac Mini and Ubuntu

    - by Dan
    I want to get files off an Ubuntu box (10.04) and transfer them to a Mac Mini. I am connecting the two directly with a piece of network cable (it's not a crossover cable, but I am told the Mini can handle this). The Ubuntu box has Samba installed (I think), but no internet connection. When I create a shared folder in Ubuntu it doesn't show up in Mac OS, but the Ubuntu box does appear in the 'Shared' section of the Mac Finder. Help!
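     A likely snag with a direct cable is that neither machine gets an address, since there is no DHCP server on the link, so browsing by name fails. A minimal sketch, assuming the Ubuntu NIC is eth0 and the Mac's is en0 (both names and the addresses are assumptions):

        # Assign static addresses on both ends of the cable
        sudo ifconfig eth0 192.168.2.1 netmask 255.255.255.0    # on the Ubuntu box
        sudo ifconfig en0  192.168.2.2 netmask 255.255.255.0    # on the Mac
        # Make sure the Ubuntu account has a Samba password set
        sudo smbpasswd -a $USER
        # Then in Finder: Go > Connect to Server and enter
        #   smb://192.168.2.1/<share-name>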

    Read the article

  • Bacula & Multiple Tape Devices, and so on

    - by Tom O'Connor
    Bacula won't make use of 2 tape devices simultaneously. (Search for #-#-# for the TL;DR) A little background, perhaps. In the process of trying to get a decent working backup solution (backing up 20TB ain't cheap, or easy) at $dayjob, we bought a bunch of things to make it work. Firstly, there's a Spectra Logic T50e autochanger, 40 slots of LTO5 goodness, and that robot's got a pair of IBM HH5 Ultrium LTO5 drives, connected via FibreChannel Arbitrated Loop to our backup server. There's the backup server.. A Dell R715 with 2x 16 core AMD 62xx CPUs, and 32GB of RAM. Yummy. That server's got 2 Emulex FCe-12000E cards, and an Intel X520-SR dual port 10GE NIC. We were also sold Commvault Backup (non-NDMP). Here's where it gets really complicated. Spectra Logic and Commvault both sent respective engineers, who set up the library and the software. Commvault was running fine, in so far as the controller was working fine. The Dell server has Ubuntu 12.04 server, and runs the MediaAgent for CommVault, and mounts our BlueArc NAS as NFS to a few mountpoints, like /home, and some stuff in /mnt. When backing up from the NFS mountpoints, we were seeing ~= 290GB/hr throughput. That's CRAP, considering we've got 20-odd TB to get through, in a <48 hour backup window. The rated maximum on the BlueArc is 700MB/s (2460GB/hr), the rated maximum write speed on the tape devices is 140MB/s, per drive, so that's 492GB/hr (or double it, for the total throughput). So, the next step was to benchmark NFS performance with IOzone, and it turns out that we get epic write performance (across 20 threads), and it's like 1.5-2.5TB/hr write, but read performance is fecking hopeless. I couldn't ever get higher than 343GB/hr maximum. So let's assume that the 343GB/hr is a theoretical maximum for read performance on the NAS, then we should in theory be able to get that performance out of a) CommVault, and b) any other backup agent. Not the case. Commvault seems to only ever give me 200-250GB/hr throughput, and out of experimentation, I installed Bacula to see what the state of play there is. If, for example, Bacula gave consistently better performance and speeds than Commvault, then we'd be able to say "**$.$ Refunds Plz $.$**" #-#-# Alas, I found a different problem with Bacula. Commvault seems pretty happy to read from one part of the mountpoint with one thread, and stream that to a Tape device, whilst reading from some other directory with the other thread, and writing to the 2nd drive in the autochanger. I can't for the life of me get Bacula to mount and write to two tape drives simultaneously. Things I've tried: Setting Maximum Concurrent Jobs = 20 in the Director, File and Storage Daemons Setting Prefer Mounted Volumes = no in the Job Definition Setting multiple devices in the Autochanger resource. Documentation seems to be very single-drive centric, and we feel a little like we've strapped a rocket to a hamster, with this one. The majority of example Bacula configurations are for DDS4 drives, manual tape swapping, and FreeBSD or IRIX systems. I should probably add that I'm not too bothered if this isn't possible, but I'd be surprised. I basically want to use Bacula as proof to stick it to the software vendors that they're overpriced ;) I read somewhere that @KyleBrandt has done something similar with a modern Tape solution.. 
Configuration Files:

bacula-dir.conf

#
# Default Bacula Director Configuration file
Director {                            # define myself
  Name = backuphost-1-dir
  DIRport = 9101                      # where we listen for UA connections
  QueryFile = "/etc/bacula/scripts/query.sql"
  WorkingDirectory = "/var/lib/bacula"
  PidDirectory = "/var/run/bacula"
  Maximum Concurrent Jobs = 20
  Password = "yourekiddingright"      # Console password
  Messages = Daemon
  DirAddress = 0.0.0.0
  #DirAddress = 127.0.0.1
}

JobDefs {
  Name = "DefaultFileJob"
  Type = Backup
  Level = Incremental
  Client = backuphost-1-fd
  FileSet = "Full Set"
  Schedule = "WeeklyCycle"
  Storage = File
  Messages = Standard
  Pool = File
  Priority = 10
  Write Bootstrap = "/var/lib/bacula/%c.bsr"
}

JobDefs {
  Name = "DefaultTapeJob"
  Type = Backup
  Level = Incremental
  Client = backuphost-1-fd
  FileSet = "Full Set"
  Schedule = "WeeklyCycle"
  Storage = "SpectraLogic"
  Messages = Standard
  Pool = AllTapes
  Priority = 10
  Write Bootstrap = "/var/lib/bacula/%c.bsr"
  Prefer Mounted Volumes = no
}

#
# Define the main nightly save backup job
#   By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir
Job {
  Name = "BackupClient1"
  JobDefs = "DefaultFileJob"
}

Job {
  Name = "BackupThisVolume"
  JobDefs = "DefaultTapeJob"
  FileSet = "SpecialVolume"
}

#Job {
#  Name = "BackupClient2"
#  Client = backuphost-12-fd
#  JobDefs = "DefaultJob"
#}

# Backup the catalog database (after the nightly save)
Job {
  Name = "BackupCatalog"
  JobDefs = "DefaultFileJob"
  Level = Full
  FileSet = "Catalog"
  Schedule = "WeeklyCycleAfterBackup"
  # This creates an ASCII copy of the catalog
  # Arguments to make_catalog_backup.pl are:
  #   make_catalog_backup.pl <catalog-name>
  RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog"
  # This deletes the copy of the catalog
  RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup"
  Write Bootstrap = "/var/lib/bacula/%n.bsr"
  Priority = 11                       # run after main backup
}

#
# Standard Restore template, to be changed by Console program
#  Only one such job is needed for all Jobs/Clients/Storage ...
#
Job {
  Name = "RestoreFiles"
  Type = Restore
  Client = backuphost-1-fd
  FileSet = "Full Set"
  Storage = File
  Pool = Default
  Messages = Standard
  Where = /srv/bacula/restore
}

FileSet {
  Name = "SpecialVolume"
  Include {
    Options {
      signature = MD5
    }
    File = /mnt/SpecialVolume
  }
  Exclude {
    File = /var/lib/bacula
    File = /nonexistant/path/to/file/archive/dir
    File = /proc
    File = /tmp
    File = /.journal
    File = /.fsck
  }
}

# List of files to be backed up
FileSet {
  Name = "Full Set"
  Include {
    Options {
      signature = MD5
    }
    File = /usr/sbin
  }
  Exclude {
    File = /var/lib/bacula
    File = /nonexistant/path/to/file/archive/dir
    File = /proc
    File = /tmp
    File = /.journal
    File = /.fsck
  }
}

Schedule {
  Name = "WeeklyCycle"
  Run = Full 1st sun at 23:05
  Run = Differential 2nd-5th sun at 23:05
  Run = Incremental mon-sat at 23:05
}

# This schedule does the catalog.
# It starts after the WeeklyCycle
Schedule {
  Name = "WeeklyCycleAfterBackup"
  Run = Full sun-sat at 23:10
}

# This is the backup of the catalog
FileSet {
  Name = "Catalog"
  Include {
    Options {
      signature = MD5
    }
    File = "/var/lib/bacula/bacula.sql"
  }
}

# Client (File Services) to backup
Client {
  Name = backuphost-1-fd
  Address = localhost
  FDPort = 9102
  Catalog = MyCatalog
  Password = "surelyyourejoking"      # password for FileDaemon
  File Retention = 30 days            # 30 days
  Job Retention = 6 months            # six months
  AutoPrune = yes                     # Prune expired Jobs/Files
}

#
# Second Client (File Services) to backup
#  You should change Name, Address, and Password before using
#
#Client {
#  Name = backuphost-12-fd
#  Address = localhost2
#  FDPort = 9102
#  Catalog = MyCatalog
#  Password = "i'mnotjokinganddontcallmeshirley"   # password for FileDaemon 2
#  File Retention = 30 days            # 30 days
#  Job Retention = 6 months            # six months
#  AutoPrune = yes                     # Prune expired Jobs/Files
#}

# Definition of file storage device
Storage {
  Name = File
  # Do not use "localhost" here
  Address = localhost                 # N.B. Use a fully qualified name here
  SDPort = 9103
  Password = "lalalalala"
  Device = FileStorage
  Media Type = File
}

Storage {
  Name = "SpectraLogic"
  Address = localhost
  SDPort = 9103
  Password = "linkedinmakethebestpasswords"
  Device = Drive-1
  Device = Drive-2
  Media Type = LTO5
  Autochanger = yes
}

# Generic catalog service
Catalog {
  Name = MyCatalog
  # Uncomment the following line if you want the dbi driver
  # dbdriver = "dbi:sqlite3"; dbaddress = 127.0.0.1; dbport =
  dbname = "bacula"; DB Address = ""; dbuser = "bacula"; dbpassword = "bbmaster63"
}

# Reasonable message delivery -- send most everything to email address
# and to the console
Messages {
  Name = Standard
  mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r"
  operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r"
  mail = root@localhost = all, !skipped
  operator = root@localhost = mount
  console = all, !skipped, !saved
  #
  # WARNING! the following will create a file that you must cycle from
  # time to time as it will grow indefinitely. However, it will
  # also keep all your messages if they scroll off the console.
  #
  append = "/var/lib/bacula/log" = all, !skipped
  catalog = all
}

#
# Message delivery for daemon messages (no job).
Messages {
  Name = Daemon
  mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r"
  mail = root@localhost = all, !skipped
  console = all, !skipped, !saved
  append = "/var/lib/bacula/log" = all, !skipped
}

# Default pool definition
Pool {
  Name = Default
  Pool Type = Backup
  Recycle = yes                       # Bacula can automatically recycle Volumes
  AutoPrune = yes                     # Prune expired volumes
  Volume Retention = 365 days         # one year
}

# File Pool definition
Pool {
  Name = File
  Pool Type = Backup
  Recycle = yes                       # Bacula can automatically recycle Volumes
  AutoPrune = yes                     # Prune expired volumes
  Volume Retention = 365 days         # one year
  Maximum Volume Bytes = 50G          # Limit Volume size to something reasonable
  Maximum Volumes = 100               # Limit number of Volumes in Pool
}

Pool {
  Name = AllTapes
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes                     # Prune expired volumes
  Volume Retention = 31 days          # one month
}

# Scratch pool definition
Pool {
  Name = Scratch
  Pool Type = Backup
}

#
# Restricted console used by tray-monitor to get the status of the director
#
Console {
  Name = backuphost-1-mon
  Password = "LastFMalsostorePasswordsLikeThis"
  CommandACL = status, .status
}

bacula-sd.conf

#
# Default Bacula Storage Daemon Configuration file
#
Storage {                             # definition of myself
  Name = backuphost-1-sd
  SDPort = 9103                       # Director's port
  WorkingDirectory = "/var/lib/bacula"
  Pid Directory = "/var/run/bacula"
  Maximum Concurrent Jobs = 20
  SDAddress = 0.0.0.0
  # SDAddress = 127.0.0.1
}

#
# List Directors who are permitted to contact Storage daemon
#
Director {
  Name = backuphost-1-dir
  Password = "passwordslinplaintext"
}

#
# Restricted Director, used by tray-monitor to get the
# status of the storage daemon
#
Director {
  Name = backuphost-1-mon
  Password = "totalinsecurityabound"
  Monitor = yes
}

Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /srv/bacula/archive
  LabelMedia = yes;                   # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount = yes;               # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
}

Autochanger {
  Name = SpectraLogic
  Device = Drive-1
  Device = Drive-2
  Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
  Changer Device = /dev/sg4
}

Device {
  Name = Drive-1
  Drive Index = 0
  Archive Device = /dev/nst0
  Changer Device = /dev/sg4
  Media Type = LTO5
  AutoChanger = yes
  RemovableMedia = yes;
  AutomaticMount = yes;
  AlwaysOpen = yes;
  RandomAccess = no;
  LabelMedia = yes
}

Device {
  Name = Drive-2
  Drive Index = 1
  Archive Device = /dev/nst1
  Changer Device = /dev/sg4
  Media Type = LTO5
  AutoChanger = yes
  RemovableMedia = yes;
  AutomaticMount = yes;
  AlwaysOpen = yes;
  RandomAccess = no;
  LabelMedia = yes
}

#
# Send all messages to the Director,
# mount messages also are sent to the email address
#
Messages {
  Name = Standard
  director = backuphost-1-dir = all
}

bacula-fd.conf

#
# Default Bacula File Daemon Configuration file
#

#
# List Directors who are permitted to contact this File daemon
#
Director {
  Name = backuphost-1-dir
  Password = "hahahahahaha"
}

#
# Restricted Director, used by tray-monitor to get the
# status of the file daemon
#
Director {
  Name = backuphost-1-mon
  Password = "hohohohohho"
  Monitor = yes
}

#
# "Global" File daemon configuration specifications
#
FileDaemon {                          # this is me
  Name = backuphost-1-fd
  FDport = 9102                       # where we listen for the director
  WorkingDirectory = /var/lib/bacula
  Pid Directory = /var/run/bacula
  Maximum Concurrent Jobs = 20
  #FDAddress = 127.0.0.1
  FDAddress = 0.0.0.0
}

# Send all messages except skipped files back to Director
Messages {
  Name = Standard
  director = backuphost-1-dir = all, !skipped, !restored
}
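     One approach sometimes suggested for driving both drives at once is to expose each drive to the Director as its own Storage resource, so two concurrent jobs can each be pointed at a different drive explicitly. This is a sketch only, not verified against this library; the resource names are illustrative and it reuses only directives already present in the configuration above:

        # bacula-dir.conf (sketch): one Storage resource per drive
        Storage {
          Name = "SpectraLogic-Drive-1"
          Address = localhost
          SDPort = 9103
          Password = "linkedinmakethebestpasswords"
          Device = Drive-1
          Media Type = LTO5
          Autochanger = yes
        }
        Storage {
          Name = "SpectraLogic-Drive-2"
          Address = localhost
          SDPort = 9103
          Password = "linkedinmakethebestpasswords"
          Device = Drive-2
          Media Type = LTO5
          Autochanger = yes
        }
        # Two Jobs run at the same time would then reference different Storage
        # resources, together with the Maximum Concurrent Jobs = 20 already set
        # in the Director, SD and FD.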

    Read the article

  • What can I do to prevent my user folder from being tampered with by malicious software?

    - by Tom Wijsman
    Let's assume some things: Back-ups do run every X minutes, yet the things I save should be permanent. There's a firewall and virus scanner in place, yet there happens to be a zero day attack on me. I am using Windows. (Although feel free to append Linux / OS X parts to your answer) Here is the problem Any software can change anything inside my user folder. Tampering with the files could cost me my life, whether it's accessing / modifying or wiping them. So, what I want to ask is: Is there a permission-based way to disallow programs from accessing my files in any way by default? Extending on the previous question, can I ensure certain programs can only access certain folders? Are there other less obtrusive ways than using Comodo? Or can I make Comodo less obtrusive? For example, the solution should be proof against (DO NOT RUN): del /F /S /Q %USERPROFILE%
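     One permission-based sketch on Windows (the account name and program path below are placeholders, and this is only one possible arrangement) is to run untrusted software under a separate low-privilege account that is explicitly denied access to your profile, instead of under your own account:

        rem Create a restricted account and deny it access to your user profile
        net user Sandbox * /add
        icacls "%USERPROFILE%" /deny Sandbox:(OI)(CI)F
        rem Launch untrusted software under that account rather than your own
        runas /user:Sandbox "C:\Path\To\untrusted.exe"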

    Read the article

  • local cache for NAS or network folder

    - by HugoRune
    I am planning to build a network attached storage (NAS) server. Is there a way to cache frequently accessed files from the remote storage automatically on the local PC? (I am not looking for a way to sync whole folders like rsync, but rather something that automatically and transparently caches the last-accessed 50 GB of files.) Ideally I am searching for something that caches writes as well as reads, since only one PC will be accessing the server (and one day of lost changes, if the local cache is damaged, would be acceptable). I looked into Windows Offline Files, but as far as I could tell this requires manual interaction to disconnect the server or go into offline mode in order to use the cache. The server would probably be running Linux or FreeNAS; the PC runs Windows XP, but could be upgraded to 7 if required.

    Read the article

  • Is it secure to grant the Apache user ownership of directories & files for WordPress?

    - by Oudin
    I'm currently setting up WordPress on an Ubuntu Server 12 machine. Everything runs fine, but there is an issue when it comes to automatically updating and uploading media via WP, as the Apache user ("www-data") does not have permission to write to the directories; "user1" has full permission. All my directories have permissions of 0755 and files 644. My directory setup is as follows: /home/user1/public_html, and all WP files and directories are in "public_html". In order to work around the auto-updates and media uploads, I've granted the Apache user ownership of the following directories: sudo chown www-data:www-data wp-content -R sudo chown www-data:www-data wp-includes -R sudo chown www-data:www-data wp-admin -R I would like to know how secure this is, and if it is not secure, what the best solution would be that still lets me keep all files and directories owned by user1 while allowing WP to automatically update and upload media.
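     A commonly recommended middle ground is to keep user1 as the owner, make www-data the group, and only open group write access where WordPress actually needs it. A sketch, assuming uploads are the main thing Apache must write (the exact writable paths depend on the setup):

        sudo chown -R user1:www-data /home/user1/public_html
        sudo find /home/user1/public_html -type d -exec chmod 755 {} \;
        sudo find /home/user1/public_html -type f -exec chmod 644 {} \;
        # Give Apache write access only where uploads land:
        sudo chmod -R g+w /home/user1/public_html/wp-content/uploads

     Automatic core and plugin updates may still need either group write on additional wp-content directories or WordPress's FTP/SSH update mechanism instead.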

    Read the article

  • Restricting permissions to individual documents on SharePoint

    - by wahle509
    Here's what I'm trying to do: I would like to create a list of documents on a site in my company's SharePoint. Each document should have specific user permissions for viewing and editing it. For example: the list is for performance reports. John has his out there, called "John_PR_09.docx". Only he and his supervisor should have permission to view, edit, or do anything to it. Then another employee has hers out there with permissions for only her and her supervisor, and so on... I have tested this out with a document that I removed the groups and users from (since they inherit permissions from their parent) and gave only my user account permissions to. I then asked someone else to try to open it, and she could; she even wrote "TEST" on the document and saved it. What am I doing wrong? I thought I had stopped it from inheriting permissions from its parent and given only myself rights to edit it.

    Read the article

  • Enabling Samba Shares Across Subnets

    - by John
    I was curious how I could go about setting up Samba so that shares could be seen and used across different subnets. We have some Linux devices that are bound to Active Directory, and we would like to have them serve Samba shares to clients that reside in a different subnet from the servers. Is there any way to do this without needing to set up a WINS server or use legacy NetBIOS methods, since the majority of our clients are Windows 7, Windows Server 2003, Windows Server 2008, and Mac OS X (10.6 or newer)? EDIT: Right now, only clients in the same subnet as the Samba server can see the shares. Clients outside of the subnet (i.e. the client subnet) cannot see or connect to the share. The error returned is: "The specified network name is no longer available." It does not seem to matter whether I use the IP, FQDN, or NetBIOS name to try to connect to the share. We have a common Cisco router handling the inter-subnet routing. Everything else seems to work correctly with this network setup and the device can be pinged from multiple subnets. I also do not believe it to be a firewall type of issue, since the rules for this segment are rather lax.
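     Since NetBIOS broadcasts do not cross subnets, one direction to explore (a sketch using standard smb.conf parameters; whether it suits this AD environment is an assumption) is to drop NetBIOS entirely and serve SMB directly over TCP 445, relying on DNS names:

        [global]
           # Skip NetBIOS and serve SMB directly over TCP port 445
           disable netbios = yes
           smb ports = 445

     Clients would then connect by FQDN or IP (e.g. \\server.example.com\share), and TCP 445 must be permitted by the inter-subnet ACLs on the Cisco router.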

    Read the article

  • SAS: add comment to .LST output file

    - by Dan
    In SAS, how do I add a comment to my .LST output file? For example, adding a line saying "This is the output for tbl_TestMacro:" right before doing a PROC PRINT, so that my output file reads: This is the output for tbl_TestMacro: Obs field1 field2 1 6 8 2 6 9 3 7 0 4 7 1 Instead of just: Obs field1 field2 1 6 8 2 6 9 3 7 0 4 7 1 Thanks, Dan
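     A minimal sketch using the TITLE statement (the dataset name is taken from the question); the title text is printed at the top of the page where the PROC PRINT output appears in the .LST file:

        title 'This is the output for tbl_TestMacro:';
        proc print data=tbl_TestMacro;
        run;
        title;   /* clear the title so it does not repeat on later output */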

    Read the article

  • Unable to copy files previously extracted from archives created on a Mac, even after claiming ownership

    - by Maxim Zaslavsky
    I reinstalled Windows on my computer today, and backed up my music to a USB drive. Now, I'm trying to copy the files onto my fresh Windows partition, but I'm unable to copy files that I obtained within my previous Windows installation from zip archives created on Macs. When I try to copy those previously extracted files, I get an error saying that I need permission from S-1-5-21-...-1000 (a long, bizarre ID). The first thing I tried was to take ownership of the files by setting my new user account as the owner, but that resulted in errors saying that I need permission from myself! Some Googling suggested antivirus exclusions, so I excluded the relevant folders from Microsoft Security Essentials, but the issue persists. For what it's worth, it seems that some program (so far I've only installed Chrome, Microsoft Security Essentials, and the latest Windows updates) created an empty folder named 601c8c7f0e0c03f725 at the root of my external USB hard drive. What gives?
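     The S-1-5-21-...-1000 SID is the old installation's user account, which no longer exists. One sequence that often clears such leftover ACLs (a sketch; the drive letter and folder are placeholders) is to take ownership recursively and then reset the permissions:

        takeown /F "E:\Music" /R /D Y
        icacls "E:\Music" /reset /T
        icacls "E:\Music" /grant "%USERNAME%":(OI)(CI)F /T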

    Read the article

  • Bash: Read lines in a file scenario with sed or awk

    - by user105566
    I have this scenario: File content: 10.1.1.1 10.1.1.2 10.1.1.3 10.1.1.4 I want a sed or awk command so that every time I cat the file, the next line is returned. Like: First iteration: cat ip | some magic returns 10.1.1.1 Second iteration returns 10.1.1.2 Third iteration returns 10.1.1.3 Fourth iteration returns 10.1.1.4 and after n iterations it wraps back to line 1, so the fifth iteration returns 10.1.1.1 again. Can this be done using sed or awk?
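     sed and awk cannot remember state between separate invocations on their own, so a small wrapper that keeps a counter file can do the rotation. A sketch (the file names are assumptions):

        #!/bin/bash
        # Print the next line of "ip" on each run, wrapping back to line 1.
        file=ip
        state=.ip_index
        total=$(wc -l < "$file")
        n=$(cat "$state" 2>/dev/null || echo 0)
        n=$(( n % total + 1 ))
        sed -n "${n}p" "$file"
        echo "$n" > "$state"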

    Read the article

  • What is a good and safe way of sharing certificates?

    - by Kaustubh P
    I have a few certificates that are used for authentication to SSH into my servers on the Amazon cloud. I rotate those certificates weekly, manually. My question is: I need to share the certificates with some colleagues, a few on the LAN and a few in another part of the country. What is the best practice for sharing the certificates? My initial thoughts were Dropbox and email. We don't host dedicated email servers with encryption and all, and don't have a VPN. Thanks.
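     Rather than sending the keys in the clear, encrypting them per recipient with GnuPG before mailing is a common baseline. A sketch (the file name and address are placeholders, and it assumes the colleague's public key has already been imported):

        # Encrypt the key for one colleague, then send the .gpg file over any channel
        gpg --encrypt --recipient colleague@example.com aws-key.pem
        # Produces aws-key.pem.gpg; the recipient recovers it with:
        gpg --decrypt aws-key.pem.gpg > aws-key.pem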

    Read the article

  • Why is setuid ignored on directories?

    - by Blacklight Shining
    On Linux systems, you can successfully chmod u+s $some_directory, but instead of forcing the ownership of new subdirectories and files to be the owner of the containing directory (and setting subdirectories u+s as well) as you might expect, the system just ignores the setuid bit. Subdirectories and files continue to inherit the UIDs of their creating processes, and subdirectories are not setuid by default. Why is setuid ignored on directories, and how can I get the system to recognize it?
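     The group counterpart does work: setgid on a directory makes new entries inherit the directory's group, and default ACLs can additionally force permissions on newly created files. A sketch (the directory and group names are examples):

        chmod g+s /srv/shared                       # new files/dirs inherit the group of /srv/shared
        setfacl -d -m group:media:rwx /srv/shared   # default ACL applied to newly created entries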

    Read the article

  • Excluding specific file types from a security audit in Windows Server 2008

    - by Mozez
    Hi, I am looking for a way to exclude specific file types from being logged in the security audits. I have a folder being audited for deletion events and the majority of logged events are .tmp files (such as a temp Word file that is automatically deleted when the app is closed) which I do not care about. Would anyone know of a way to exclude these types of files from being logged? Thanks in advance for any comments.
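     Object-access auditing itself has no per-extension exclusion, but the noise can be filtered out after the fact when reviewing the log, for example with PowerShell (a sketch; event IDs 4660/4663 are the Server 2008 delete/access events):

        Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4660, 4663 } |
            Where-Object { $_.Message -notmatch '\.tmp' } |
            Format-List TimeCreated, Id, Message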

    Read the article

  • Unix Permissions issue with users belonging to the same group accessing a folder

    - by TK Kocheran
    I have a folder I'd really like to allow another user on this machine access to. I'm using mt-daapd to serve music to the network, so I'd like to enable the mt-daapd user to access my Music directory, /home/rfkrocktk/Music. The master user is rfkrocktk, obviously. I've tried to set all of my permissions properly on the directory, but the mt-daapd user can't access the files. I created a group called media-users and added both rfkrocktk and mt-daapd to it in order to give mt-daapd permission to simply read all of the files in that directory and its subdirectories. If I run id on each of my users, here's what's displayed: $ id rfkrocktk > uid=1000(rfkrocktk) gid=1000(rfkrocktk) groups=1000(rfkrocktk),4(adm),20(dialout),24(cdrom),29(audio),46(plugdev),104(lpadmin),115(admin),120(sambashare),124(vboxusers),1001(jupiter),2002(media-users) $ id mt-daapd > uid=123(mt-daapd) gid=65534(nogroup) groups=65534(nogroup),2002(media-users) It definitely seems that both users are a part of the media-users group, so what could be going wrong? If I run ls -l on the actual Music directory to see its permissions, here's the output: drwxr-Sr-- 201 rfkrocktk media-users 12288 2011-01-13 12:26 Music If I run ls -l on the Music directory to get its children, here's the output: drwxr-Sr-- 3 rfkrocktk media-users 4096 2010-12-20 15:31 2DBoy drwxr-Sr-- 3 rfkrocktk media-users 4096 2010-05-25 12:50 ABBA drwxr-Sr-- 3 rfkrocktk media-users 4096 2009-12-28 15:19 Access Denied drwxr-Sr-- 10 rfkrocktk media-users 4096 2009-12-28 15:19 AC-DC drwxr-Sr-- 3 rfkrocktk media-users 4096 2009-12-28 15:19 Aerosmith drwxr-Sr-- 3 rfkrocktk media-users 4096 2010-06-04 10:45 A Flock of Seagulls drwxr-Sr-- 4 rfkrocktk media-users 4096 2010-05-28 18:13 Alestorm drwxr-Sr-- 3 rfkrocktk media-users 4096 2010-06-22 23:29 Amon Amarth drwxr-Sr-- 5 rfkrocktk media-users 4096 2009-12-28 15:19 Anberlin ... From this, it would seem that I should be able to access the folders from mt-daapd, but I can't. Running sudo -i -u mt-daapd ls -l /home/rfkrocktk/Music displays nothing, indicating to me that for whatever reason, mt-daapd doesn't have access to read the folder. What am I doing wrong?
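     The listing itself points at a likely cause: the group permission is r-S, i.e. setgid is set but the group execute bit is missing, so members of media-users cannot traverse the directories; the parent /home/rfkrocktk probably also blocks the group. A possible fix (a sketch, assuming the home directory should only be traversable, not listable, by others):

        chmod o+x /home/rfkrocktk                                    # allow traversal of the home dir (no read)
        find /home/rfkrocktk/Music -type d -exec chmod g+x {} \;    # add the missing group execute bit
        find /home/rfkrocktk/Music -type f -exec chmod g+r {} \;    # ensure files stay group-readable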

    Read the article

  • Where are essential Windows files located?

    - by Dorothy
    I am using Vista, but I would like the answer for XP, Vista, and Windows 7. I am writing a program where I want to count the Important or Essential files of a Windows PC. It looks like the Essential files would be located somewhere in C:/Windows, and after some research I found that some Essential files are located in C:/Windows/winsxs. What and where are the Essential files for a Windows PC? Is there a folder or set of folders that contains the Essential files? Are all the files in C:/Windows/winsxs Essential? Essential definition: absolutely necessary; extremely important.

    Read the article

  • Tomcat deploy error

    - by David
    When deploying an application with the Tomcat manager I get the following error: FAIL - Failed to deploy application at context path /prademo The Tomcat log shows: INFO: HTMLManager: install: Installing context configuration at '/home//webapps/PRA/META-INF/context.xml' from '/home//webapps/PRA' java.io.FileNotFoundException: /home/dstefan/webapps/PRA/META-INF/context.xml (Permission denied) Permission to what? Both PRA and context.xml have -rwxrwxrwx. Thanks!
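     Since the file itself is world-readable, "Permission denied" usually means the user running Tomcat cannot traverse one of the parent directories. A sketch for checking and fixing this (which directories need the execute bit is an assumption to verify):

        # Show the permissions of every component of the path
        namei -l /home/dstefan/webapps/PRA/META-INF/context.xml
        # If /home/dstefan is mode 700/750, grant execute (traverse) to others:
        chmod o+x /home/dstefan /home/dstefan/webapps /home/dstefan/webapps/PRA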

    Read the article

  • Tutorial for Quick Look Generator for Mac

    - by vgm64
    I've checked out Apple's Quick Look Programming Guide: Introduction to Quick Look page in the Mac Dev Center, but as more of a science programmer than an Apple programmer, I find it a little over my head (though I could get through it in a weekend if I bash my head against it long enough). Does anyone know of a good basic Quick Look generator tutorial that is simple enough for someone with only very modest experience with Xcode? For those who are curious, I have a file type called .evt that has an XML header and then binary info after the header. I'm trying to write a generator to display the XML header. There's no application bundle that it belongs to. Thanks!

    Read the article

  • Transfer video off iPhone 3GS

    - by Zman771
    I recorded a bunch of video on my iPhone 3GS and want to copy it all to my computer without emailing each one individually to myself (in addition, some of the videos are too big to email). Is there an easy way to do this? (jailbreak options ok as well)

    Read the article

< Previous Page | 198 199 200 201 202 203 204 205 206 207 208 209  | Next Page >