Search Results

Search found 975 results on 39 pages for 'uploads'.


  • Download all images or create a zip file of all uploads contained in a gallery

    - by Arpit Vaishnav
    I am working on a photo-sharing site, and I want to offer the ability to download all the images available in a gallery. The gallery is set up as a relation through which I can get all the images via @gallery.uploads. Now what I want is to download all of these files, or, if possible, to create a zip file so the user can download a single file containing all the uploads in the gallery. Thanks.
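
    One way to do this in Rails is to build the archive with the rubyzip gem and stream it back with send_file. A minimal controller sketch, assuming a modern rubyzip and that each upload exposes a path on disk (upload.file_path below is a placeholder for whatever your attachment library actually provides):

        # app/controllers/galleries_controller.rb
        require 'zip'

        def download_all
          gallery = Gallery.find(params[:id])
          zip_path = Rails.root.join('tmp', "gallery_#{gallery.id}.zip").to_s
          Zip::File.open(zip_path, Zip::File::CREATE) do |zipfile|
            gallery.uploads.each do |upload|
              # add each upload under its original file name
              zipfile.add(File.basename(upload.file_path), upload.file_path)
            end
          end
          send_file zip_path, type: 'application/zip', disposition: 'attachment'
        end

    For large galleries, building the zip in a background job and emailing a link scales better than zipping inside the request.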

    Read the article

  • Google I/O 2010 - YouTube API uploads: Tips & best practices

    Google I/O 2010 - YouTube API uploads: Tools, tips, and best practices. Google APIs 201. Jeffrey Posnick, Gareth McSorley, Kuan Yong. Are you integrating YouTube upload functionality into your mobile, desktop, or web app? Learn about Android and iPhone upload best practices, resuming interrupted YouTube uploads, and the YouTube Direct embeddable iframe for soliciting uploads on your existing web pages. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers. Time: 55:27.

    Read the article

  • Django file uploads - Just can't work it out

    - by phoebebright
    OK, I give up. After 5 solid hours of trying to get a Django form to upload a file, I've checked out all the links on Stack Overflow and googled and googled. Why is it so hard? I just want it to work like the admin file upload. I get that I need code like:

        if submitForm.is_valid():
            handle_uploaded_file(request.FILES['attachment'])
            obj = submitForm.save()

    and I can see my file in request.FILES['attachment'] (yes, I have enctype set), but what am I supposed to do in handle_uploaded_file? The examples all use a fixed file name, but obviously I want to upload the file to the directory I defined in the model, and I can't see where to find that.

        def handle_uploaded_file(f):
            destination = open('fyi.xml', 'wb+')
            for chunk in f.chunks():
                destination.write(chunk)
            destination.close()

    Bet I'm going to feel really stupid when someone points out the obvious!
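
    For the record, if the form is a ModelForm over a model whose FileField sets upload_to, no handle_uploaded_file is needed at all: saving the form writes the file into the configured directory. A minimal sketch of that idea (model and form names are illustrative):

        # models.py
        from django.db import models

        class Submission(models.Model):
            attachment = models.FileField(upload_to='attachments/%Y/%m/')

        # views.py
        def submit(request):
            submitForm = SubmissionForm(request.POST, request.FILES)
            if submitForm.is_valid():
                obj = submitForm.save()     # Django itself chunks the upload to
                                            # MEDIA_ROOT/attachments/YYYY/MM/
                path = obj.attachment.path  # the final location, if you need it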

    Read the article

  • Move the uploads folder in Wordpress

    - by Victor Hurdugaci
    Currently, my WordPress upload folder is located in \wp-content\uploads. Initially there was no structure, so all files were put straight in there. After a while it was changed to upload files into \wp-content\uploads\YEAR\MONTH. Now that folder contains a mix of files (those starting with + are folders), like:

        +wp-content
        | +2010
        | | +02
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | | +01
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | +2009
        | | +12
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | | +11
        | | | File-1
        | | | File-2
        | | | ..
        | | | File-n
        | +..
        | | | ..
        | Unstructured-file-1
        | Unstructured-file-2
        | ...
        | Unstructured-file-n

    Based on the dates of the unstructured files, I would like to move them into the structured hierarchy (based on date, each file moves to a folder \wp-content\uploads\YEAR\MONTH). Now, my questions are: Where do I write and execute a script to do the move (I don't have full access to the server, just cPanel and the WordPress admin page)? What must be updated so that links in posts that reference the unstructured files point to the files' new locations? And, not fully related to the above: is it all right to move the whole uploads folder to another location, like \uploads? PS: Moving the files and updating the database manually is not an option :)
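
    Since only cPanel access is available, one workable approach is a one-off PHP script dropped into the WordPress root, opened once in a browser, then deleted. A rough sketch, assuming each file's modification time approximates its upload date (test on a copy first; paths and the table prefix are the WordPress defaults, so verify both):

        <?php
        // move loose files in wp-content/uploads into YEAR/MONTH subfolders
        $base = __DIR__ . '/wp-content/uploads';
        foreach (glob($base . '/*') as $path) {
            if (is_dir($path)) continue;              // skip the existing YEAR folders
            $year  = date('Y', filemtime($path));
            $month = date('m', filemtime($path));
            $dest  = "$base/$year/$month";
            if (!is_dir($dest)) mkdir($dest, 0755, true);
            rename($path, "$dest/" . basename($path));
        }
        // Post links then need a matching rewrite, one UPDATE per file, e.g.:
        // UPDATE wp_posts SET post_content = REPLACE(post_content,
        //   '/wp-content/uploads/File-1', '/wp-content/uploads/2010/02/File-1');

    The same REPLACE also needs to run against the guid column if anything relies on it, and against the _wp_attached_file entries in wp_postmeta.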

    Read the article

  • Uploading.to Uploads Files to Multiple File Hosts Simultaneously

    - by Jason Fitzpatrick
    If you're looking to quickly share a file across a variety of file hosting services, Uploading.to makes it a cinch to share up to 10 files across 14 hosts. The upload process is simple: visit Uploading.to, select your files, check the hosts you want to share the file across (by default all 14 are checked), add a description to the collection, and hit the Upload button. Uploading.to will upload your file to the various hosts; during the process you'll see which hosts are confirmed and which have failed. We had 2 failures among the 14 hosts, which still left the file mirrored across a sizable 12-host spread; not bad at all. When you're ready to share the file, hit the Copy Link button at the bottom of the screen and share it with your friends. They'll be directed to Uploading.to and will be able to select from any of the hosts the file was successfully mirrored across. Uploading.to is a free service and requires no registration. Uploading.to [via Addictive Tips]

    Read the article

  • Reverse Proxy that does not buffer uploads

    - by tsuraan
    From what I've seen of various reverse proxies (nginx, apache, varnish), they seem to buffer file uploads to disk before handing them off to the service they're proxying for. I need a reverse proxy that doesn't do this; I have a system that handles uploads itself, and buffering uploaded files to disk is not something that works for me. Does anybody know of a proxy server that can be configured to just pass traffic through to the proxied services without doing any buffering to disk?
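
    For what it's worth, nginx gained exactly this ability in a later release: the proxy_request_buffering directive (nginx 1.7.11 and newer) streams the request body to the upstream as it arrives instead of spooling it first. A minimal sketch (location and upstream name are illustrative):

        location /upload {
            proxy_request_buffering off;  # hand the body to the upstream as it arrives
            proxy_http_version      1.1;  # otherwise chunked request bodies are still buffered
            proxy_pass              http://upload_backend;
        }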

    Read the article

  • Limit on FileReference uploads?!

    - by Rudy
    Hello, I am currently uploading files in ActionScript 3 using the upload() method of the FileReference class. I built an uploader that can do simultaneous (parallel) uploads, with a variable setting the maximum number of uploads at a time. I noticed that in Internet Explorer I can upload 10 or more files simultaneously, but Firefox and Safari seem to cap the number of uploads at 2. That is, when I call the upload method on, say, 3 files, only 2 get events back (such as ProgressEvent.PROGRESS); only when one of the 2 uploads finishes does the 3rd one start. This behavior does not happen in Internet Explorer. I have tried with a large number of files, and with some big files, to make sure the behavior is consistent. I was wondering if anyone else has noticed this behavior, and if so, what the reason for it is? I appreciate your help. Thank you very much, Rudy

    Read the article

  • Owner of uploads directory is `www-data` but this prevents FTP access via PHP scripts

    - by letseatfood
    To give Apache write access, I needed to run chown www-data:www-data /var/www/mysite/uploads on my site's uploads folder. This lets me delete files from the folder via unlink() in a PHP script. Unfortunately, it prevents another PHP script, which uses FTP functions, from working. I think this is because the FTP user is mike, and now that the uploads directory is owned by www-data, mike cannot access it. I added mike to the group www-data, but this does not fix the issue. Can somebody advise me on how to allow the PHP FTP functions to work in addition to file deletion using PHP's unlink() function?
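
    A common fix is to keep www-data ownership but give the group real write access, and to remember that an FTP session only picks up new group membership at the next login. A sketch of the usual combination, using the paths from the question:

        chown -R www-data:www-data /var/www/mysite/uploads
        chmod -R g+w /var/www/mysite/uploads   # group members may write
        chmod g+s /var/www/mysite/uploads      # new files inherit the www-data group
        usermod -a -G www-data mike            # then have mike reconnect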

    Read the article

  • Make user uploads go to a different hard drive?

    - by Andrew Fashion
    I am using a pre-made social networking script where all user uploads go to site.com/public/user/. How can I make /public/user/ live on my secondary hard drive, so all user uploads land on the second hard drive and not the primary one? I have over 100GB of images, and I want them on the other HDD now. Thank you. I am running CentOS 5.5 64-bit with Apache and PHP, and I have two 250GB SATA HDDs. sudo parted /dev/sda print gives:

        Model: ATA WDC WD2500KS-00M (scsi)
        Disk /dev/sda: 250GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start   End     Size    Type      File system  Flags
         1      32.3kB  107MB   107MB   primary   ext3         boot
         2      107MB   8595MB  8488MB  primary   linux-swap
         3      8595MB  10.7GB  2147MB  primary   ext3
         4      10.7GB  250GB   239GB   extended
         5      10.7GB  250GB   239GB   logical   ext3

        Information: Don't forget to update /etc/fstab, if necessary.
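
    The usual approach is to give the second drive a filesystem and mount it directly at the upload path. A sketch, assuming the second disk appears as /dev/sdb with an ext3 filesystem on /dev/sdb1 and the site lives under /var/www/html (adjust both to your layout):

        mkdir -p /mnt/uploads
        mount /dev/sdb1 /mnt/uploads
        rsync -a /var/www/html/public/user/ /mnt/uploads/   # copy the existing 100GB first
        umount /mnt/uploads
        mount /dev/sdb1 /var/www/html/public/user           # mount the drive over the old path
        # make it permanent in /etc/fstab:
        # /dev/sdb1  /var/www/html/public/user  ext3  defaults  0 2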

    Read the article

  • WebDAV uploads fail for files with certain characters on Apache

    - by bnferguson
    We have WebDAV uploads working great on one of our boxes, but anytime there is a ; # or * (and maybe a few others) in the file name, the upload fails. That is expected, since they're restricted characters, but I'm curious whether there's a way to rewrite/rename those files on their way through. We don't care what the name ends up being; it just has to make it up to the server. I started looking at mod_rewrite solutions, but my rewrite fu is rather weak.

    Read the article

  • TFTP uploads failing

    - by dunxd
    I am running tftpd via xinetd on a CentOS 5.4 server. I am able to access files via tftp fine, so I know the service is running OK. However, whenever I try to upload a file I get a "0 Permission denied" message. I have already created the file in /tftpboot and set its permissions to 666. My tftpd config has verbose logging (-vvvv), but all I see in /var/log/messages is:

        START: tftp pid=20383 from=192.168.77.4

    I have seen some mention that SELinux can prevent tftpd uploads, but I'd expect to see something in the logs, and I have SELinux set to permissive mode. Any ideas?
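
    With the HPA tftpd that CentOS ships, uploads are refused unless the daemon is started with -c (permit creation of new files) and the served directory is writable by the daemon's effective user (typically nobody). A sketch of /etc/xinetd.d/tftp with those pieces in place, assuming the stock in.tftpd path:

        service tftp
        {
            socket_type  = dgram
            protocol     = udp
            wait         = yes
            user         = root
            server       = /usr/sbin/in.tftpd
            server_args  = -c -s /tftpboot -vvvv
            disable      = no
        }

    followed by chown nobody /tftpboot (or chmod 1777 /tftpboot for a quick test) and service xinetd restart.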

    Read the article

  • php.ini settings change not taking effect for large file uploads

    - by user51347
    My server was just reprovisioned, and my application, which uploads large (100MB+) files, now breaks after reinstallation. The symptom is quite consistent: smaller files (8MB in my tests) upload just fine, while larger files cut off quite close to the same point every time on a given computer. A file that fails at 26% will fail at just about that point every time, and one that takes 1:40 to fail will fail within 2 seconds of that every time. I have set my php.ini settings extravagantly:

        post_max_size = 512M
        upload_max_filesize = 512M
        max_input_time = 3600
        max_execution_time = 3600

    Is there possibly a setting at the Apache level which would override PHP?
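
    Several Apache-level settings impose their own request-body caps regardless of php.ini; a few suspects worth checking, with values sized to match the 512M target:

        # core Apache: 0 (the default) means unlimited
        LimitRequestBody 536870912
        # if PHP runs under mod_fcgid, whose default cap is very small:
        FcgidMaxRequestLen 536870912
        # if mod_security is loaded, it caps request bodies too:
        # SecRequestBodyLimit 536870912

    It is also worth running phpinfo() on the reprovisioned box to confirm the server is actually reading the php.ini you edited.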

    Read the article

  • Queuing asynchronous HTTP file uploads

    - by David Parunakian
    Is there a way to queue file uploads without resorting to Flash or Silverlight, just with cleverly used forms and JavaScript? Note that the uploads should be executed asynchronously. By "queuing" uploads I mean that if the user tries to upload multiple files, they should not be transferred simultaneously, but rather one at a time, over a single HTTP connection.
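
    A minimal sketch of the queuing idea in plain JavaScript, posting one file at a time with XMLHttpRequest and FormData (the /upload endpoint is hypothetical, and FormData needs a reasonably modern browser):

        var queue = [];

        function enqueue(file) {
          queue.push(file);
          if (queue.length === 1) uploadNext();   // kick off if the queue was idle
        }

        function uploadNext() {
          if (queue.length === 0) return;
          var form = new FormData();
          form.append('file', queue[0]);
          var xhr = new XMLHttpRequest();
          xhr.open('POST', '/upload');            // hypothetical endpoint
          xhr.onload = function () {
            queue.shift();
            uploadNext();                         // strictly one transfer at a time
          };
          xhr.send(form);
        }

    In older browsers the same effect was usually faked by submitting a form into a hidden iframe and starting the next submit from the iframe's load event.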

    Read the article

  • Zero downtime uploads / Rollback in IIS

    - by NickatUship
    I'm not sure if this is the right way to ask this question, but here's basically what I'd like to do:

    1.) Push a changeset to a site in IIS.
    2.) Not interrupt the users.
    3.) Be able to roll back effortlessly.

    There are a few things that I know have to happen:

    1.) Out-of-proc session - handled
    2.) Out-of-proc cache - handled

    So the questions that remain:

    1.) How do I keep from interrupting the users? If I just upload the files to bin, the app recycles and takes 10+ seconds to come back online.
    2.) How do I roll back effortlessly?

    I was thinking a possible solution would be to have two sites set up in IIS, one public and one private. Uploads go to private and get warmed up. After warmup, the sites are swapped. A rollback only entails swapping to private without an upload. This seems sound in theory, but I'm not sure of the mechanics. Any ideas?
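
    The two-site swap can be scripted with appcmd by repointing the bindings once the private site is warm; a sketch (site names and host headers are illustrative):

        rem promote the warmed-up private site, demote the old public one
        %windir%\system32\inetsrv\appcmd set site "Site-B" /bindings:http/*:80:www.example.com
        %windir%\system32\inetsrv\appcmd set site "Site-A" /bindings:http/*:8081:staging.example.com

    Rollback is then the same two commands with the site names reversed.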

    Read the article

  • Uploads fail with shorewall enabled

    - by JamesArmes
    I have an Ubuntu 8.04 server with shorewall 4.0.6 installed. When I try to upload files using FTP, SCP, or cURL, the upload stalls almost immediately and eventually times out. If I turn off shorewall, the uploads work fine. I don't have any rules that specifically allow FTP, and I'm not too concerned with it, but I do need to be able to upload via 22 (SCP) and 80 & 443 (cURL). This is what my rules look like:

        COMMENT Allow Server to respond to any web (80) and SSL (443) requests
        ACCEPT net $FW tcp 80
        ACCEPT $FW net tcp 80
        ACCEPT net $FW tcp 443
        ACCEPT $FW net tcp 443
        COMMENT Allow Server to respond to SNMPD (161) requests
        ACCEPT net $FW udp 161
        COMMENT Allow Server to respond to MySQL (3306) requests (for MySQL Graphing)
        ACCEPT net $FW tcp 3306
        COMMENT Allow Server to respond to any SSH connection attempts, and to SSH out.
        SSH/ACCEPT net $FW
        SSH/ACCEPT $FW net
        COMMENT Allow Server to make DNS Requests out.
        DNS/ACCEPT $FW net
        COMMENT Default "close" anything else.
        Ping/REJECT net $FW
        ACCEPT $FW net icmp
        #LAST LINE -- ADD YOUR ENTRIES BEFORE THIS ONE -- DO NOT REMOVE

    I expected the top four ACCEPT lines to allow inbound and outbound traffic over 80 and 443, and I expected the two SSH/ACCEPT lines to allow inbound and outbound traffic over 22, including SCP. Any help is greatly appreciated. /etc/shorewall/policy contains the following (all lines above the final REJECT are commented out):

        #
        # Allow all connection requests from the firewall to the internet
        # $FW net ACCEPT
        #
        # Policies for traffic originating from the Internet zone (net)
        # Drop (ignore) all connection requests from the Internet to the firewall
        # net all DROP info
        # THE FOLLOWING POLICY MUST BE LAST
        # Reject all other connection requests
        all all REJECT info
        #LAST LINE -- ADD YOUR ENTRIES ABOVE THIS LINE -- DO NOT REMOVE
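
    One classic cause of uploads (but not downloads) stalling almost immediately behind a firewall is broken path-MTU discovery: the sender never sees the ICMP "fragmentation needed" (type 3) replies, so large outbound packets silently vanish while small ones get through. Since the policy here rejects unsolicited net-to-firewall traffic, a rule like this in /etc/shorewall/rules may be worth testing (a guess to try, not a confirmed diagnosis):

        COMMENT Allow ICMP destination-unreachable so PMTU discovery works
        ACCEPT net $FW icmp 3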

    Read the article

  • NSURLSession and Amazon S3 uploads

    - by George Green
    I have an app which currently uploads images to Amazon S3. I have been trying to switch it from NSURLConnection to NSURLSession so that the uploads can continue while the app is in the background. I seem to be hitting a bit of an issue: the NSURLRequest is created and passed to the NSURLSession, but Amazon sends back a 403 Forbidden response, while if I pass the same request to an NSURLConnection it uploads the file perfectly. Here is the code that creates the request:

        NSString *requestURLString = [NSString stringWithFormat:@"http://%@.%@/%@/%@", BUCKET_NAME, AWS_HOST, DIRECTORY_NAME, filename];
        NSURL *requestURL = [NSURL URLWithString:requestURLString];
        NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:requestURL
                                                                cachePolicy:NSURLRequestReloadIgnoringLocalAndRemoteCacheData
                                                            timeoutInterval:60.0];

        // Configure request
        [request setHTTPMethod:@"PUT"];
        [request setValue:[NSString stringWithFormat:@"%@.%@", BUCKET_NAME, AWS_HOST] forHTTPHeaderField:@"Host"];
        [request setValue:[self formattedDateString] forHTTPHeaderField:@"Date"];
        [request setValue:@"public-read" forHTTPHeaderField:@"x-amz-acl"];
        [request setHTTPBody:imageData];

    And then this signs the request (I think this came from another SO answer):

        NSString *contentMd5 = [request valueForHTTPHeaderField:@"Content-MD5"];
        NSString *contentType = [request valueForHTTPHeaderField:@"Content-Type"];
        NSString *timestamp = [request valueForHTTPHeaderField:@"Date"];
        if (nil == contentMd5) contentMd5 = @"";
        if (nil == contentType) contentType = @"";

        NSMutableString *canonicalizedAmzHeaders = [NSMutableString string];
        NSArray *sortedHeaders = [[[request allHTTPHeaderFields] allKeys] sortedArrayUsingSelector:@selector(caseInsensitiveCompare:)];
        for (id key in sortedHeaders) {
            NSString *keyName = [(NSString *)key lowercaseString];
            if ([keyName hasPrefix:@"x-amz-"]) {
                [canonicalizedAmzHeaders appendFormat:@"%@:%@\n", keyName, [request valueForHTTPHeaderField:(NSString *)key]];
            }
        }

        NSString *bucket = @"";
        NSString *path = request.URL.path;
        NSString *query = request.URL.query;
        NSString *host = [request valueForHTTPHeaderField:@"Host"];
        if (![host isEqualToString:@"s3.amazonaws.com"]) {
            bucket = [host substringToIndex:[host rangeOfString:@".s3.amazonaws.com"].location];
        }

        NSString *canonicalizedResource;
        if (nil == path || path.length < 1) {
            if (nil == bucket || bucket.length < 1) {
                canonicalizedResource = @"/";
            } else {
                canonicalizedResource = [NSString stringWithFormat:@"/%@/", bucket];
            }
        } else {
            canonicalizedResource = [NSString stringWithFormat:@"/%@%@", bucket, path];
        }
        if (query != nil && [query length] > 0) {
            canonicalizedResource = [canonicalizedResource stringByAppendingFormat:@"?%@", query];
        }

        NSString *stringToSign = [NSString stringWithFormat:@"%@\n%@\n%@\n%@\n%@%@", [request HTTPMethod], contentMd5, contentType, timestamp, canonicalizedAmzHeaders, canonicalizedResource];
        NSString *signature = [self signatureForString:stringToSign];
        [request setValue:[NSString stringWithFormat:@"AWS %@:%@", self.S3AccessKey, signature] forHTTPHeaderField:@"Authorization"];

    Then if I use this line of code:

        [NSURLConnection connectionWithRequest:request delegate:self];

    it works and uploads the file, but if I use:

        NSURLSessionUploadTask *task = [self.session uploadTaskWithRequest:request fromFile:[NSURL fileURLWithPath:filePath]];
        [task resume];

    I get the forbidden error. Has anyone tried uploading to S3 with this and hit similar issues? I wonder if it has to do with the way the session pauses and resumes uploads, or whether it is doing something funny to the request. One possible solution would be to upload the file to an interim server that I control and have that forward it to S3 when it is complete, but this is clearly not an ideal solution! Any help is much appreciated! Thanks!

    Read the article

  • vsftpd not allowing uploads. 550 response

    - by Josh
    I've set vsftpd up on a CentOS box. I keep trying to upload files, but I keep getting "550 Failed to change directory" and "550 Could not get file size." Here's my vsftpd.conf:

        # The default compiled in settings are fairly paranoid. This sample file
        # loosens things up a bit, to make the ftp daemon more usable.
        # Please see vsftpd.conf.5 for all compiled in defaults.
        #
        # READ THIS: This example file is NOT an exhaustive list of vsftpd options.
        # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's
        # capabilities.
        #
        # Allow anonymous FTP? (Beware - allowed by default if you comment this out).
        anonymous_enable=YES
        #
        # Uncomment this to allow local users to log in.
        local_enable=YES
        #
        # Uncomment this to enable any form of FTP write command.
        write_enable=YES
        #
        # Default umask for local users is 077. You may wish to change this to 022,
        # if your users expect that (022 is used by most other ftpd's)
        local_umask=022
        #
        # Uncomment this to allow the anonymous FTP user to upload files. This only
        # has an effect if the above global write enable is activated. Also, you will
        # obviously need to create a directory writable by the FTP user.
        anon_upload_enable=YES
        #
        # Uncomment this if you want the anonymous FTP user to be able to create
        # new directories.
        anon_mkdir_write_enable=YES
        anon_other_write_enable=YES
        #
        # Activate directory messages - messages given to remote users when they
        # go into a certain directory.
        dirmessage_enable=YES
        #
        # The target log file can be vsftpd_log_file or xferlog_file.
        # This depends on setting xferlog_std_format parameter
        xferlog_enable=YES
        #
        # Make sure PORT transfer connections originate from port 20 (ftp-data).
        connect_from_port_20=YES
        #
        # If you want, you can arrange for uploaded anonymous files to be owned by
        # a different user. Note! Using "root" for uploaded files is not
        # recommended!
        #chown_uploads=YES
        #chown_username=whoever
        #
        # The name of log file when xferlog_enable=YES and xferlog_std_format=YES
        # WARNING - changing this filename affects /etc/logrotate.d/vsftpd.log
        #xferlog_file=/var/log/xferlog
        #
        # Switches between logging into vsftpd_log_file and xferlog_file files.
        # NO writes to vsftpd_log_file, YES to xferlog_file
        xferlog_std_format=NO
        #
        # You may change the default value for timing out an idle session.
        #idle_session_timeout=600
        #
        # You may change the default value for timing out a data connection.
        #data_connection_timeout=120
        #
        # It is recommended that you define on your system a unique user which the
        # ftp server can use as a totally isolated and unprivileged user.
        #nopriv_user=ftpsecure
        #
        # Enable this and the server will recognise asynchronous ABOR requests. Not
        # recommended for security (the code is non-trivial). Not enabling it,
        # however, may confuse older FTP clients.
        #async_abor_enable=YES
        #
        # By default the server will pretend to allow ASCII mode but in fact ignore
        # the request. Turn on the below options to have the server actually do ASCII
        # mangling on files when in ASCII mode.
        # Beware that on some FTP servers, ASCII support allows a denial of service
        # attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd
        # predicted this attack and has always been safe, reporting the size of the
        # raw file.
        # ASCII mangling is a horrible feature of the protocol.
        #ascii_upload_enable=YES
        #ascii_download_enable=YES
        #
        # You may fully customise the login banner string:
        #ftpd_banner=Welcome to blah FTP service.
        #
        # You may specify a file of disallowed anonymous e-mail addresses. Apparently
        # useful for combatting certain DoS attacks.
        #deny_email_enable=YES
        # (default follows)
        #banned_email_file=/etc/vsftpd/banned_emails
        #
        # You may specify an explicit list of local users to chroot() to their home
        # directory. If chroot_local_user is YES, then this list becomes a list of
        # users to NOT chroot().
        #chroot_list_enable=YES
        # (default follows)
        #chroot_list_file=/etc/vsftpd/chroot_list
        #
        # You may activate the "-R" option to the builtin ls. This is disabled by
        # default to avoid remote users being able to cause excessive I/O on large
        # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume
        # the presence of the "-R" option, so there is a strong case for enabling it.
        #ls_recurse_enable=YES
        #
        # When "listen" directive is enabled, vsftpd runs in standalone mode and
        # listens on IPv4 sockets. This directive cannot be used in conjunction
        # with the listen_ipv6 directive.
        listen=YES
        # This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6
        # sockets, you must run two copies of vsftpd with two configuration files.
        # Make sure, that one of the listen options is commented !!
        #listen_ipv6=YES
        pam_service_name=vsftpd
        userlist_enable=YES
        tcp_wrappers=YES
        log_ftp_protocol=YES
        banner_file=/etc/vsftpd/issue
        local_root=/var/www
        guest_enable=YES
        guest_username=ftpusr
        ftp_username=nobody

    Read the article

  • Windows and Linux applications to show cumulative uploads/downloads for each app, at a glance

    - by jontyc
    I've read quite a few other threads on SU, but they have focused on instantaneous or average bandwidth (B/sec) rather than cumulative download/upload totals for a period, or else they don't drill down to the application level. Resource Monitor in Windows 7 only shows bandwidth. I've just been trying NetLimiter, and whereas it can show total uploaded/downloaded, it's a case of having one stats window open per application, as opposed to a table showing all applications at once. I'm looking for applications for both Windows and Linux (Ubuntu), but they don't need to be the same.

    Read the article

  • Using Amazon S3 for multiple remote data site uploads, securely

    - by Aitch
    I've been playing about with Amazon S3 a little for the first time, and I like what I see for various reasons relating to my potential use case. We have multiple (online) remote server boxes harvesting sensor data that is regularly uploaded (rsync'ed) every hour or so to a VPS server. The number of remote server boxes is growing regularly and is forecast to keep growing (hundreds). The servers are geographically dispersed. They are also automatically built, and therefore generic, with standard tools and nothing bespoke per location. The data is many hundreds of files per day. I want to avoid a situation where I need to provision more VPS storage, or additional servers, every time we hit the VPS capacity limit after every N server deployments, whatever N might be. The remote servers can never be considered fully secure, since we don't know what might happen to them when we are not looking. Our current solution is a bit naive: it simply restricts inbound rsync, over ssh only, to directories named by known MAC address, with a known public key. There are plenty of holes to pick in this, I know. Let's say I write or use a script like s3cmd/s3sync to push up the files:

    - Would I need to manage hundreds of access keys and have each server customized to include its own (doable, but key management becomes nightmarish)?
    - Could I restrict inbound connections somehow (e.g. by MAC address), or just allow write-only access to any client running the script? (I could deal with a flood of data if someone got into a system.)
    - Having a bucket per remote machine does not seem feasible due to bucket limits?
    - I don't think I want to use a single common key: if one machine were breached, a malicious hacker could get hold of the filestore key and start deleting for all clients, correct?

    I hope my inexperience has not blinded me to some other solution that might be suggested! I've read lots of examples of people using S3 for backup, but can't really find anything about this sort of data collection, unless my Google terminology is wrong... I've written more than I should here; perhaps it can be summarised thus: in a perfect world, I just want one of our techs to install a new remote server at a location and have it automagically start sending files home with little or no intervention, while minimising risk. Pipedream or feasible? TIA, Aitch
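
    On the key-management point: one pattern that avoids both a shared key and hand-edited per-server policies is an IAM user (and key pair) per box, all governed by a single policy that uses the ${aws:username} variable to grant write-only access under each box's own prefix. A sketch (bucket name is illustrative; policy variables require the 2012-10-17 policy version):

        {
          "Version": "2012-10-17",
          "Statement": [{
            "Sid":      "WriteOnlyToOwnPrefix",
            "Effect":   "Allow",
            "Action":   ["s3:PutObject"],
            "Resource": "arn:aws:s3:::sensor-data/${aws:username}/*"
          }]
        }

    A compromised box can then only add objects under its own prefix; it cannot read or delete other clients' data, as long as s3:GetObject and s3:DeleteObject are never granted.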

    Read the article

  • WordPress directory permissions to allow uploads, plugin folders, etc.

    - by user1015958
    I have a pre-made WordPress site which was developed on my local machine, and I uploaded it to a VPS running Debian 6, using nginx, MySQL, and PHP, following this guide:

    1) Create an unprivileged user, say 'karl' or whatever, and make them belong to the www-data group, so that if I log in as karl and create a web root in /home/karl/www/, all the files will be owned by karl:www-data.
    2) Set up nginx to run as the user www-data in nginx.conf.
    3) Set up PHP-FPM to run as www-data.
    4) Place your files in /home/karl/www/[domain name]/public_html/, uploading as 'karl' so you don't have to chown everything again.

    When I type ls -l inside public_html/, it shows that all the files inside are owned by karl:karl, but the public_html directory itself is owned by karl:www-data. I chmod 0755 the folder wp-content, but I still get the error:

        ERROR: Path ../wp-content/connection_images does not seem to be writeable.

    I know I shouldn't set it to 777 for security reasons, so what permissions should I set instead? And what should I set so that my users can upload files, write posts, and edit articles? Sorry for my English, by the way.
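
    A sketch of the usual arrangement: karl keeps ownership, the www-data group gets write access, and the setgid bit makes new files inherit the group so PHP-FPM can keep writing (the example.com directory name is a placeholder):

        chown -R karl:www-data /home/karl/www/example.com/public_html/wp-content
        find /home/karl/www/example.com/public_html/wp-content -type d -exec chmod 2775 {} \;
        find /home/karl/www/example.com/public_html/wp-content -type f -exec chmod 664 {} \;

    That covers uploads and plugin folders; posts and articles live in MySQL, so they need no filesystem permissions at all.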

    Read the article
