Search Results

Search found 20029 results on 802 pages for 'directory permissions'.

Page 144/802

  • apt-mirror does not mirror the i18n directory

    - by Fred
    I need to set up a local Ubuntu mirror so the whole network doesn't need to hit remote servers in order to update and install new packages. Following a brief tutorial found here, I managed to get a server up and running that correctly mirrors packages from the main and restricted categories. However, when I call apt-get update on a client, I get a couple of errors such as:

      Ign http://192.168.1.18 karmic/main Translation-fr
      Ign http://192.168.1.18 karmic/restricted Translation-fr

    Checking back on the server, I see that apt-mirror only took the binary-amd64 directory of the mirror, and didn't take i18n, which would provide Translation-fr. The manpage for apt-mirror doesn't say anything about i18n, and Google is of no help either. How do I properly mirror i18n? My current mirror.list file is as follows:

      ############# config ##################
      #
      # set base_path /var/spool/apt-mirror
      #
      # if you change the base path you must create the directories below with write privileges
      #
      # set mirror_path $base_path/mirror
      # set skel_path $base_path/skel
      # set var_path $base_path/var
      # set cleanscript $var_path/clean.sh
      # set defaultarch <running host architecture>
      # set postmirror_script $var_path/postmirror.sh
      set run_postmirror 0
      set nthreads 20
      set _tilde 0
      #
      ############# end config ##############

      deb http://mirror.cc.columbia.edu/pub/linux/ubuntu/archive karmic main restricted
      deb http://mirror.cc.columbia.edu/pub/linux/ubuntu/archive karmic-updates main restricted

      clean http://mirror.cc.columbia.edu/pub/linux/ubuntu/archive
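
    Older apt-mirror releases simply do not fetch the dists/*/i18n/ trees, so one common workaround is to fetch them yourself from a postmirror.sh hook (run_postmirror would need to be set to 1 in mirror.list). The following is an untested sketch; the paths assume the default base_path and the Columbia mirror from the config above, and the suite/component lists must match your deb lines.

      #!/bin/sh
      # postmirror.sh sketch: pull the i18n/ (Translation-*) trees that apt-mirror skips.
      MIRROR=mirror.cc.columbia.edu/pub/linux/ubuntu/archive
      BASE=/var/spool/apt-mirror/mirror
      for suite in karmic karmic-updates; do
          for comp in main restricted; do
              # -r: recurse, -np: stay below i18n/, -N: only re-download newer files
              wget -q -r -np -N -R "index.html*" -P "$BASE" \
                  "http://$MIRROR/dists/$suite/$comp/i18n/"
          done
      done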

    Read the article

  • Samba Does Not List Files In Shared Directory

    - by Sean M
    I am running Samba on a CentOS server, and I am experiencing a problem where it allows me to connect to the server and see a share, but shows the share as an empty directory. I find this behavior strange. Here is the stanza in my smb.conf for the given share:

      [seanm]
      path = /home/seanm
      writeable = yes
      valid users = seanm, root
      read only = No

    Here's what I see on the server side:

      [seanm@server ~]$ ls -l
      -rw-r--r-- 1 seanm seanm 40 Jan 4 13:45 pangram.txt

    And yet:

      [seanm@client ~]$ smbclient //server/seanm -U seanm -W WORKGROUP
      Enter seanm's password:
      Domain=[WORKGROUP] OS=[Unix] Server=[Samba 3.0.33-3.29.el5_5.1]
      smb: \> ls
        .                   D        0  Fri Jan  7 10:08:55 2011
        ..                  D        0  Fri Jan  7 07:58:31 2011

            58994 blocks of size 262144. 50356 blocks available

    This behavior is present on both a Windows client and a Linux client system. The behavior is present with the firewall on and with the firewall off, so it's not that. Neither /var/log/messages nor /var/log/secure have any complaints about Samba. I doubt that SELinux is a problem; just in case, here are the relevant settings:

      [root@server ~]# getsebool -a | grep samba
      samba_domain_controller --> off
      samba_enable_home_dirs --> on
      samba_export_all_ro --> off
      samba_export_all_rw --> off
      samba_share_fusefs --> off
      samba_share_nfs --> off
      use_samba_home_dirs --> on
      virt_use_samba --> off

    What am I doing wrong here, and what can I do to fix it?
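
    Even with those booleans set, a mislabelled home directory can let smbd connect to the share yet refuse to list its contents, and SELinux AVC denials land in the audit log rather than /var/log/messages or /var/log/secure. A diagnostic sketch, assuming the audit tools are installed on the CentOS box:

      # Check the SELinux labels on the shared directory and its files
      ls -ldZ /home/seanm
      ls -lZ /home/seanm
      # Restore default labels in case files were copied in with the wrong context
      restorecon -Rv /home/seanm
      # Look for recent AVC denials involving smbd
      ausearch -m avc -ts recent | grep smbd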

    Read the article

  • Configure Apache + Passenger to serve static files from different directory

    - by Rory Fitzpatrick
    I'm trying to set up Apache and Passenger to serve a Rails app. However, I also need it to serve static files from a directory other than /public and give precedence to these static files over anything in the Rails app. The Rails app is in /home/user/apps/testapp and the static files are in /home/user/public_html. For various reasons the static files cannot simply be moved to the Rails public folder. Also note that the root http://domain.com/ should be served by the index.html file in the public_html folder. Here is the config I'm using:

      <VirtualHost *:80>
          ServerName domain.com
          DocumentRoot /home/user/apps/testapp/public

          RewriteEngine On
          RewriteCond /home/user/public_html/%{REQUEST_FILENAME} -f
          RewriteCond /home/user/public_html/%{REQUEST_FILENAME} -d
          RewriteRule ^/(.*)$ /home/user/public_html/$1 [L]
      </VirtualHost>

    This serves the Rails application fine but gives 404 for any static content from public_html. I have also tried a configuration that uses DocumentRoot /home/user/public_html, but this doesn't serve the Rails app at all, presumably because Passenger doesn't know to process the request. Interestingly, if I change the conditions to !-f and !-d and the rewrite rule to redirect to another domain, it works as expected (e.g. http://domain.com/doesnt_exist gets redirected to http://otherdomain.com/doesnt_exist). How can I configure Apache to serve static files like this, but allow all other requests to continue to Passenger?
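
    One detail that stands out: consecutive RewriteCond lines are ANDed by default, and a path cannot be both a regular file (-f) and a directory (-d), so the pair above can never both match; that would also explain why the negated !-f/!-d variant fires. Below is an untested sketch of the same idea with an [OR] flag, testing against the request URI instead; Apache would still need permission to read /home/user/public_html (e.g. via a matching <Directory> block).

      <VirtualHost *:80>
          ServerName domain.com
          DocumentRoot /home/user/apps/testapp/public

          RewriteEngine On
          # Serve the file from public_html if it exists there,
          # otherwise fall through to Passenger and the Rails app.
          RewriteCond /home/user/public_html%{REQUEST_URI} -f [OR]
          RewriteCond /home/user/public_html%{REQUEST_URI} -d
          RewriteRule ^/(.*)$ /home/user/public_html/$1 [L]
      </VirtualHost>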

    Read the article

  • Remove directory from URL IIS 7.5

    - by xalx
    I've tried to find a solution to this and found some guides out there, but none seem to work. I have the following URL: http://www.mysite.com/aboutus.html. However, some other sites link to my old hosted site and point to http://www.mysite.com/nw/aboutus.html. My issue is removing the 'nw' directory from those URLs. I have set up the following URL Rewrite rules in IIS, but they do not seem to do anything:

      <?xml version="1.0" encoding="UTF-8"?>
      <configuration>
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="Redirect all to root folder" enabled="true" stopProcessing="true">
                <match url="^nw$|^/nw/(.*)$" />
                <conditions>
                </conditions>
                <action type="Redirect" url="nw/{R:1}" />
              </rule>
              <rule name="RewriteToFile">
                <match url="^(?!nw/)(.*)" />
                <conditions>
                  <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                  <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                </conditions>
                <action type="Rewrite" url="/{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>
      </configuration>

    Any insight would be appreciated.
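
    Two things in the first rule look suspicious: URL Rewrite matches the URL without its leading slash, so the ^/nw/(.*)$ alternative never matches, and the action sends the request back to nw/{R:1} instead of stripping the prefix. An untested sketch of the rule going the other direction:

      <rule name="Strip nw prefix" stopProcessing="true">
        <match url="^nw/(.*)$" />
        <action type="Redirect" url="{R:1}" redirectType="Permanent" />
      </rule>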

    Read the article

  • Data recovery from corrupt Ubuntu partition/directory (question about a previous answer)

    - by JoshMaurice
    I have an Ubuntu installation that won't boot anymore. I asked my previous question about it here: http://superuser.com/questions/15916/ubuntu-chkdsk-equivalent

    Bolotov replied:

      As I see from your previous question you can boot Windows, so you could use dskprobe from the Windows XP Service Pack 2 Support Tools to make sure that the fs type is correct ... but it's already correct: fs type 7 is NTFS. The message "The type of the filesystem is RAW. CHKDSK is not available for RAW drives." means that Windows can't determine the fs type for some reason. As we see, the fs type is correct. To run chkdsk on your Windows partition you can install the Windows Recovery Console, boot into the recovery console and check your disk. After checking the disk you will gain access to your c:\ubuntu\disks. I think you can mount your Linux partition (which is in a file) as a usual loop-back device: mount -o loop [path to your linux-loopback-partition] But you should mount the Windows partition first.

    So now I'd like to know: within the recovery console I will be issuing the commands "chkdsk -r", then "mount -o loop [path to windows partition]" and then "mount -o loop c:\ubuntu\disks", correct? I do have a ("corrupt and unreadable") c:\ubuntu\disks directory, so that appears to be the correct path to the Linux partition; do you know the path to the Windows partition? Would that be just "c:\"?

    Read the article

  • nginx giving 404 when accessing php from alias directory

    - by code90
    I am trying to migrate from Apache to nginx. The PHP sites that I am hosting need to access a shared library, which turns out to be an alias directory. Below is the configuration I came up with. HTML files work fine, but PHP files give 404. I have read through and tried most (if not all) of the answers to similar questions without any success. Any hint on what might be causing the issue in my case?

      location /wtlib/ {
          alias /var/www/shared/wtlib_4/;
          index index.php;
      }

      location ~ /wtlib/.*\.php$ {
          alias /var/www/shared/wtlib_4/;
          try_files $uri =404;

          if ($fastcgi_script_name ~ /wtlib(/.*\.php)$) {
              set $valid_fastcgi_script_name $1;
          }

          fastcgi_pass 127.0.0.1:9013;
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME /var/www/shared/wtlib_4$valid_fastcgi_script_name;
          fastcgi_param REDIRECT_STATUS 200;

          include /etc/nginx/fastcgi_params;
      }

    Thanks all!

    Update: the following seems to be working fine:

      location /wtlib/ {
          alias /usr/share/php/wtlib_4/;

          location ~* .*\.php$ {
              try_files $uri @php_wtlib;
          }

          location ~* \.(html|htm|js|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air)$ {
              expires 7d;
              access_log off;
          }
      }

      location @php_wtlib {
          if ($fastcgi_script_name ~ /wtlib(/.*\.php)$) {
              set $valid_fastcgi_script_name $1;
          }

          fastcgi_pass $byr_pass;
          fastcgi_index index.php;
          fastcgi_param SCRIPT_FILENAME /usr/share/php/wtlib_4$valid_fastcgi_script_name;
          fastcgi_param REDIRECT_STATUS 200;

          include /etc/nginx/fastcgi_params;
      }

    Read the article

  • File system that allows specifying a different RAID level per directory and changing it afterward

    - by Adam Ryczkowski
    I have 5 hard drives where I want to keep my data. Some of my files are more important and some are less, so some of them I wish to put on RAID-6, while for others RAID-5 is sufficient. It is difficult to predict, at the moment the arrays are created, how much space of each type to declare.

    What I would do if I hadn't heard about ZFS is partition the hard drives into identical 100GB partitions and, as my needs grow, assemble those partitions into md devices using linux-raid. Then I'd combine those devices using LVM into logical volumes where I'd put my data. So when I needed more space of e.g. RAID-6, I'd take a 100GB partition from each hard drive, assemble them into another RAID-6 md device and use it as physical storage for the volume group dedicated to RAID-6 data. Then I could grow the file system on this logical volume. On top of the RAID-6 and RAID-5 volume groups (managed by LVM) would reside completely independent file systems, which I'd later merge with multiple mount --bind into a single directory structure that would reflect the logical structure of the data rather than that of the storage.

    But now that I've heard about ZFS with all its performance, data-healing and compression capabilities, I cannot stop thinking about whether it can help me. If so, what do you think would be the best setup?
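
    For comparison, redundancy in ZFS is fixed per vdev when the pool is built, not per directory, so "RAID-6 here, RAID-5 there" would mean either two separate pools (raidz2 and raidz) carved out of partitions, or a single pool plus the per-dataset copies property for the more valuable data. A rough, untested sketch with placeholder device names:

      # One raidz2 (RAID-6-like) pool across the five disks
      zpool create tank raidz2 sda sdb sdc sdd sde

      # Per-dataset knob: keep two copies of everything under tank/important
      zfs create tank/important
      zfs set copies=2 tank/important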

    Read the article

  • Simple Distributed Disconnected way to sync a directory

    - by Rory
    I want to start regularly backing up my home directory on my Ubuntu laptop, machine X. Suppose I have access to 2 different remote (Linux) servers that I can back up to, machines A & B. Machine X will be the master and should be synced to A and B. I could just regularly run rsync from X to A and then from X to B. That's all I need.

    However, I'm curious if there's a more bandwidth-efficient, and hence faster, way to do it. Assuming X is going to be on a residential-style broadband line, and since I don't want to soak up the bandwidth, I would limit the transfer from X. A and B will be on all the time, but X will not be, so I'd also like to reduce the amount of time that X spends transferring, potentially allowing A and B to spend more time transferring. Also, X won't be connected all the time.

    What's the best way to do this? rsync from X to A, then from A to B? Timing that right could be troublesome. I don't want to keep old files around, so if I were to rsync, the --del option would be used. Could that mean something might get transferred from A to B, then deleted from B, then transferred from A to B again? That's suboptimal. I know there are fancy distributed filesystems like gluster, but I think that's overkill in this case, and it might not fit with the disconnected nature.
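
    A minimal sketch of the "X to A, then A to B" relay with a bandwidth cap on the leg that leaves the laptop (host and user names here are made up; --bwlimit is in KB/s, and --delete plays the role of --del):

      # Leg 1: laptop X pushes to A, rate-limited so the home connection stays usable
      rsync -az --delete --bwlimit=50 ~/ backup@hostA:backups/home-x/

      # Leg 2: A relays to B at full speed, with no laptop involvement needed
      ssh backup@hostA 'rsync -az --delete backups/home-x/ backup@hostB:backups/home-x/'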

    Read the article

  • Urgent: how to deny read access to an ExecCGI directory

    - by Malvolio
    First, I can't believe that that isn't the default behavior. Second, yikes! I don't know how long my code's been hanging out there, with all sorts of cool secret stuff, just waiting for some hacker who knows Apache better than I do.

    EDIT (and apology): Well, this is sort of embarrassing. Here's what happened: we had some Python scripts available to the web at /aux/file.py, which were, not surprisingly, at /var/www/http/aux. Separately, we were running an app server, and Apache proxies through at /servlets/. A contractor had constructed the WAR file by bundling up all the generated files, including the Python files (which are in a directory also called aux, not surprisingly), so if you typed in /servlets/aux/file.py, the web server would ask the app server for it and the app server would just supply the file. It was the latter URL that I happened to type in by accident this morning, and lo, the source appeared. Until I realized the sheer unlikelihood of what I had done, the situation was rating about 8.3 on the sphincter scale. After a tense half-hour or so I realized that it had nothing to do with CGI (and that serving files that were also executable would be not only foolish but also impossible), and I was able to address the real problems.

    So -- sorry, everybody. Let the scorn-fest commence.

    Read the article

  • nginx root directory not forwarding correctly

    - by user66700
    The server files are stored in /var/www/. Everything was working perfectly; then I've been getting the following errors:

      2011/01/28 17:20:05 [error] 15415#0: *1117703 "/var/www/https:/secure.domain.com/index.html" is not found (2: No such file or directory), client: 119.110.28.211, server: secure.domain.com, request: "HEAD /https://secure.domain.com/ HTTP/1.1", host: "secure.domain.com"

    Here's my config:

      server {
          server_name secure.domain.com;
          listen 443;
          listen [::]:443 default ipv6only=on;

          gzip on;
          gzip_comp_level 1;
          gzip_types text/plain text/html text/css application/x-javascript text/xml text/javascript;

          error_log logs/ssl.error.log;

          gzip_static on;
          gzip_http_version 1.1;
          gzip_proxied any;
          gzip_disable "msie6";
          gzip_vary on;

          ssl on;
          ssl_ciphers RC4:ALL:-LOW:-EXPORT:!ADH:!MD5;
          keepalive_timeout 0;
          ssl_certificate /root/server.pem;
          ssl_certificate_key /root/ssl.key;

          location / {
              root /var/www;
              index index.html index.htm index.php;
          }
      }
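
    Note that the logged request line is "HEAD /https://secure.domain.com/", i.e. the client itself is asking for a path that contains a full URL, and nginx is appending that path to the root. Independent of that, a common restructuring (a sketch, not a guaranteed fix) is to move root and index up to the server level and let location / simply test for the files:

      server {
          server_name secure.domain.com;
          listen 443;
          # ssl on;  ssl_certificate ...;  gzip ...;  (as in the original config)

          root  /var/www;
          index index.html index.htm index.php;

          location / {
              try_files $uri $uri/ =404;
          }
      }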

    Read the article

  • JAAS and WebLogic 10.3: Granting specific codebase permissions to a JAR bundled within an EAR

    - by Jason
    Here's my scenario: I have a JAR within the APP-INF/lib of my EAR, to be deployed within WebLogic 10g Release 3, against which I wish to grant specific permissions, e.g.:

      grant codebase "file:/c:/somedir/my.jar" {
          permission java.net.SocketPermission "*:-", "accept,connect,listen,resolve";
          permission java.net.SocketPermission "localhost:-", "accept,connect,listen,resolve";
          permission java.net.SocketPermission "127.0.0.1:-", "accept,connect,listen,resolve";
          permission java.net.SocketPermission "230.0.0.1:-", "accept,connect,listen,resolve";
          permission java.util.PropertyPermission "*", "read,write";
          permission java.lang.RuntimePermission "*";
          permission java.io.FilePermission "<<ALL FILES>>", "read,write,delete";
          permission javax.security.auth.AuthPermission "*";
          permission java.security.SecurityPermission "*";
      };

    Questions:

    1. Where is the best place to define this grant - in the java.policy of the JRE, the WL server's weblogic.policy, or within an XML packaged within the EAR?
    2. How do I define the codebase URL to the JAR? The examples I have seen have an explicit reference to the JAR on the file system, but I am deploying the JAR packaged up inside an EAR.

    Thanks!

    Read the article

  • Thin permissions in etc folder (Ubuntu)

    - by Apollo
    I am working on a RoR server setup that uses Thin and nginx. It works fine, but only if I manually add the folder /etc/thin and set its permissions to 777 in order to use the command below:

      thin config -C /etc/thin/testapp.yml -c /var/www/testapp --servers 1 -e production

    If I don't set it to 777, I get this error:

      me@UbuntuRails:/etc$ thin config -C /etc/thin/testapp.yml -c /var/www/testapp --servers 1 -e production
      /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/gems/thin-1.5.0/lib/thin/controllers/controller.rb:115:in `initialize': Permission denied - /etc/thin/testapp.yml (Errno::EACCES)
          from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/gems/thin-1.5.0/lib/thin/controllers/controller.rb:115:in `open'
          from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/gems/thin-1.5.0/lib/thin/controllers/controller.rb:115:in `config'
          from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/gems/thin-1.5.0/lib/thin/runner.rb:187:in `run_command'
          from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/gems/thin-1.5.0/lib/thin/runner.rb:152:in `run!'
          from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/gems/thin-1.5.0/bin/thin:6:in'
          from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/bin/thin:19:in `load'
          from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/bin/thin:19:in'
          from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/bin/ruby_noexec_wrapper:14:in `eval'
          from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/bin/ruby_noexec_wrapper:14:in'

    I don't like setting this folder to 777; it sounds like a rubbish workaround. I run everything from an admin user account, not root. RVM runs from my admin user and gem only works under my admin user as well. If I sudo the command, nothing happens, because my root doesn't "know" thin. What is the correct way to handle this? Thanks!
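
    One way to avoid 777 entirely (a sketch; 'deploy' stands in for whatever account runs the thin config command) is to create /etc/thin once as root and hand ownership to that account, so 755 is enough:

      sudo mkdir -p /etc/thin
      sudo chown deploy:deploy /etc/thin   # or chgrp + g+w if several users need to write here
      sudo chmod 755 /etc/thin
      # then, as the deploy user:
      thin config -C /etc/thin/testapp.yml -c /var/www/testapp --servers 1 -e production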

    Read the article

  • How to set permissions or alter a git commit process when using local repositories

    - by Tony
    I have a server that contains a central git repository and one of my co-worker's development environments. My co-worker's repository's origin is the central git repository, and he pushes there when he has some code to share. Likewise, I develop locally and push to the central git repository when I have some code to share, so my repository's origin is also the central git repository.

    The issue is that I have the central git repository under a "git" user's home directory. So when I push, I am actually SSH'ing into the server as the "git" user. To be even more clear, my config has these lines:

      $ more .git/config
      [remote "origin"]
              fetch = +refs/heads/*:refs/remotes/origin/*
              url = [email protected]:fsg
      [branch "master"]
              remote = origin
              merge = refs/heads/master

    When I push, git handles this SSH + push seamlessly with, I am guessing, some sort of git shell. The issue is that when my co-worker pushes, he is logged in as himself and gets a bunch of crazy permission errors. Is there a typical way to solve this problem without opening up git's directories to a group? I think that would be problematic when I push and therefore overwrite the repository, and those permissions get reset. Thanks!
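
    One common arrangement (a sketch, with placeholder key file and host names) is for everyone to push as the single "git" user: the co-worker's public key is added to that user's authorized_keys, and his remote URL points at the git user just like the config above, so the repository itself never needs group-writable permissions:

      # On the server, as the git user:
      cat /tmp/coworker_id_rsa.pub >> ~/.ssh/authorized_keys

      # On the co-worker's machine:
      git remote set-url origin git@server:fsg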

    Read the article

  • Google Script / Spreadsheet -- Shared permissions with Installed Trigger onEdit

    - by user1761852
    I'm using an installed trigger inside the spreadsheet to call onUpdateBilling(). The purpose of this script is, on edit, to highlight the entire column in a predetermined color based on the content of the "billed" column (i.e. "d"). The page running the script is shared with collaborators and they have been given edit access. My expectation at this point is that the script should run with owner permissions. My shared users are unable to run the script; they get the error "You don't have permission for this action." I've reached the limit of my knowledge and google-fu for this workaround. Any help to allow the operation for my collaborators is appreciated. Script:

      function onUpdateBilling(e) {
        var statusCol = 16;            // replace with the column index of Status column A=1, B=2, etc
        var sheetName = "Temple Log";  // replace with actual name of sheet containing Status

        var cell = SpreadsheetApp.getActiveSheet().getActiveCell();
        var sheet = cell.getSheet();
        if (cell.getColumnIndex() != statusCol || sheet.getName() != sheetName) return;

        var row = cell.getRowIndex();
        var status = cell.getValue();

        // change colors to meet your needs
        var color;
        if (status == "D" || status == "d") { color = "red"; }
        else if (status >= 1) { color = "yellow"; }
        else if (status == "X" || status == "x") { color = "black"; }
        else if (status == "") { color = "white"; }
        else { color = "white"; }

        sheet.getRange(row + ":" + row).setBackgroundColor(color);
      }
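
    For what it's worth, an installable trigger runs with the authorization of the account that created it, so it generally needs to be created while logged in as the spreadsheet owner. A hedged sketch of installing it programmatically instead of through the trigger dialog (run once, as the owner, from the script editor):

      // Run once from the script editor while logged in as the spreadsheet owner.
      function installOnEditTrigger() {
        ScriptApp.newTrigger('onUpdateBilling')
            .forSpreadsheet(SpreadsheetApp.getActive())
            .onEdit()
            .create();
      }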

    Read the article

  • Still don't understand file upload-folder permissions

    - by Camran
    I have checked out articles and tutorials, but I still don't know what to do about the security of my picture upload folder. It holds pictures for classifieds, which are uploaded into the folder. This is what I want:

    - Anybody may upload images to the folder.
    - The images will be moved to another folder later on by another PHP script (automatically).
    - Only I may manually remove them, as well as another PHP file on the server which automatically empties the folder after x days.

    The images are uploaded via a PHP upload script. This script checks whether the extension of the file is actually a valid image file. When I try this:

      chmod 755 images

    the images won't be uploaded. But like this it works:

      chmod 777 images

    But 777 is a security risk, right? Please give me detailed information. The question is what to do to solve this problem, not info about what permissions exist, etc. Thanks. If you need more info, let me know.
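
    A common way out of the 755-vs-777 dilemma (a sketch; "www-data" is the usual Apache/nginx account on Debian/Ubuntu, and the name differs on other distros) is to make the web server user the owner of the folder, so 755 is enough for the PHP upload script while everyone else stays read-only:

      sudo chown www-data:www-data images
      sudo chmod 755 images   # owner (the web server) can write; group and others can only read/enter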

    Read the article

  • declarative_authorization permissions on roles

    - by William
    Hey all, I'm trying to add authorization to a rather large app that already exists, but I have to obfuscate the details a bit. Here's the background: in our app we have a number of roles that are hierarchical, roughly like this:

      BasicUser -> SuperUser -> Admin -> SuperAdmin

    For authorization, each User model instance has an attribute 'role' which corresponds to the above. We have a RESTful controller "Users" that is namespaced under Backoffice. So in short it's Backoffice::UsersController:

      class Backoffice::UsersController < ApplicationController
        filter_access_to :all
        #... RESTful actions + some others
      end

    So here's the problem: we want users to be able to give permissions for users to edit users, but ONLY if they have a 'smaller' role than they currently have. I've created the following in authorization_rules.rb:

      authorization do
        role :basic_user do
          has_permission_on :backoffice_users, :to => :index
        end

        role :super_user do
          includes :basic_user
          has_permission_on :backoffice_users, :to => :edit do
            if_attribute :role => is_in { %w(basic_user) }
          end
        end

        role :admin do
          includes :super_user
        end

        role :super_admin do
          includes :admin
        end
      end

    And unfortunately that's as far as I got; the rule doesn't seem to get applied:

    - If I comment the rule out, nobody can edit.
    - If I leave the rule in, you can edit everybody.

    I've also tried a couple of variations on the if_attribute:

      if_attribute :role => is { 'basic_user' }
      if_attribute :role => 'basic_user'

    and they get the same effect. Does anybody have any suggestions?

    Read the article

  • [NAnt] About "nant::get-base-directory()"

    - by Nam Gi VU
    As explained at http://nant.sourceforge.net/release/latest/help/functions/nant.get-base-directory.html, the meaning of this function is: "The base directory of the appdomain in which NAnt is running." I don't know what "appdomain" means. Could someone please explain it to me? Thank you.

    Read the article

  • Visual Studio 2008 AutoRecover directory location change

    - by Spooky2010
    Howdy. I'm using VS2008 SP1 with C#. Does anyone know how to change the directory that AutoRecover files are saved to in Visual Studio? The specific directory, which should be on my C: drive under My Documents, is always empty even when the feature is turned on. There were some changes to our network environment, and it seems my AutoRecover files are now being mapped to a network drive; I need to change it back. I can't find anything on how to do this in my searches. If anyone can help, please do. Thanks for any help.

    Read the article

  • Quickest way to find the oldest file in a directory using Delphi

    - by Pieter van Wyk
    Hi. We have a large number of remote computers that capture video onto disk drives. Each camera has its own unique directory, and there can be up to 16 directories on any one disk. I'm trying to locate the oldest video file on the disk, but using FindFirst/FindNext to compare the file creation DateTime takes forever. Does anybody know of a more efficient way of finding the oldest file in a directory? We connect remotely to the PCs from a central HO location. Regards, Pieter

    Read the article

  • Beginner - C# iteration through directory to produce a file list

    - by dassouki
    The end goal is to have some form of data structure that stores the hierarchical structure of a directory, to be written out to a txt file. I'm using the following code so far, and I'm struggling with combining dirs, subdirs, and files.

      /// <summary>
      /// code based on http://msdn.microsoft.com/en-us/library/bb513869.aspx
      /// </summary>
      /// <param name="strFolder"></param>
      public static void TraverseTree(string strFolder)
      {
          // Data structure to hold names of subfolders to be
          // examined for files.
          Stack<string> dirs = new Stack<string>(20);

          if (!System.IO.Directory.Exists(strFolder))
          {
              throw new ArgumentException();
          }
          dirs.Push(strFolder);

          while (dirs.Count > 0)
          {
              string currentDir = dirs.Pop();
              string[] subDirs;
              try
              {
                  subDirs = System.IO.Directory.GetDirectories(currentDir);
              }
              catch (UnauthorizedAccessException e)
              {
                  MessageBox.Show("Error: " + e.Message);
                  continue;
              }
              catch (System.IO.DirectoryNotFoundException e)
              {
                  MessageBox.Show("Error: " + e.Message);
                  continue;
              }

              string[] files = null;
              try
              {
                  files = System.IO.Directory.GetFiles(currentDir);
              }
              catch (UnauthorizedAccessException e)
              {
                  MessageBox.Show("Error: " + e.Message);
                  continue;
              }
              catch (System.IO.DirectoryNotFoundException e)
              {
                  MessageBox.Show("Error: " + e.Message);
                  continue;
              }

              // Perform the required action on each file here.
              // Modify this block to perform your required task.
              /*
              foreach (string file in files)
              {
                  try
                  {
                      // Perform whatever action is required in your scenario.
                      System.IO.FileInfo fi = new System.IO.FileInfo(file);
                      Console.WriteLine("{0}: {1}, {2}", fi.Name, fi.Length, fi.CreationTime);
                  }
                  catch (System.IO.FileNotFoundException e)
                  {
                      // If file was deleted by a separate application
                      // or thread since the call to TraverseTree()
                      // then just continue.
                      MessageBox.Show("Error: " + e.Message);
                      continue;
                  }
              }
              */

              // Push the subdirectories onto the stack for traversal.
              // This could also be done before handling the files.
              foreach (string str in subDirs)
                  dirs.Push(str);

              foreach (string str in files)
                  MessageBox.Show(str);
          }
      }
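
    If the end goal is just a text file that reflects the hierarchy, a recursive variant may be simpler than the stack-based MSDN sample. Here is a minimal, illustrative sketch (the class name and output path are made up) that writes one line per directory or file, indented by depth:

      using System;
      using System.IO;

      class TreeDumper
      {
          static void Main()
          {
              using (StreamWriter writer = new StreamWriter(@"C:\temp\tree.txt"))
              {
                  Dump(@"C:\some\folder", writer, 0);
              }
          }

          static void Dump(string dir, StreamWriter writer, int depth)
          {
              string indent = new string(' ', depth * 2);
              // Write the directory itself, then its files one level deeper, then recurse.
              writer.WriteLine(indent + Path.GetFileName(dir.TrimEnd(Path.DirectorySeparatorChar)));
              try
              {
                  foreach (string file in Directory.GetFiles(dir))
                      writer.WriteLine(indent + "  " + Path.GetFileName(file));
                  foreach (string sub in Directory.GetDirectories(dir))
                      Dump(sub, writer, depth + 1);
              }
              catch (UnauthorizedAccessException)
              {
                  // Skip folders we are not allowed to read, as the original sample does.
              }
          }
      }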

    Read the article

  • Deploying war internals in webapp directory in Maven

    - by soumik
    I have a base project which has a dependency on a custom war that is downloaded from a remote repository. I'm able to download and run the war; however, the custom war has some HTML and JS files which I need extracted into my base project's webapp directory. How do I specify a location for custom.war to be exploded into a specific directory?
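
    One hedged way to do this (untested; the coordinates, includes and output path below are placeholders) is the maven-dependency-plugin's unpack goal, which can extract selected files from the war dependency into a directory of your choosing during the build. The maven-war-plugin's overlay support is another option designed for sharing content between wars.

      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-dependency-plugin</artifactId>
        <executions>
          <execution>
            <id>unpack-custom-war</id>
            <phase>generate-resources</phase>
            <goals>
              <goal>unpack</goal>
            </goals>
            <configuration>
              <artifactItems>
                <artifactItem>
                  <!-- placeholder coordinates; the version is normally resolved
                       from the project's declared dependency on the war -->
                  <groupId>com.example</groupId>
                  <artifactId>custom</artifactId>
                  <type>war</type>
                  <includes>**/*.html,**/*.js</includes>
                  <outputDirectory>${project.build.directory}/custom-war</outputDirectory>
                </artifactItem>
              </artifactItems>
            </configuration>
          </execution>
        </executions>
      </plugin>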

    Read the article

  • Override the Local module directory for cvs using Hudson

    - by Roberto
    Hi guys, I'm using Hudson and I need to change the checkout directory for CVS. Instead of checking out/updating the project directly under the workspace dir, I'd like to specify a dir (as you can do for SVN by changing the "Local module directory" setting) that will match the CVS tree structure. E.g. the project lives at dir1/dir2/project in CVS, and on my box it should end up at workspace/dir1/dir2/project. Is that possible with CVS and Hudson? Maybe there's a way to override the cvs call? Thanks! Roberto

    Read the article
