Search Results

Search found 21072 results on 843 pages for 'thin client'.


  • VSFTPD - FTP over TLS - Upload stops after exactly 82k?

    - by Redsandro
    I installed a VSFTPD daemon on a CentOS server, using an RSA certificate for logging in over explicit TLS. Now I cannot upload more than 82k. With files under that limit there is no problem; the FTP works like a charm. But as soon as a file reaches 82k (81,952 bytes to be exact, with FileZilla), the transfer stops and the FTP client hangs until the timeout is reached. FTP client console:

        15:10:21 Command: STOR jquery-1.7.2.min.js
        15:10:21 Response: 150 Ok to send data.
        15:11:21 Error: Connection timed out
        15:11:21 Error: File transfer failed after transferring 82 KB in 60 seconds

    /var/log/vsftpd.log:

        FTP command: Client "x.x.x.x", "STOR jquery-1.7.2.min.js"
        FTP response: Client "x.x.x.x", "150 Ok to send data."
        OK UPLOAD: Client "x.x.x.x", "jquery-1.7.2.min.js", 81952 bytes, 1.32Kbyte/sec
        FTP response: Client "x.x.x.x", "226 File receive OK."   // NOT okay, file is bigger
        // No mention of error here

    I cannot find relevant info about this problem, apart from a possible problem with trans_chunk_size (not mentioned in the default config), but I tried different sizes and it has no impact on the problem:

        trans_chunk_size=4096
        trans_chunk_size=8192
        trans_chunk_size=9999

    Of course, after every configuration change I restarted the server: /etc/init.d/vsftpd restart. What else can cause this? It's not the latest version, but it's the latest update within the repositories that has been deemed fit for enterprise usage. Package info:

        $ yum info vsftpd
        Loaded plugins: fastestmirror
        Installed Packages
        Name        : vsftpd
        Arch        : x86_64
        Version     : 2.0.5
        Release     : 24.el5_8.1
        Size        : 286 k
        Repo        : installed
        Summary     : vsftpd - Very Secure Ftp Daemon
        URL         : http://vsftpd.beasts.org/
        License     : GPL
        Description : vsftpd is a Very Secure FTP daemon. It was written completely from scratch.
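
    A common cause of FTPS transfers that stall partway through is a firewall or NAT device that can no longer track the encrypted data connection. If that is the case here, pinning vsftpd to a fixed passive port range and opening that range on the firewall is a typical first step. A minimal sketch, with the port numbers and address as placeholder values:

        pasv_enable=YES
        pasv_min_port=64000
        pasv_max_port=64100
        # only needed when the server sits behind NAT:
        # pasv_address=x.x.x.x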

    Read the article

  • Synergy: Cannot send media keys from Linux to Mac

    - by CraftyThumber
    I have a Linux Synergy server (Si-Linux) serving just one Mac client, a MacBook Pro UK (SiBook-Pro.local). On the Linux server I am using a USB Apple keyboard with the exact layout of the laptop's keyboard (the compact UK aluminium keyboard). I would like to send the media keys to the Mac client at all times, and I have attempted the following in my synergy.conf:

        keystroke(AudioPlay) = keystroke(AudioPlay,SiBook-Pro.local)

    This did not seem to work, so I ran both the server and client as foreground processes with debugging enabled and observed the following.

    Server log:

        DEBUG1: activate actions
        DEBUG1: hotkey: keyDown(AudioPlay,SiBook-Pro.local)
        DEBUG1: onKeyDown id=57523 mask=0x0000 button=0x0000
        DEBUG1: send key down to "SiBook-Pro.local" id=57523, mask=0x0000, button=0x0000
        DEBUG1: deactivate actions
        DEBUG1: hotkey: keyUp(AudioPlay,SiBook-Pro.local)
        DEBUG1: onKeyUp id=57523 mask=0x0000 button=0x0000
        DEBUG1: send key up to "SiBook-Pro.local" id=57523, mask=0x0000, button=0x0000

    Client log:

        DEBUG1: recv key down id=0x0000e0b3, mask=0x0000, button=0x0000
        DEBUG1: mapKey e0b3 (57523) with mask 0000, start state: 0000
        DEBUG1: key e0b3 is not on keyboard
        DEBUG1: recv key up id=0x0000e0b3, mask=0x0000, button=0x0000
        DEBUG1: recv enter, 1279,386 5 2000

    As you can see, the client claims the key received is not on the keyboard. I don't understand, since it is the same key as on the MacBook's own keyboard. I tried reversing the client-server config to see if I could capture the key being sent when I press the Play button on the MacBook, but the key doesn't seem to even make it to Synergy. Almost all key presses get logged, but the media keys seem to bypass the logs and just execute their function locally. E.g. with the MacBook as the server, I press Play on the MacBook, the key plays music on the MacBook, and the key is never logged to the debug log.

    Read the article

  • LDAP SSL connect problem

    - by juergen
    I set up a test domain for my LDAP SSL tests and it is not working. I am using Windows Server 2008 R2 SP1. This is how far I have come:

    1. I generated and installed my self-signed certificate on the test domain controller.
    2. On the server I can log into LDAP over SSL with the Microsoft ldp.exe tool.
    3. Using ldp.exe on a client that is not in this domain, the login fails with error 0x51 = "failed to connect". (I don't have a client computer that is in this domain right now.)
    4. I tested the certificate by using it in IIS on the test server, and I can reach the default page of the test server over SSL (from the client that is not in the domain).
    5. Analysing the traffic between client and server, I can see that the server is sending a certificate to the client.

    Why isn't this working on my client computer?
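
    Unlike a browser hitting the IIS test page, ldp.exe will not complete an SSL bind unless the client machine actually trusts the certificate's issuer, so a self-signed certificate installed only on the domain controller is a likely cause of error 0x51 from the non-domain client. A hedged check: export the certificate (or its issuing root) and import it into the client's Local Computer Trusted Root store, then retry ldp.exe against the server's full FQDN on port 636. The file name below is a placeholder:

        certutil -addstore -f Root testdc-root.cer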

    Read the article

  • SBS2011 Standard DNS suddenly not resolving some domains

    - by Matt
    Suddenly today I am unable to resolve common domains like serverfault.com, facebook.com; but other domains like google.com, cnn.com work fine. This is on a client machine (Win7 Pro) connected to an SBS2011 Standard domain. The only DNS server is the SBS2011 server. The same domains work fine on all client PCs I have tried, and the same ones do not work. Using nslookup, I get 'no such domain' errors for facebook.com, and the correct DNS entries for the ones that do work. When I add Google's Public DNS to my client PC as a backup (primary = local SBS server, secondary = 8.8.8.8), everything works fine for my client PC, but querying from the SBS server directly or from other client PCs are broken (so I don't believe it's a firewall issue). My main question is how can I see what servers the SBS2011 server queries if it doesn't know about a domain? There is nothing in our firewall logs that say it blocked any DNS-based packets, but I also wanted to query based on the IP/FQDN on the servers that the SBS server was likely to contact to find out about facebook.com for example. Update 23/05/2012: It appears DNS is working again this morning for the affected websites. Both the DC on its own and all client PCs can once again access the websites that were not loading last night, as well as the websites that were working. I haven't changed anything overnight, so it appears that there was some kind of temporary glitch, but I can't understand what would have caused it on the network.
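
    To watch exactly how the SBS server tries to resolve a name it is not authoritative for, a debug-level query aimed directly at the server shows each forwarder or root-hint response as it comes back, and flushing the server-side cache rules out a stale negative entry from the earlier outage. Commands assumed to be run on the SBS box itself:

        nslookup -d2 facebook.com 127.0.0.1
        dnscmd /clearcache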

    Read the article

  • Can I configure Thunderbird 3 to refresh the folder list for an Exchange IMAP account?

    - by Howiecamp
    Background: When used as an IMAP client against Gmail, Thunderbird 3 (this may be the case in v2 also, not sure) will refresh its list of folders (the folders correspond to Gmail labels) when you do "Download/Sync Now..." or restart the Thunderbird client. Any new folders (labels) created in Gmail will sync to the client, and any folders moved/changed/deleted in Gmail will move/change/delete on the client as well. (Note: Thunderbird has the concept of "subscribing" to IMAP folders, presumably allowing you to determine which folders you want rather than bringing all of them down and dragging loads of data across the wire. When used against Gmail, Thunderbird appears to automatically subscribe to all folders, including folders newly created in Gmail, so this might be why the refresh happens properly.) This behavior is what I want with Exchange. When using Thunderbird with Exchange (2007), the folder list doesn't refresh when folders are added/changed/deleted on the server and/or from a different mail client. When I look at the subscription options, some are checked and some are not (not sure why Thunderbird picked some and not others). And when I add new folders on the server and/or from another client, they never even appear in Thunderbird's list of folders, preventing me from subscribing to them.

    Read the article

  • OpenVPN, Great on Windows, VERY slow on Mac...

    - by Phsion
    Hello, I'm not really an IT pro, but this seemed like the best place to ask this question. I have set up VPN networks in the past, for fun, and everything was great, but now I've set one up for my boss, and while my computers all work great, his Mac machines are almost too slow to work with. It's pretty much vanilla configs all around; anyone have any ideas? It's a TUN routing setup over UDP.

    Back story: My boss travels a lot and wants to be able to access all his files from the road, and is also pretty paranoid about security (even though he knows almost nothing about computers). So I figured a VPN would be the answer. I went with OpenVPN, but there are some other issues. The only ISP we can get in our area besides dial-up is a crappy satellite provider that doesn't offer public IPs unless you're willing to pay, so while the computers and VPN setup are pretty vanilla, the routing and structure is strange to get around this limitation.

    Specs: It's OpenVPN 2, and there are six machines using it (only three actually use it, the rest are my test machines): one Windows 7 laptop, two XP desktops, one OS X 10.5 desktop, one 10.6 desktop, and one 10.6 laptop. One XP desktop sits at my house and acts as the server (6Mbs/2Mbs FIOS connection). One XP desktop sits at the office and hosts a webpage that will wake up the main Mac desktop from sleep, and also ping all the machines on the VPN and show their status. The main office Mac (10.6) stays in sleep mode until it gets the Wake-on-LAN packet from the office XP, and then it auto-connects to the VPN and opens itself up. The reason for all this is that the satellite private IP crap means I can't directly access the office machines from outside the LAN, so everyone connects to my house first, then they talk to each other from there. The Wake-on-LAN weirdness is because my boss doesn't want to leave the main Mac on all the time, and making a quick and dirty webpage was the easiest way to send a magic packet from inside the LAN without confusing my boss. The VPN uses client config files to give the clients static IPs. The only thing I found in Google was some changes to the VPN MTU settings (down to 1400) but no real help. Oh, and I forgot: all the Windows machines just run OpenVPN as a service. The Mac laptop uses Tunnelblick (an OpenVPN GUI) and the Mac desktops use OpenVPN in normal command-line mode.

    Server config:

        tun-mtu 1500
        fragment 1450
        mssfix 1450
        management localhost ####
        port ####
        proto udp
        dev tun
        ca #######
        cert #######
        key ######
        dh ######
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        client-config-dir ccd
        route 10.8.0.0 255.255.255.252
        client-to-client
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status openvpn-status log

    Client configs (all are simple variations on this):

        tun-mtu 1500
        fragment 1450
        mssfix 1450
        client
        dev tun
        proto udp
        remote ######## ####
        resolv-retry infinite
        nobind
        persist-key
        presist-tun
        ca #####
        cert #####
        key #####
        ns-cert-type server
        comp-lzo
        verb 3
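
    Two things worth checking against the configs above: the client files contain "presist-tun", which looks like a typo for persist-tun, and since the only lead so far is MTU-related, the usual experiment is to lower the tunnel MTU and fragment size consistently (the fragment value has to match on the server and on every client). A hedged sketch of values to try on both sides, plus OpenVPN's built-in probe:

        tun-mtu 1400
        fragment 1300
        mssfix 1300
        # one-off empirical MTU discovery (remove after testing):
        # mtu-test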

    Read the article

  • Disable .htaccess from apache allowoverride none, still reads .htaccess files

    - by John Magnolia
    I have moved all of our .htaccess config into <Directory> blocks and set AllowOverride None in the default and default-ssl. Although after restarting apache it is still reading the .htaccess files. How can I completely turn off reading these files? Update of all files with "AllowOverride" /etc/apache2/mods-available/userdir.conf <IfModule mod_userdir.c> UserDir public_html UserDir disabled root <Directory /home/*/public_html> AllowOverride FileInfo AuthConfig Limit Indexes Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec <Limit GET POST OPTIONS> Order allow,deny Allow from all </Limit> <LimitExcept GET POST OPTIONS> Order deny,allow Deny from all </LimitExcept> </Directory> </IfModule> /etc/apache2/mods-available/alias.conf <IfModule alias_module> # # Aliases: Add here as many aliases as you need (with no limit). The format is # Alias fakename realname # # Note that if you include a trailing / on fakename then the server will # require it to be present in the URL. So "/icons" isn't aliased in this # example, only "/icons/". If the fakename is slash-terminated, then the # realname must also be slash terminated, and if the fakename omits the # trailing slash, the realname must also omit it. # # We include the /icons/ alias for FancyIndexed directory listings. If # you do not use FancyIndexing, you may comment this out. # Alias /icons/ "/usr/share/apache2/icons/" <Directory "/usr/share/apache2/icons"> Options Indexes MultiViews AllowOverride None Order allow,deny Allow from all </Directory> </IfModule> /etc/apache2/httpd.conf # # Directives to allow use of AWStats as a CGI # Alias /awstatsclasses "/usr/share/doc/awstats/examples/wwwroot/classes/" Alias /awstatscss "/usr/share/doc/awstats/examples/wwwroot/css/" Alias /awstatsicons "/usr/share/doc/awstats/examples/wwwroot/icon/" ScriptAlias /awstats/ "/usr/share/doc/awstats/examples/wwwroot/cgi-bin/" # # This is to permit URL access to scripts/files in AWStats directory. # <Directory "/usr/share/doc/awstats/examples/wwwroot"> Options None AllowOverride None Order allow,deny Allow from all </Directory> Alias /awstats-icon/ /usr/share/awstats/icon/ <Directory /usr/share/awstats/icon> Options None AllowOverride None Order allow,deny Allow from all </Directory> /etc/apache2/sites-available/default-ssl <IfModule mod_ssl.c> <VirtualHost _default_:443> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined # SSL Engine Switch: # Enable/Disable SSL for this virtual host. SSLEngine on # A self-signed (snakeoil) certificate can be created by installing # the ssl-cert package. See # /usr/share/doc/apache2.2-common/README.Debian.gz for more info. # If both key and certificate are stored in the same file, only the # SSLCertificateFile directive is needed. 
SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key # Server Certificate Chain: # Point SSLCertificateChainFile at a file containing the # concatenation of PEM encoded CA certificates which form the # certificate chain for the server certificate. Alternatively # the referenced file can be the same as SSLCertificateFile # when the CA certificates are directly appended to the server # certificate for convinience. #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt # Certificate Authority (CA): # Set the CA certificate verification path where to find CA # certificates for client authentication or alternatively one # huge file containing all of them (file must be PEM encoded) # Note: Inside SSLCACertificatePath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCACertificatePath /etc/ssl/certs/ #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt # Certificate Revocation Lists (CRL): # Set the CA revocation path where to find CA CRLs for client # authentication or alternatively one huge file containing all # of them (file must be PEM encoded) # Note: Inside SSLCARevocationPath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCARevocationPath /etc/apache2/ssl.crl/ #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl # Client Authentication (Type): # Client certificate verification type and depth. Types are # none, optional, require and optional_no_ca. Depth is a # number which specifies how deeply to verify the certificate # issuer chain before deciding the certificate is not valid. #SSLVerifyClient require #SSLVerifyDepth 10 # Access Control: # With SSLRequire you can do per-directory access control based # on arbitrary complex boolean expressions containing server # variable checks and other lookup directives. The syntax is a # mixture between C and Perl. See the mod_ssl documentation # for more details. #<Location /> #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \ # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \ # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \ # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \ # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \ # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/ #</Location> # SSL Engine Options: # Set various options for the SSL engine. # o FakeBasicAuth: # Translate the client X.509 into a Basic Authorisation. This means that # the standard Auth/DBMAuth methods can be used for access control. The # user name is the `one line' version of the client's X.509 certificate. # Note that no password is obtained from the user. Every entry in the user # file needs this password: `xxj31ZMTZzkVA'. # o ExportCertData: # This exports two additional environment variables: SSL_CLIENT_CERT and # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the # server (always existing) and the client (only existing when client # authentication is used). This can be used to import the certificates # into CGI scripts. # o StdEnvVars: # This exports the standard SSL/TLS related `SSL_*' environment variables. # Per default this exportation is switched off for performance reasons, # because the extraction step is an expensive operation and is usually # useless for serving static content. So one usually enables the # exportation for CGI and SSI requests only. 
# o StrictRequire: # This denies access when "SSLRequireSSL" or "SSLRequire" applied even # under a "Satisfy any" situation, i.e. when it applies access is denied # and no other module can change it. # o OptRenegotiate: # This enables optimized SSL connection renegotiation handling when SSL # directives are used in per-directory context. #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory /usr/lib/cgi-bin> SSLOptions +StdEnvVars </Directory> # SSL Protocol Adjustments: # The safe and default but still SSL/TLS standard compliant shutdown # approach is that mod_ssl sends the close notify alert but doesn't wait for # the close notify alert from client. When you need a different shutdown # approach you can use one of the following variables: # o ssl-unclean-shutdown: # This forces an unclean shutdown when the connection is closed, i.e. no # SSL close notify alert is send or allowed to received. This violates # the SSL/TLS standard but is needed for some brain-dead browsers. Use # this when you receive I/O errors because of the standard approach where # mod_ssl sends the close notify alert. # o ssl-accurate-shutdown: # This forces an accurate shutdown when the connection is closed, i.e. a # SSL close notify alert is send and mod_ssl waits for the close notify # alert of the client. This is 100% SSL/TLS standard compliant, but in # practice often causes hanging connections with brain-dead browsers. Use # this only for browsers where you know that their SSL implementation # works correctly. # Notice: Most problems of broken clients are also related to the HTTP # keep-alive facility, so you usually additionally want to disable # keep-alive for those clients, too. Use variable "nokeepalive" for this. # Similarly, one has to force some clients to use HTTP/1.0 to workaround # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and # "force-response-1.0" for this. BrowserMatch "MSIE [2-6]" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 # MSIE 7 and newer should be able to use keepalive BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown </VirtualHost> </IfModule> /etc/apache2/sites-available/default <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options -Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> Alias /delboy /usr/share/phpmyadmin <Directory /usr/share/phpmyadmin> # Restrict phpmyadmin access Order Deny,Allow Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> /etc/apache2/conf.d/security # # Disable access to the entire file system except for the directories that # are explicitly allowed later. # # This currently breaks the configurations that come with some web application # Debian packages. 
# #<Directory /> # AllowOverride None # Order Deny,Allow # Deny from all #</Directory> # Changing the following options will not really affect the security of the # server, but might make attacks slightly more difficult in some cases. # # ServerTokens # This directive configures what you return as the Server HTTP response # Header. The default is 'Full' which sends information about the OS-Type # and compiled in modules. # Set to one of: Full | OS | Minimal | Minor | Major | Prod # where Full conveys the most information, and Prod the least. # #ServerTokens Minimal ServerTokens OS #ServerTokens Full # # Optionally add a line containing the server version and virtual host # name to server-generated pages (internal error documents, FTP directory # listings, mod_status and mod_info output etc., but not CGI generated # documents or custom error documents). # Set to "EMail" to also include a mailto: link to the ServerAdmin. # Set to one of: On | Off | EMail # #ServerSignature Off ServerSignature On # # Allow TRACE method # # Set to "extended" to also reflect the request body (only for testing and # diagnostic purposes). # # Set to one of: On | Off | extended # TraceEnable Off #TraceEnable On /etc/apache2/apache2.conf # # Based upon the NCSA server configuration files originally by Rob McCool. # # This is the main Apache server configuration file. It contains the # configuration directives that give the server its instructions. # See http://httpd.apache.org/docs/2.2/ for detailed information about # the directives. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # The configuration directives are grouped into three basic sections: # 1. Directives that control the operation of the Apache server process as a # whole (the 'global environment'). # 2. Directives that define the parameters of the 'main' or 'default' server, # which responds to requests that aren't handled by a virtual host. # These directives also provide default values for the settings # of all virtual hosts. # 3. Settings for virtual hosts, which allow Web requests to be sent to # different IP addresses or hostnames and have them handled by the # same Apache server process. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so "foo.log" # with ServerRoot set to "/etc/apache2" will be interpreted by the # server as "/etc/apache2/foo.log". # ### Section 1: Global Environment # # The directives in this section affect the overall operation of Apache, # such as the number of concurrent requests it can handle or where it # can find its configuration files. # # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # NOTE! If you intend to place this on an NFS (or otherwise network) # mounted filesystem then please read the LockFile documentation (available # at <URL:http://httpd.apache.org/docs/2.2/mod/mpm_common.html#lockfile>); # you will save yourself a lot of trouble. # # Do NOT add a slash at the end of the directory path. # #ServerRoot "/etc/apache2" # # The accept serialization lock file MUST BE STORED ON A LOCAL DISK. 
# LockFile ${APACHE_LOCK_DIR}/accept.lock # # PidFile: The file in which the server should record its process # identification number when it starts. # This needs to be set in /etc/apache2/envvars # PidFile ${APACHE_PID_FILE} # # Timeout: The number of seconds before receives and sends time out. # Timeout 300 # # KeepAlive: Whether or not to allow persistent connections (more than # one request per connection). Set to "Off" to deactivate. # KeepAlive On # # MaxKeepAliveRequests: The maximum number of requests to allow # during a persistent connection. Set to 0 to allow an unlimited amount. # We recommend you leave this number high, for maximum performance. # MaxKeepAliveRequests 100 # # KeepAliveTimeout: Number of seconds to wait for the next request from the # same client on the same connection. # KeepAliveTimeout 4 ## ## Server-Pool Size Regulation (MPM specific) ## # prefork MPM # StartServers: number of server processes to start # MinSpareServers: minimum number of server processes which are kept spare # MaxSpareServers: maximum number of server processes which are kept spare # MaxClients: maximum number of server processes allowed to start # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 150 MaxRequestsPerChild 500 </IfModule> # worker MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadLimit: ThreadsPerChild can be changed to this maximum value during a # graceful restart. ThreadLimit can only be changed by stopping # and starting Apache. # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_worker_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # event MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_event_module> StartServers 2 MaxClients 150 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> # These need to be set in /etc/apache2/envvars User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} # # AccessFileName: The name of the file to look for in each directory # for additional configuration directives. See also the AllowOverride # directive. # AccessFileName .htaccess # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> # # DefaultType is the default MIME type the server will use for a document # if it cannot otherwise determine one, such as from filename extensions. # If your server contains mostly text or HTML documents, "text/plain" is # a good value. 
If most of your content is binary, such as applications # or images, you may want to use "application/octet-stream" instead to # keep browsers from trying to display binary files as though they are # text. # DefaultType text/plain # # HostnameLookups: Log the names of clients or just their IP addresses # e.g., www.apache.org (on) or 204.62.129.132 (off). # The default is off because it'd be overall better for the net if people # had to knowingly turn this feature on, since enabling it means that # each client request will result in AT LEAST one lookup request to the # nameserver. # HostnameLookups Off # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog ${APACHE_LOG_DIR}/error.log # # LogLevel: Control the number of messages logged to the error_log. # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel warn # Include module configuration: Include mods-enabled/*.load Include mods-enabled/*.conf # Include all the user configurations: Include httpd.conf # Include ports listing Include ports.conf # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i # LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent # Include of directories ignores editors' and dpkg's backup files, # see README.Debian for details. # Include generic snippets of statements Include conf.d/ # Include the virtual host configurations: Include sites-enabled/
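
    Worth ruling out before digging deeper: AllowOverride is only honoured inside <Directory> sections, the most specific matching section wins, and the userdir block above still grants "AllowOverride FileInfo AuthConfig Limit Indexes" for /home/*/public_html, so anything served from there will keep reading .htaccess. A quick hedged test is to put deliberate garbage into one of the .htaccess files and reload the page: if the site still answers instead of returning a 500, that file is no longer being parsed. The configuration Apache actually loaded can be confirmed with:

        apache2ctl configtest
        apache2ctl -S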

    Read the article

  • Moving from single-site to multi-site Active Directory has broken OWA proxying

    - by messick
    Originally we had the following setup: OfficeExch01 has Mailbox Role and CAS Role OfficeExch01 is in the office. CoLoExch01 had just CAS Role. CoLoExch01 is internet facing and in a CoLo. Three AD domain controllers in the default site. Users could go to https://webmail.whatever.com/owa, get proxyed to OfficeExch01 and everything was great. Well, we recently setup a separate AD site and put a domain controller and the ColoExch01 server in the new site. I also made that remote DC be a Global Catalog. Now, users get the following error: Outlook Web Access is not available. If the problem continues, contact technical support for your organization and tell them the following: There is no Microsoft Exchange Client Access server that has the necessary configuration in the Active Directory site where the mailbox is stored. I also see event 41 errors in the logs: The Client Access server "https://webmail.xxxxxxx.com/owa" attempted to proxy Outlook Web Access traffic for mailbox "/o=XXXXX/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=xxxxxxk". This failed because no Client Access server with an Outlook Web Access virtual directory configured for Kerberos authentication could be found in the Active Directory site of the mailbox. The simplest way to configure an Outlook Web Access virtual directory for Kerberos authentication is to set it to use Integrated Windows authentication by using the Set-OwaVirtualDirectory cmdlet in the Exchange Management Shell, or by using the Exchange Management Console. If you already have a Client Access server deployed in the target Active Directory site with an Outlook Web Access virtual directory configured for Kerberos authentication, the proxying Client Access server may not be finding that target Client Access server because it does not have an internalUrl parameter configured. You can configure the internalUrl parameter for the Outlook Web Access virtual directory on the Client Access server in the target Active Directory site by using the Set-OwaVirtualDirectory cmdlet. Looking this up I see a lot talk about ExternalURL and InternalURL settings. However, everything worked great until we made the new AD site. I also made sure the internal CAS server's /owa virtual directory is set to use Integrated Authentication. Is there something I need to do to allow Exchange to see that I've made these AD changes?
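
    The event text itself points at the two usual fixes: enable Integrated Windows authentication on the /owa virtual directory of the Client Access server in the mailbox's Active Directory site, and give that directory an InternalUrl so the proxying CAS can find it. Assuming OfficeExch01 is that internal CAS, the Exchange Management Shell commands would look roughly like this (identity and URL are placeholders, and IIS needs a restart afterwards):

        Set-OwaVirtualDirectory -Identity "OfficeExch01\owa (Default Web Site)" -WindowsAuthentication $true -BasicAuthentication $false -FormsAuthentication $false
        Set-OwaVirtualDirectory -Identity "OfficeExch01\owa (Default Web Site)" -InternalUrl "https://officeexch01.internal.domain/owa"
        Get-OwaVirtualDirectory | Format-List Name, Server, InternalUrl, WindowsAuthentication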

    Read the article

  • How to keep group-writeable shares on Samba with OSX clients?

    - by Oliver Salzburg
    I have a FreeNAS server on a network with OSX and Windows clients. When the OSX clients interact with SMB/CIFS shares on the server, they are causing permission problems for all other clients. Update: I can no longer verify any answers because we abandoned the project, but feel free to post any help for future visitors. The details of this behavior seem to also be dependent on the version of OSX the client is running. For this question, let's assume a client running 10.8.2. When I mount the CIFS share on an OSX client and create a new directory on it, the directory will be created with drwxr-x-rx permissions. This is undesirable because it will not allow anyone but me to write to the directory. There are other users in my group which should have write permissions as well. This behavior happens even though the following settings are present in smb.conf on the server: [global] create mask= 0666 directory mask= 0777 [share] force directory mode= 0775 force create mode= 0660 I was under the impression that these settings should make sure that directories are at least created with rwxrwxr-x permissions. But, I guess, that doesn't stop the client from changing the permissions after creating the directory. When I create a folder on the same share from a Windows client, the new folder will have the desired access permissions (rwxrwxrwx), so I'm currently assuming that the problem lies with the OSX client. I guess this wouldn't be such an issue if you could easily change the permissions of the directories you've created, but you can't. When opening the directory info in Finder, I get the old "You have custom access" notice with no ability to make any changes. I'm assuming that this is caused because we're using Windows ACLs on the share, but that's just a wild guess. Changing the write permissions for the group through the terminal works fine, but this is unpractical for the deployment and unreasonable to expect from anyone to do. This is the complete smb.conf: [global] encrypt passwords = yes dns proxy = no strict locking = no read raw = yes write raw = yes oplocks = yes max xmit = 65535 deadtime = 15 display charset = LOCALE max log size = 10 syslog only = yes syslog = 1 load printers = no printing = bsd printcap name = /dev/null disable spoolss = yes smb passwd file = /var/etc/private/smbpasswd private dir = /var/etc/private getwd cache = yes guest account = nobody map to guest = Bad Password obey pam restrictions = Yes # NOTE: read smb.conf. 
directory name cache size = 0 max protocol = SMB2 netbios name = freenas workgroup = COMPANY server string = FreeNAS Server store dos attributes = yes hostname lookups = yes security = user passdb backend = ldapsam:ldap://ldap.company.local ldap admin dn = cn=admin,dc=company,dc=local ldap suffix = dc=company,dc=local ldap user suffix = ou=Users ldap group suffix = ou=Groups ldap machine suffix = ou=Computers ldap ssl = off ldap replication sleep = 1000 ldap passwd sync = yes #ldap debug level = 1 #ldap debug threshold = 1 ldapsam:trusted = yes idmap uid = 10000-39999 idmap gid = 10000-39999 create mask = 0666 directory mask = 0777 client ntlmv2 auth = yes dos charset = CP437 unix charset = UTF-8 log level = 1 [share] path = /mnt/zfs0 printable = no veto files = /.snap/.windows/.zfs/ writeable = yes browseable = yes inherit owner = no inherit permissions = no vfs objects = zfsacl guest ok = no inherit acls = Yes map archive = No map readonly = no nfs4:mode = special nfs4:acedup = merge nfs4:chown = yes hide dot files force directory mode = 0775 force create mode = 0660
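
    A hedged variant of the [share] section that is sometimes enough for mixed OS X/Windows clients: force everything created on the share into the shared group and let new entries inherit the parent directory's permissions, so a client-side chmod after create matters less. The group name is a placeholder, and with vfs objects = zfsacl the NFSv4 ACLs may still override the classic mode bits:

        [share]
            force group = +officegroup
            inherit permissions = yes
            create mask = 0664
            force create mode = 0664
            directory mask = 0775
            force directory mode = 0775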

    Read the article

  • VMware server 2.0 SYN/ACK repeating issues

    - by user65579
    VMWare Server 2.0.0 Build 122956 I am having some issues with connecting into a guest VM (Ubuntu linux 4.4.3-4 lucid) running under VMware 2.0 on a windows server host. All connections to and from the VM's work fine, except for FTP. I thought the issue was the FTP daemon at first but it has been ruled out that it is not the daemon or the server itself. When you try to connect to the FTP server from outside of the host OS it fails with a "421 Service not available" but when you try and connect from the local VM or from the host OS the connection goes through fine. I have ran many packet sniffs using wireshark/tcpdump from the VM, the host OS, and the client connecting, the most informative is the host OS. I have attached a PNG of the relavant packets that were captured. I viewed some other network traffic that was sniffed (WWW specifically) and it seems to do the same syn/ack repeating but the user doesnt see any issues. I have disabled the firewall and the issues persisits, I have tried with specific allow rules to ensure the data is allowed and no changes. It appears like VMware attempts to do the ICMP redirect and it works, but then it vmware repeats the packets sent so you get 3 syn/ack's for every one syn from the client. Also VMWare appears to be attempting to establish an FTP connection between the HOST OS and the GUEST OS, because I see the second SYN sent from the HOST OS to the GUEST to initiate a new connection, and it get the appropriate SYN/ACK followed by an ACK, but the client never sees any of this from its end. EG. syn from client syn/ack from host OS to client syn/ack from guest OS to client syn/ack from host OS to client The same thing happens when the connection reset is attempted, RST's start being sent and repeated, the server responds with a valid header to continue the FTP handshake but the RST acknowledgement is allready issued and things are closed. I am not 100% if this is a bug in VMware or possibly a VMNetwork missconfiguration. Does anyone have any thoughts on where exactly the issue could be, things to try to verify or rule out? I have linked to a picture of the relevant packets sniffed from the host OS. http://img18.imageshack.us/img18/7789/vmwareftpconnection.jpg

    Read the article

  • How to use AD/GPO/Print Services to "push out" a new printer driver to replace a broken one? How did my server get a broken driver?

    - by Zac B
    Context: We have an AD/GPO-managed corporate network with a little over a hundred PCs running Windows 7 x64, and a few managed printers. Our Server2008R2 primary domain controller is configured as a print server for them all. Problem: After a recent windows update and restart (no printer driver updates were included) on the DC, a particular shared printer (Lexmark T650) has begin exhibiting some strange behavior. First, it prints a preceding and following blank page for almost every document, on jobs submitted by about half of client machines (no separator page is configured on the server or any of the clients I've seen). Second, whenever someone tries to access "Printing Preferences" on any client, they recieve the following error message (this happens everywhere, 100% of the time, and didn't happen before the update on the DC): Once they click "OK", the prefs screen appears (with no separator page selected) and everything seems fine. I'm not even sure if these two issues are related, but everyone seems affected by one or both of those issues. What I've Tried: I've been hesitant to un-deploy the problem printer, or remove it via GPO, as it's pretty heavily used. I've tried updating (via MS update and our internal WSUS server) client machines and the DC. No printer driver updates have appeared, and no number of updates or restarts on the server or the client seems to have achieved anything other than my boss getting grumpy that I'm bouncing the domain controller so often. I've tried deleting the drivers on the server, and re-installing them from the original source that has worked for the past year...no change. I've tried selecting "New Driver" for one of the shared printers on a client machine, running as domain admin, and pushed the latest driver found by MSupdate back up to the DC. This changed the version number of the driver recorded in the print server manager, but caused no change--on the client I pushed from, or on any other. The error still appears. Question: Why the heck is this happening? Obviously, I got a bad driver from somewhere, but how do I get rid of it? I don't know of any "roll back drivers" functionality for centrally managed print drivers like Windows offers for other devices. How would I a) get this issue resolved on a client, and b) push the fix to the other members of the domain?

    Read the article

  • Tips / techniques for high-performance C# server sockets

    - by McKenzieG1
    I have a .NET 2.0 server that seems to be running into scaling problems, probably due to poor design of the socket-handling code, and I am looking for guidance on how I might redesign it to improve performance. Usage scenario: 50 - 150 clients, high rate (up to 100s / second) of small messages (10s of bytes each) to / from each client. Client connections are long-lived - typically hours. (The server is part of a trading system. The client messages are aggregated into groups to send to an exchange over a smaller number of 'outbound' socket connections, and acknowledgment messages are sent back to the clients as each group is processed by the exchange.) OS is Windows Server 2003, hardware is 2 x 4-core X5355. Current client socket design: A TcpListener spawns a thread to read each client socket as clients connect. The threads block on Socket.Receive, parsing incoming messages and inserting them into a set of queues for processing by the core server logic. Acknowledgment messages are sent back out over the client sockets using async Socket.BeginSend calls from the threads that talk to the exchange side. Observed problems: As the client count has grown (now 60-70), we have started to see intermittent delays of up to 100s of milliseconds while sending and receiving data to/from the clients. (We log timestamps for each acknowledgment message, and we can see occasional long gaps in the timestamp sequence for bunches of acks from the same group that normally go out in a few ms total.) Overall system CPU usage is low (< 10%), there is plenty of free RAM, and the core logic and the outbound (exchange-facing) side are performing fine, so the problem seems to be isolated to the client-facing socket code. There is ample network bandwidth between the server and clients (gigabit LAN), and we have ruled out network or hardware-layer problems. Any suggestions or pointers to useful resources would be greatly appreciated. If anyone has any diagnostic or debugging tips for figuring out exactly what is going wrong, those would be great as well. Note: I have the MSDN Magazine article Winsock: Get Closer to the Wire with High-Performance Sockets in .NET, and I have glanced at the Kodart "XF.Server" component - it looks sketchy at best.
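
    One incremental redesign that keeps the existing queueing logic: drop the thread-per-client blocking Receive loop and post asynchronous receives instead, so a small pool of I/O completion threads services all 150 connections and a slow client cannot tie up a dedicated thread. A minimal sketch against the .NET 2.0 Begin/EndReceive API (HandleMessage stands in for the existing parse-and-enqueue code; message framing and richer error handling are omitted):

        using System;
        using System.Net.Sockets;

        class ClientConnection
        {
            readonly Socket _socket;
            readonly byte[] _buffer = new byte[8192];

            public ClientConnection(Socket socket)
            {
                _socket = socket;
                _socket.NoDelay = true;           // many small, latency-sensitive messages
                BeginRead();
            }

            void BeginRead()
            {
                _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None, OnRead, null);
            }

            void OnRead(IAsyncResult ar)
            {
                int read;
                try { read = _socket.EndReceive(ar); }
                catch (SocketException) { _socket.Close(); return; }

                if (read == 0) { _socket.Close(); return; }   // peer disconnected

                HandleMessage(_buffer, read);                 // parse + enqueue for the core logic
                BeginRead();                                  // immediately post the next receive
            }

            void HandleMessage(byte[] buffer, int count)
            {
                // placeholder: copy the bytes out and hand them to the processing queues
            }
        }

    Disabling Nagle (NoDelay) on both the client-facing and exchange-facing sockets is also worth checking, since stalls of a few hundred milliseconds on small messages are a classic symptom of Nagle interacting with delayed ACKs.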

    Read the article

  • fork() in perl on windows

    - by Darioush
    I'm using fork() on PERL in windows (activeperl) for a basic socket server, but apparently there are problems (it won't accept connections after a few times), is there any workaround? while($client = $bind->accept()) { $client->autoflush(); if(fork()){ $client->close(); } else { $bind->close(); new_client($client); exit(); } } is the portion of the relevant code.

    Read the article

  • Certificates Validations Issues

    - by user298331
    Hi All, i am facing some issues related certificates.i need some help to resolve these issues. Requirements : security mode="TransportWithMessageCredential" binding binding name="basicHttpEndpointBinding" certificateValidationMode ="ChainTrust" revocationMode="Online" Certificates : Service Cerificates : Transportlevel : XXXX.cer my cerificate name is my system DNS name and it is having root node i.e RootTrnCA.cer this is used to enable https.but am not validationg transport level certificates. Message Level : services.ca.iim (VXXXX.Cer--Act.Mac.Ca--services.ca.iim ) Client Cerificates : Transportlevel : ZZZZ.cer my cerificate name is my system DNS name and it is having root node i.e RootTrnCA.cer ignoring transport certificate errors through coading..... Message Level : client.ca.iim (VXXXX.Cer--Act.Mac.Ca--client.ca.iim ) Issues : 1) Response message is not contain Service certificate Signature in Soap header.so i am not able to validate Server certificate details in Client code. 2)if i use the transport with message credential and Chaintrust.i am getting error : The revocation function was unable to check revocation because the revocation server was offline.) so please very the below service and cleint config and correct me if i am wrong. Service config : Client config : i am attaching certificate through coading : objProxy.ChannelFactory.Credentials.ClientCertificate.SetCertificate(System.Security.Cryptography.X509Certificates. StoreLocation.LocalMachine, System.Security.Cryptography.X509Certificates. StoreName.My, X509FindType.FindBySubjectName, "client.ca.iim"); <binding name="XXXXXServiceHost.Http" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <security mode="TransportWithMessageCredential"> <transport clientCredentialType="None" proxyCredentialType="None" realm="" /> <message clientCredentialType="Certificate" algorithmSuite="Default" /> </security> </binding> </basicHttpBinding> </bindings> <client> <endpoint address="https://XXXXXX/XXXServiceHost/MemberSvc.svc/soap11" binding="basicHttpBinding" bindingConfiguration="XXXServiceHost.Http" contract="ServiceReference1.IMemberIBA" name="XXXServiceHost.Http" /> </client> </system.serviceModel>Please Verify both and Help me how to resolve above two issues . Thanks Babu
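
    Two notes, hedged: with TransportWithMessageCredential the service authenticates itself at the SSL/transport layer, so the response is not normally expected to carry a service signature in the SOAP header; and the "revocation server was offline" error comes from revocationMode="Online", which can be relaxed to NoCheck (or Offline, which uses a cached CRL) until the CRL distribution point is reachable. A sketch of the service-side behavior section, with the store and subject values taken from the description above:

        <behaviors>
          <serviceBehaviors>
            <behavior name="secureBehavior">
              <serviceCredentials>
                <serviceCertificate findValue="services.ca.iim"
                                    storeLocation="LocalMachine"
                                    storeName="My"
                                    x509FindType="FindBySubjectName" />
                <clientCertificate>
                  <authentication certificateValidationMode="ChainTrust"
                                  revocationMode="NoCheck" />
                </clientCertificate>
              </serviceCredentials>
            </behavior>
          </serviceBehaviors>
        </behaviors>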

    Read the article

  • How to use SSL Web Services in a Rails application

    - by Mathieu
    Hi, I'm having a hard time consuming this web service https://www.arello.com/webservice/verify.cfc?wsdl in my Rails application. I successfully generated the Ruby files with wsdl2ruby.rb, but when I run the generated script I get the following error:

        at depth 0 - 20: unable to get local issuer certificate
        OpenSSL::SSL::SSLError: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed

    I also tried to connect via this script, but same issue:

        require 'http-access2'
        client = HTTPAccess2::Client.new()
        client.ssl_config.set_trust_ca('/arello.cert')
        puts client.get('https://www.arello.com/webservice/verify.cfc?wsdl').content

    Any ideas? Thanks
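
    The "unable to get local issuer certificate" message usually means the file handed to set_trust_ca contains the site's own (leaf) certificate rather than the CA chain that issued it. The chain the host actually serves can be inspected, and the issuing CA saved into the trust file, with:

        openssl s_client -connect www.arello.com:443 -showcerts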

    Read the article

  • MSBuild command-line error - Silverlight 4 SDK is not installed

    - by Ned
    My MSBuild command line is as follows: msbuild e:\code\myProject.csproj /p:Configuration=Debug /p:OutputPath=bin/Debug /p:Platform=x86 /p:PlatformTarget=x86 The project builds fine on my development machine in VS2010 but not with the command above. I am running Win 7 64 - Bit. I'm getting an error that says I don't have the Silverlight 4 SDK installed but I do. I"ve read some posts that you have to set the Platform=x86 but to no avail. Here is the error message in full: Microsoft (R) Build Engine Version 4.0.30319.1 [Microsoft .NET Framework, Version 4.0.30319.1] Copyright (C) Microsoft Corporation 2007. All rights reserved. Build started 6/8/2010 4:03:38 PM. Project "E:\code\dashboards\MyProject2010\MyProject2010.Web\MyProject2010 .web.csproj" on node 1 (default targets). GenerateTargetFrameworkMonikerAttribute: Skipping target "GenerateTargetFrameworkMonikerAttribute" because all output fi les are up-to-date with respect to the input files. CoreCompile: Skipping target "CoreCompile" because all output files are up-to-date with resp ect to the input files. CopyFilesToOutputDirectory: Copying file from "obj\Debug\MyProject.Web.dll" to "bin\Debug\MyProject.Web .dll". MyProject2010.web - E:\code\dashboards\MyProject2010\MyProject2010.Web \bin\Debug\MyProject.Web.dll Copying file from "obj\Debug\MyProject.Web.pdb" to "bin\Debug\MyProject.Web .pdb". Project "E:\code\dashboards\MyProject2010\MyProject2010.Web\MyProject2010 .web.csproj" (1) is building "E:\code\dashboards\MyProject2010\MyProject20 10.Client\MyProject2010.Client.csproj" (2) on node 1 (GetXapOutputFile target( s)). C:\Program Files (x86)\MSBuild\Microsoft\Silverlight\v4.0\Microsoft.Silverlight .Common.targets(104,9): error : The Silverlight 4 SDK is not installed. [E:\cod e\dashboards\MyProject2010\MyProject2010.Client\MyProject2010.Client.cspr oj] Done Building Project "E:\code\dashboards\MyProject2010\MyProject2010.Clie nt\MyProject2010.Client.csproj" (GetXapOutputFile target(s)) -- FAILED. Done Building Project "E:\code\dashboards\MyProject2010\MyProject2010.Web\ MyProject2010.web.csproj" (default targets) -- FAILED. Build FAILED. "E:\code\dashboards\MyProject2010\MyProject2010.Web\MyProject2010.web.csp roj" (default target) (1) - "E:\code\dashboards\MyProject2010\MyProject2010.Client\MyProject2010.Clie nt.csproj" (GetXapOutputFile target) (2) - (GetFrameworkPaths target) - C:\Program Files (x86)\MSBuild\Microsoft\Silverlight\v4.0\Microsoft.Silverlig ht.Common.targets(104,9): error : The Silverlight 4 SDK is not installed. [E:\c ode\dashboards\MyProject2010\MyProject2010.Client\MyProject2010.Client.cs proj] 0 Warning(s) 1 Error(s) Time Elapsed 00:00:00.39 I appreciate anyone's help. Thanks.
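
    One frequent cause on 64-bit machines: the Silverlight build targets only register the SDK for 32-bit builds, so running the 64-bit MSBuild.exe (from Framework64) reports the SDK as missing even though it is installed. If that is what is happening here, invoking the 32-bit MSBuild is the usual workaround (path assumes a default .NET 4.0 install):

        %windir%\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe e:\code\myProject.csproj /p:Configuration=Debug /p:OutputPath=bin/Debug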

    Read the article

  • how to get ip address from sock structure in c ?

    - by REALFREE
    I'm writing a simple server/client and trying to get the client IP address and save it on the server side, to decide which client should get into the critical section. I googled it several times but couldn't find the proper way to get the IP address from the sock structure. I believe the place to get the IP is after the server accepts a request from the client, more specifically after the server executes:

        csock = accept(ssock, (struct sockaddr *)&client_addr, &clen);

    Thanks
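
    Assuming client_addr is declared as a struct sockaddr_in (IPv4), as the accept() call above suggests, the address and port can be converted to text with inet_ntop and ntohs. A minimal sketch:

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <stdio.h>

        void print_peer(const struct sockaddr_in *client_addr)
        {
            char ip[INET_ADDRSTRLEN];

            /* convert the binary IPv4 address to dotted-quad text */
            if (inet_ntop(AF_INET, &client_addr->sin_addr, ip, sizeof ip) != NULL)
                printf("client ip: %s, port: %u\n", ip, (unsigned)ntohs(client_addr->sin_port));
        }

    Called right after the accept() shown above, e.g. print_peer(&client_addr). For a dual-stack server the buffer would be INET6_ADDRSTRLEN and the address family taken from a struct sockaddr_storage instead.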

    Read the article

  • SolrException: Internal Server Error

    - by Priya
    Hi All, I am working on Solr in my application. I am using apache-solr-solrj-1.4.0.jar. When I try to call add(SolrInputDocument doc) of CommonsHttpSolrServer I am getting the following exception:

        org.apache.solr.common.SolrException: Internal Server Error
        Internal Server Error
            at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:424)
            at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:243)
            at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:105)
            at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:64)

    Can anyone please help me to resolve this problem? The following are attributes in solrconfig.xml: native false true

    Read the article

  • Making a "Fuzzy border" in Silverlight 3.0

    - by skb
    I would like to have a fuzzy-looking border around my Canvas control. Basically, I am creating a print preview screen, and I want it to look almost exactly like the one in Word 2010. In that one, there is a thin gray line, a thin orange line, and then a fuzzy gradient around the outside of the page. Check it out and you will see what I mean. Can anyone tell me how I can do this with my Canvas control?
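
    If the goal is just the soft glow Word draws around the page, one hedged approach in Silverlight 3 is to wrap the Canvas in nested Borders for the thin gray and orange lines and put a DropShadowEffect with ShadowDepth="0" on the outer one, which renders as an even blur on all sides (the colors and sizes here are guesses at the Word look):

        <Border BorderBrush="Gray" BorderThickness="1" Background="White" Margin="20">
          <Border.Effect>
            <DropShadowEffect Color="Black" BlurRadius="14" ShadowDepth="0" Opacity="0.45" />
          </Border.Effect>
          <Border BorderBrush="Orange" BorderThickness="1">
            <Canvas Width="816" Height="1056" Background="White" />
          </Border>
        </Border>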

    Read the article

  • How to validate smtp credentials before sending mail in C# ?

    - by Manish Gupta
    I need to validate the username and password set on the SmtpClient object before sending mail. Here is the code sample:

        SmtpClient client = new SmtpClient(host);
        client.Credentials = new NetworkCredential(username, password);
        client.UseDefaultCredentials = false;
        // Here I need to verify the credentials (i.e. username and password)
        client.Send(mail);

    Thanks in advance.
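
    System.Net.Mail has no built-in "verify credentials" call, so the usual options are to catch the SmtpException thrown by Send, or to speak just enough SMTP by hand to attempt an AUTH LOGIN and inspect the reply code before any real mail goes out. A hedged sketch of the second approach (it assumes a plain-text connection on the given port; a server requiring STARTTLS would need an SslStream around the stream first). It is also worth double-checking the snippet above: setting UseDefaultCredentials after assigning Credentials resets them, so the assignment order matters.

        using System;
        using System.IO;
        using System.Net.Sockets;
        using System.Text;

        static class SmtpCredentialCheck
        {
            // Returns true when the server answers AUTH LOGIN with 235 (accepted),
            // false on 535 (rejected) or anything unexpected.
            public static bool TryLogin(string host, int port, string user, string pass)
            {
                using (TcpClient tcp = new TcpClient(host, port))
                using (StreamReader reader = new StreamReader(tcp.GetStream(), Encoding.ASCII))
                using (StreamWriter writer = new StreamWriter(tcp.GetStream(), Encoding.ASCII))
                {
                    writer.NewLine = "\r\n";
                    writer.AutoFlush = true;

                    reader.ReadLine();                         // 220 banner
                    writer.WriteLine("EHLO credentialcheck");
                    string line;
                    do { line = reader.ReadLine(); }           // consume the multi-line 250 reply
                    while (line != null && line.StartsWith("250-"));

                    writer.WriteLine("AUTH LOGIN");
                    reader.ReadLine();                         // 334 Username:
                    writer.WriteLine(Convert.ToBase64String(Encoding.ASCII.GetBytes(user)));
                    reader.ReadLine();                         // 334 Password:
                    writer.WriteLine(Convert.ToBase64String(Encoding.ASCII.GetBytes(pass)));

                    string result = reader.ReadLine();         // 235 accepted, 535 rejected
                    writer.WriteLine("QUIT");
                    return result != null && result.StartsWith("235");
                }
            }
        }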

    Read the article

  • How to convert Word to images with win32com in python?

    - by SpawnCxy
    Hi all, I have googled an example for converting Word to HTML:

        import win32com
        from win32com.client import Dispatch, constants

        w = win32com.client.Dispatch('Word.Application')
        w = win32com.client.DispatchEx('Word.Application')
        '''skip some code here'''
        wc = win32com.client.constants
        w.ActiveDocument.SaveAs(FileName=filenameout, FileFormat=wc.wdFormatHTML)

    I tried looking for something like wc.wdFormatPNG (by analogy with wc.wdFormatHTML in the example) but failed, and I wonder whether such an attribute exists. Or are there any better solutions? Suggestions would be appreciated.

    Read the article

  • VS2010 Load Testing - Restricting ports

    - by marks-s
    As per the troubleshooting guide for VS2010 load testing (http://social.msdn.microsoft.com/Forums/en/vststest/thread/df043823-ffcf-46a4-9e47-1c4b8854ca13), I'm trying to restrict the range of ports used for client-controller communication:

        HKEY_LOCAL_MACHINE\SOFTWARE\MICROSOFT\VisualStudio\10.0\EnterpriseTools\QualityTools\ListenPortRange\PortRangeStart
        HKEY_LOCAL_MACHINE\SOFTWARE\MICROSOFT\VisualStudio\10.0\EnterpriseTools\QualityTools\ListenPortRange\PortRangeEnd

    I've set these keys on the client as described, but according to netstat the client is still listening on random ports. The controller is attempting to communicate on the same random ports as the client. Has anyone experienced the same?
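
    If the values were created under the native 64-bit hive, a 32-bit agent process (QTAgent32.exe) reads the registry through the Wow6432Node view instead and never sees them, which would explain the ports still being random. A hedged check is to create the same values in both views and restart the agent; the port numbers below are arbitrary:

        reg add "HKLM\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\10.0\EnterpriseTools\QualityTools\ListenPortRange" /v PortRangeStart /t REG_DWORD /d 49500
        reg add "HKLM\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\10.0\EnterpriseTools\QualityTools\ListenPortRange" /v PortRangeEnd /t REG_DWORD /d 49550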

    Read the article
