Search Results

Search found 25629 results on 1026 pages for 'site maintenance'.


  • File transfer problems through VPN when Cisco IPS is enabled

    - by Richard West
    We have a Cisco ASA 5510 firewall with the IPS module installed. We have a customer that we must connect to via VPN to their network to exchange files via FTP. We use the Cisco VPN client (version 5.0.01.0600) on our local workstations, which are behind the firewall and subject to the IPS. The VPN client is successful in connecting to the remote site. However, when we start the FTP file transfer we are able to upload only 150K to 200K of data, then everything stops; a minute later the VPN session is dropped. I think I have isolated this to an IPS issue by temporarily disabling the Service Policy on the ASA for the IPS with the following command:

        access-list IPS line 1 extended permit ip 0.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0 inactive

    After this command was issued, I established the VPN to the remote site and was successful in transferring the entire file. While still connected to the VPN and FTP session, I issued the command to re-enable the IPS:

        access-list IPS line 1 extended permit ip 0.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0

    The file transfer was tried again and was once again successful, so I closed the FTP session and reopened it while keeping the same VPN session open. This file transfer was also successful. This told me that nothing with the FTP programs was being filtered or causing the problem. Furthermore, we use FTP to exchange files with many sites every day without issue. I then disconnected the original VPN session, which was established while the access-list was inactive, and reconnected the VPN session, now with the access-list active. After starting the FTP transfer, the file stopped after 150K. To me this seems like the IPS is blocking, or somehow interfering with, the initial VPN setup to the remote site. This only started happening last week, after the latest IPS signature updates were applied (sig version 407.0). Our previous sig version was 95 days old because the system was not auto-updating itself. Any ideas on what could be causing this problem?
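
    Since the problem appeared with the new signature set, a next step would be to find out which signature fires while the tunnel is being set up. A rough sketch, assuming console access to the AIP-SSM module from the ASA (the event-query syntax varies by IPS software version, so treat these commands as approximate):

        ciscoasa# session 1          ! drop into the IPS module's CLI
        sensor# show events alert past 00:15:00

    An alert whose attacker/victim addresses match the VPN peer at the moment the transfer stalls points at the offending signature, which could then be retired or tuned instead of deactivating the whole inspection policy.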


  • How do I install newer python on CentOS with minimal effort?

    - by Sorin Sbarnea
    I would like to install Python 2.6 and mod_python on CentOS 5 (x64). The system is delivered with the old Python 2.4, and I want the new one with minimal maintenance effort (compiling and keeping a separate installation seems to be a suboptimal solution). Is there a solution for this, other than starting to recompile lots of packages? If not, should I switch to Ubuntu? Please remember that I'm talking about x64 - I found a repository on the net with updated packages, but it is not x64.
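
    For what it's worth, the EPEL repository has historically carried a parallel-installable python26 package for CentOS 5, x86_64 included, which leaves the system's Python 2.4 untouched. A minimal sketch (the epel-release URL and version number move over time, and mod_python against 2.6 may still need separate attention):

        # add the EPEL repo for EL5 x86_64 (check for the current release RPM)
        rpm -Uvh http://download.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm
        # parallel interpreter; /usr/bin/python stays at 2.4
        yum install python26 python26-devel
        python26 --version

    Because it comes from a repository, yum keeps it updated, which addresses the maintenance concern.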


  • Re: How can Django/WSGI and PHP share / on Apache?

    - by Bogdan
    In response to: How can Django/WSGI and PHP share / on Apache? Hello, could you please post the complete config file from /sites-available? I am having a problem: it seems like the rewrite engine redirects all requests to Django, so static and PHP files are not served and instead I see the Django 404 page. If I get rid of the rewrite rule, then static files and PHP work. Here is my Apache config file from /sites-available:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /home/www/django
            <Directory />
                Options +FollowSymLinks ExecCGI Indexes
                AllowOverride None
                DirectoryIndex index.php
                AddHandler wsgi-script .wsgi
            </Directory>
            RewriteEngine On
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^(.*)$ /mysite.wsgi/$1 [QSA,PT,L]
        </VirtualHost>

    and my .wsgi file:

        import site
        site.addsitedir('/home/user/.virtualenvs/url.com/lib/python2.6/site-packages')

        import os, sys
        path = '/home/www/django'
        if path not in sys.path:
            sys.path.append(path)
        os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'
        sys.path.append(path + '/mysite')

        import django.core.handlers.wsgi
        _application = django.core.handlers.wsgi.WSGIHandler()

        import posixpath

        def application(environ, start_response):
            # Wrapper to set SCRIPT_NAME to actual mount point.
            environ['SCRIPT_NAME'] = posixpath.dirname(environ['SCRIPT_NAME'])
            if environ['SCRIPT_NAME'] == '/':
                environ['SCRIPT_NAME'] = ''
            return _application(environ, start_response)

    The document root directory on disk (/home/www/django) contains PHP files, images, and the mysite.wsgi file. Thanks for your help.
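
    One likely culprit, offered as a guess: in VirtualHost context mod_rewrite runs before the URL has been mapped to the filesystem, so %{REQUEST_FILENAME} does not yet point at a real path and the !-f test matches everything. A sketch of rules to test instead (mysite.wsgi as in the config above):

        RewriteEngine On
        # test the actual on-disk path, and leave PHP requests alone
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-d
        RewriteCond %{REQUEST_URI} !\.php$
        RewriteRule ^(.*)$ /mysite.wsgi/$1 [QSA,PT,L]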


  • Rails + Nginx + Unicorn multiple apps

    - by Mikhail Nikalyukin
    I've been given a server that currently has two apps installed, and I need to add another one. Here are my configs.

    nginx.conf:

        user www-data www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;

            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            ##
            # Disable unknown domains
            ##
            server {
                listen 80 default;
                server_name _;
                return 444;
            }

            ##
            # Virtual Host Configs
            ##
            include /home/ruby/apps/*/shared/config/nginx.conf;
        }

    unicorn.rb:

        deploy_to   = "/home/ruby/apps/staging.domain.com"
        rails_root  = "#{deploy_to}/current"
        pid_file    = "#{deploy_to}/shared/pids/unicorn.pid"
        socket_file = "#{deploy_to}/shared/sockets/.sock"
        log_file    = "#{rails_root}/log/unicorn.log"
        err_log     = "#{rails_root}/log/unicorn_error.log"
        old_pid     = pid_file + '.oldbin'

        timeout 30
        worker_processes 10
        listen socket_file, :backlog => 1024
        pid pid_file
        stderr_path err_log
        stdout_path log_file
        preload_app true
        GC.copy_on_write_friendly = true if GC.respond_to?(:copy_on_write_friendly=)

        before_exec do |server|
          ENV["BUNDLE_GEMFILE"] = "#{rails_root}/Gemfile"
        end

        before_fork do |server, worker|
          defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect!
          if File.exists?(old_pid) && server.pid != old_pid
            begin
              Process.kill("QUIT", File.read(old_pid).to_i)
            rescue Errno::ENOENT, Errno::ESRCH
            end
          end
        end

        after_fork do |server, worker|
          defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection
        end

    I've also added Capistrano to the project. deploy.rb:

        # encoding: utf-8
        require 'capistrano/ext/multistage'
        require 'rvm/capistrano'
        require 'bundler/capistrano'

        set :stages, %w(staging production)
        set :default_stage, "staging"

        default_run_options[:pty] = true
        ssh_options[:paranoid] = false
        ssh_options[:forward_agent] = true

        set :scm, "git"
        set :user, "ruby"
        set :runner, "ruby"
        set :use_sudo, false
        set :deploy_via, :remote_cache
        set :rvm_ruby_string, '1.9.2'

        # Create uploads directory and link
        task :configure, :roles => :app do
          run "cp #{shared_path}/config/database.yml #{release_path}/config/database.yml"
          # run "ln -s #{shared_path}/db/sphinx #{release_path}/db/sphinx"
          # run "ln -s #{shared_path}/config/unicorn.rb #{release_path}/config/unicorn.rb"
        end

        namespace :deploy do
          task :restart do
            run "if [ -f #{unicorn_pid} ] && [ -e /proc/$(cat #{unicorn_pid}) ]; then kill -s USR2 `cat #{unicorn_pid}`; else cd #{deploy_to}/current && bundle exec unicorn_rails -c #{unicorn_conf} -E #{rails_env} -D; fi"
          end
          task :start do
            run "cd #{deploy_to}/current && bundle exec unicorn_rails -c #{unicorn_conf} -E #{rails_env} -D"
          end
          task :stop do
            run "if [ -f #{unicorn_pid} ] && [ -e /proc/$(cat #{unicorn_pid}) ]; then kill -QUIT `cat #{unicorn_pid}`; fi"
          end
        end

        before 'deploy:finalize_update', 'configure'
        after "deploy:update", "deploy:migrate", "deploy:cleanup"
        require './config/boot'

    nginx.conf in the app's shared path:

        upstream staging_whotracker {
          server unix:/home/ruby/apps/staging.whotracker.com/shared/sockets/.sock;
        }

        server {
          listen 209.105.242.45;
          server_name beta.whotracker.com;
          rewrite ^/(.*) http://www.beta.whotracker.com/$1 permanent;
        }

        server {
          listen 209.105.242.45;
          server_name www.beta.hotracker.com;
          root /home/ruby/apps/staging.whotracker.com/current/public;

          location ~ ^/sitemaps/ {
            root /home/ruby/apps/staging.whotracker.com/current/system;
            if (!-f $request_filename) { break; }
            if (-f $request_filename) { expires -1; break; }
          }

          # cache static files :P
          location ~ ^/(images|javascripts|stylesheets)/ {
            root /home/ruby/apps/staging.whotracker.com/current/public;
            if ($query_string ~* "^[0-9a-zA-Z]{40}$") { expires max; break; }
            if (!-f $request_filename) { break; }
          }

          if ( -f /home/ruby/apps/staging.whotracker.com/shared/offline ) { return 503; }

          location /blog {
            index index.php index.html index.htm;
            try_files $uri $uri/ /blog/index.php?q=$uri;
          }

          location ~ \.php$ {
            try_files $uri =404;
            include /etc/nginx/fastcgi_params;
            fastcgi_pass unix:/var/run/php-fastcgi/php-fastcgi.socket;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          }

          location / {
            proxy_set_header HTTP_REFERER $http_referer;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_max_temp_file_size 0;
            # If the file exists as a static file serve it directly without
            # running all the other rewrite tests on it
            if (-f $request_filename) { break; }
            if (!-f $request_filename) {
              proxy_pass http://staging_whotracker;
              break;
            }
          }

          error_page 502 =503 @maintenance;
          error_page 500 504 /500.html;
          error_page 503 @maintenance;
          location @maintenance {
            rewrite ^(.*)$ /503.html break;
          }
        }

    unicorn.log:

        executing ["/home/ruby/apps/staging.whotracker.com/shared/bundle/ruby/1.9.1/bin/unicorn_rails", "-c", "/home/ruby/apps/staging.whotracker.com/current/config/unicorn.rb", "-E", "staging", "-D", {5=>#<Kgio::UNIXServer:/home/ruby/apps/staging.whotracker.com/shared/sockets/.sock>}] (in /home/ruby/apps/staging.whotracker.com/releases/20120517114413)
        I, [2012-05-17T06:43:48.111717 #14636] INFO -- : inherited addr=/home/ruby/apps/staging.whotracker.com/shared/sockets/.sock fd=5
        I, [2012-05-17T06:43:48.111938 #14636] INFO -- : Refreshing Gem list
        worker=0 ready
        ...
        master process ready
        ...
        reaped #<Process::Status: pid 2590 exit 0> worker=6
        ...
        master complete

    The deploy goes through successfully, but when I try to access beta.whotracker.com or the bare IP address I get a SERVER NOT FOUND error, while the other apps work great. Nothing shows up in the error logs. Can you please point me to where my fault is?
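
    Given the include /home/ruby/apps/*/shared/config/nginx.conf; pattern above, each extra app mainly needs its own shared config dropped into the same layout. A minimal sketch for a hypothetical third app (all names and paths invented for illustration):

        upstream third_app {
          server unix:/home/ruby/apps/staging.thirdapp.com/shared/sockets/.sock;
        }

        server {
          listen 80;
          server_name staging.thirdapp.com;
          root /home/ruby/apps/staging.thirdapp.com/current/public;

          location / {
            try_files $uri @app;
          }
          location @app {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_pass http://third_app;
          }
        }

    ...plus a matching unicorn.rb with its own pid and socket paths; nginx -t && /etc/init.d/nginx reload should then pick it up.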


  • wget mirroring, subdomains and directories and cookies

    - by Jimmu
    Hi all, I have an account on a web page that is now "full" (i.e. I have used up all my allocated space) and I would like to make a mirror of that site. wget seems like the thing to use. The problem is that I would only like to mirror the pages that lie within this directory, http://user.domain.com/room/2324343/transcript/ (and sub-directories), whilst saving the correct stylesheets, javascripts and css etc., which exist in different directories. There are also uploaded files that are linked to within the pages in the transcript directory (in different directories) that I would like to download/mirror (these are in a variety of formats: .exe, .py, .png, .app (and many more)). There are also images that are on different servers that appear on these pages. Also, I would like it if the links (which are sometimes relative, sometimes absolute (but to internal things), sometimes external) worked correctly, so that if they link to things that have been downloaded (mirrored) they work fine (without an internet connection), but if they link to things that are external or haven't been mirrored they link to the external site. Basically, so they work as expected. Another problem is that you have to log in to access the site. Can wget be used to accomplish this, or is there a better way? Either way, how do I achieve this? (I have asked this question at stackoverflow.com/questions/2190115/wget-mirroring-subdomains-and-directories-and-cookies but it was recommended that I try asking it here)
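
    As a starting point, something along these lines covers most of those requirements; a sketch, assuming cookies.txt has been exported from a logged-in browser session (all flags are standard GNU wget options, but the host/directory filters will need tuning):

        wget --mirror --convert-links --page-requisites --no-parent \
             --span-hosts --domains=user.domain.com,static.domain.com \
             --load-cookies cookies.txt \
             http://user.domain.com/room/2324343/transcript/

    --page-requisites pulls in the stylesheets, scripts and images a page needs even when they live outside the target directory, --convert-links rewrites links to point at local copies where they exist and leaves the rest pointing at the live site, and --no-parent keeps the crawl inside /transcript/.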


  • VMware databursts to vCenter

    - by Erwin Blonk
    [edit Nov. 16th 2009: Thank you for the responses. I'm no longer on this project, so the problem is out of my hands. Be sure I will return with other problems :-) ] I have a strange occurrence of regular data bursts over my WAN links from 2 sites to a site with vCenter. What happens is that a vCenter server is managing local ESX 3.5 servers and 2 more from 2 sites across a WAN link. Each server sends approximately 3 MB worth of TLS data (less than 10% of the time it varies higher or lower) every 15 minutes (with a margin of 2 minutes). So far, I've not been able to single out a process that causes it. I looked through all applications on each site. So far, it seems to originate from one server on each site. Although it may be coincidence and therefore not relevant, I found that one server, with very few exceptions, does a burst at 00:00 on the hour. The other 3 bursts during the hour are a bit off the 15-minute mark, but back at the top of the hour you can sync your watch on it. The other server follows 5 minutes after that with no such precision. But, as said, it never differs by more than 2 minutes. Servers are ESX 3.5, vCenter is 2.5.


  • Always failed in connecting to the Outlook Anywhere through TMG 2010 with certificate ?

    - by Albert Widjaja
    Hi, I have successfully published Exchange ActiveSync using TMG 2010, and OWA internally only, but somehow when I tried to publish Outlook Anywhere it failed (as can be seen from https://www.testexchangeconnectivity.com).

    Settings: in IIS 7, I have unchecked "require SSL" and set the client certificate to "Ignore".

    Exchange CAS settings:

        ServerName                 : ExCAS02-VM
        SSLOffloading              : True
        ExternalHostname           : activesync.domain.com
        ClientAuthenticationMethod : Basic
        IISAuthenticationMethods   : {Basic}
        MetabasePath               : IIS://ExCAS02-VM.domainad.com/W3SVC/1/ROOT/Rpc
        Path                       : C:\Windows\System32\RpcProxy
        Server                     : ExCAS02-VM
        AdminDisplayName           :
        ExchangeVersion            : 0.1 (8.0.535.0)
        Name                       : Rpc (Default Web Site)
        DistinguishedName          : CN=Rpc (Default Web Site),CN=HTTP,CN=Protocols,CN=ExCAS02-VM,CN=Servers,CN=Exchange Administrative.......
        Identity                   : ExCAS02-VM\Rpc (Default Web Site)
        Guid                       : 59873fe5-3e09-456e-9540-f67abc893f5e
        ObjectCategory             : domainad.com/Configuration/Schema/ms-Exch-Rpc-Http-Virtual-Directory
        ObjectClass                : {top, msExchVirtualDirectory, msExchRpcHttpVirtualDirectory}
        WhenChanged                : 18/02/2011 4:31:54 PM
        WhenCreated                : 18/02/2011 4:30:27 PM
        OriginatingServer          : ADDC01.domainad.com
        IsValid                    : True

    Test-OutlookWebServices results:

        1013 Error  When contacting https://activesync.domain.com/Rpc received the error: The remote server returned an error: (500) Internal Server Error.
        1017 Error  [EXPR]-Error when contacting the RPC/HTTP service at https://activesync.domain.com/Rpc. The elapsed time was 0 milliseconds.

    https://www.testexchangeconnectivity.com testing result:

        Checking the IIS configuration for client certificate authentication.
        Client certificate authentication was detected.
        Additional Details: Accept/Require client certificates were found. Set the IIS configuration to Ignore Client Certificates if you aren't using this type of authentication.

    Environment: Windows Server 2008 (HT-CAS), Exchange Server 2007 SP1, TMG 2010 Standard, Outlook 2007 client SP2. Any kind of help would be greatly appreciated. Thanks.
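
    Since the remote connectivity test still detects Accept/Require client certificates, it may be worth checking the sslFlags on the Rpc virtual directory at the configuration level rather than in the IIS Manager UI. A sketch, assuming the directory sits under the Default Web Site on the CAS box:

        %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site/Rpc" -section:system.webServer/security/access
        rem "None" = SSL not required and client certificates ignored, matching
        rem the unchecked require-SSL setting used for TMG's SSL offloading
        %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site/Rpc" -section:system.webServer/security/access /sslFlags:"None" /commit:apphost

    An iisreset afterwards wouldn't hurt.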


  • process and memory issue on linux server

    - by zapping
    Need some assistance in analyzing Apache and PHP processes running on a Linux server. It's an 8-core Intel processor with 4 GB RAM. When the website on it runs, top displays like this:

          PID USER      PR  NI  VIRT  RES   SHR S %CPU %MEM    TIME+  COMMAND
        23459 username1 16   0  151m  27m  8388 S 11.3  0.7   0:11.71 php5
        23730 username1 16   0  151m  28m  8388 S 11.3  0.7   0:03.87 php5
        23458 username1 16   0  151m  28m  8388 S  3.0  0.7   0:19.20 php5
        16202 mysql     15   0  459m  38m  4624 S  0.7  1.0  62:33.81 mysqld
        24141 nobody    15   0  311m 5832  2304 S  0.3  0.1   0:00.03 httpd

    Why does the command say php5 when the website is accessed? Both Apache and PHP were preconfigured, so I'm not sure what was done there. I tried setting up the same site and DB on a different server, but on it the process always shows httpd and never php5. The site uses a MySQL DB. The problem is that the server load goes up to about 5.x when the website is accessed by about 16 users. When the free -m command is given, the output shows:

                     total       used       free     shared    buffers     cached
        Mem:          3941       3727        213          0        236       2734
        -/+ buffers/cache:        756       3184
        Swap:         4095          0       4095

    Lots of memory seems to be in cache and free memory is low. Even when the website is not accessed - leaving it pretty much idle for about 2 days - the free memory showed just 190. When the site is accessed, the free memory goes down to about 90 MB, then increases to about 150 MB; it always seems to remain at just about 200 MB. Is this somehow related to the server load showing 5.x? Will adding some more RAM resolve the load issue?
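
    Two quick checks usually answer both halves of this question; a sketch using standard procps/sysstat tools (mpstat comes with the sysstat package):

        # the "-/+ buffers/cache" line is the one that matters: ~3.1 GB here is
        # reclaimable page cache, so the box is not actually short on memory
        free -m

        # on an 8-core machine a load average of 5.x can still leave idle CPUs;
        # watch per-core utilization and iowait instead of the headline number
        mpstat -P ALL 1 5
        grep -c ^processor /proc/cpuinfo

    As for the process name: php5 in top typically means PHP is running as CGI/FastCGI (separate php5 worker processes) on this server, whereas the other server presumably runs mod_php inside the httpd workers.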


  • Apache 2.2 Present rss http 410 pages as application/rss+xml content type

    - by Mark Bakker
    I have a problem sending HTTP 410 for very old RSS feeds. Functionally, this can happen in two cases: (1) very old RSS feeds whose content is not updated anymore / the subject could not be moved to another feed; (2) migration from a third-party site to our site, where the RSS feed is no longer functionally supported. I tried several things in my site config, see below:

        <VirtualHost *:80>
            DocumentRoot /opt/tomcat/webapps/ROOT/
            ErrorDocument 500 /error/static/error-500.html
            ErrorDocument 503 /error/static/error-500.html
            ErrorDocument 404 /error/static/rss/error-404.html
            ErrorDocument 410 /error/static/rss/error-410.html

            # When error pages need to be served by apache,
            # exclude the files to serve as below (in comment)
            SetEnvIf Request_URI "/error/static/*" no-jk

            # force all files to be image/gif:
            <Location *.rss>
            #<Location *>
                #ForceType application/rss+xml
            </Location>
            #AddType application/rss+xml .rss
            #AddType application/rss+xml .xml
            #AddType application/rss+xml .html

            JkMount /* rss;use_server_errors=402
            # JkMount /* rss
            RewriteEngine on
            JkMount /news.rss rss
            JkMount /documenten-en-publicaties.rss rss
            RewriteEngine on
            RewriteRule ^/news.rss$ - [NC,T=application/rss+xml,G,L]
            RewriteRule ^/documenten-en-publicaties.rss$ - [NC,G,L]

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            ErrorLog "|/usr/bin/logger -s -p local3.err -t 'Apache'"
            CustomLog "|/usr/bin/logger -s -p local2.info -t 'Apache'" combined
            ServerSignature Off
        </VirtualHost>

    The desired end result: on /news.rss and /documenten-en-publicaties.rss, a 410 response whose error-page body is served with content type application/rss+xml.
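
    One approach worth trying, sketched on the assumption that the [G] rewrite flag generates the 410 and the body comes from the ErrorDocument file: set the type on the error document itself rather than on the .rss URLs, e.g.

        <Files "error-410.html">
            ForceType application/rss+xml
        </Files>

    or scoped by path:

        <Location /error/static/rss>
            ForceType application/rss+xml
        </Location>

    Since the ErrorDocument response is served from that file via a subrequest, it is the error file's content type - not the type of the originally requested URL - that reaches the client.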


  • Add bookmarks to Delicious and Google Bookmarks at the same time

    - by BrianH
    I have used delicious.com (or back then, del.icio.us) to store my bookmarks for a long time now, and I love it. I was looking through some of my Google services, and realized they have a bookmarking service that integrates with your Google searches (I thought they had a bookmarking service before, but it went away? Maybe not). I like delicious just fine - I'm not interested in leaving. But I also like how my Google bookmarks are highlighted (and I'm guessing, brought to the top) in my search results so I can easily tell if I've bookmarked a site (kind of like the "promote up" feature). I can't even count the number of times I search for a site only to find I've been there months or years ago. If sites I've bookmarked in the past are highlighted in my search results, it makes it easier to pick which search result to go to. My question is around bookmarking tools: Is there a bookmarklet or Firefox addon that will let me save a bookmark to multiple services at the same time, in this case, Google and Delicious? Or maybe a service to sync my delicious bookmarks to Google bookmarks on a regular basis? I have used the Delicious addon since the beginning - it would just be nice to add a bookmark to multiple services with 1 addon. For that matter, it would be nice to add Evernote into the mix - click 1 button to save the page to Evernote, and bookmark the page in Google and delicious. EDIT on 7/30/2009 - Summary: A proposed solution is to use the Delicious addon and the GMarks addon to keep the 2 services in sync. I was not able to get the 2 addons to keep everything in sync, so it was also suggest to use the Google Toolbar with the Delicious addon to keep everything in sync. I personally have reservations with letting Google know about every single site I visit, I believe this solution will work, so I am accepting it as the answer. I still wish there was a solution that would let you post a bookmark/page to multiple services at the same time (delicious, google, evernote, digg, diigo, etc.). Thanks!



  • How do I host multiple independent, secured SharePoint sites (WSS 3.0) without using Active Directory on the same server?

    - by Kyle Noland
    I have a SharePoint site set up on one of my networks to service Active Directory users. To be clear, this is a Windows SharePoint Services 3.0 installation running on Windows Server 2003 Standard. It is not an option to upgrade the server or SharePoint version. Management would like to create several new sites, one for each of a handful of clients. These sites will be used like "dropboxes" or FTP sites so that my company can make large files available to outside contacts, and vice versa. Here are my requirements: I do not want to have to create Active Directory accounts for each external contact. If possible, I would like to store the external usernames and passwords in a database that I can write a small GUI for so that management can handle adding their own external contacts. Each client site must be sandboxed from each other and from my main company SharePoint site. I would like to keep everything running on port 80 and be able to access the sites as either clientname.mycompany.com or www.mycompany.com/clientname If anybody has ever done this I would really appreciate hearing about any lessons you learned and suggestions for how to set this up. Kyle
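
    For the credentials store, ASP.NET forms authentication with a membership provider is the usual route on WSS 3.0 and keeps Active Directory out of the picture entirely. A rough sketch of the web.config piece, assuming the stock SqlMembershipProvider and a database the small admin GUI would write to (all names illustrative):

        <connectionStrings>
          <add name="ClientMembership"
               connectionString="Server=DBSERVER;Database=ExternalUsers;Integrated Security=true" />
        </connectionStrings>

        <membership defaultProvider="ClientMembershipProvider">
          <providers>
            <add name="ClientMembershipProvider"
                 type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
                 connectionStringName="ClientMembership"
                 applicationName="/" />
          </providers>
        </membership>

    Each client site would then be a separate web application (or extended zone) configured for forms authentication, which also provides the sandboxing between clients.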


  • How to setup equivalent USVIDEO.ORG DNS-Proxy on Linux

    - by Gary
    I have a VPS in the USA running Ubuntu. I want to set up something similar to http://www.usvideo.org. Basically, USVIDEO is a DNS service that allows Canadians to access American content like Hulu, Netflix, NBC, etc. (restricted by geographical IP). Here is how I think USVideo does it:

    1. Clients (PS3, Xbox, PC) specify the DNS server(s) listed on USVIDEO.org's website.
    2. If the DNS request is for a video/audio site such as Netflix or Pandora, the request is forwarded to a proxy; all other requests are forwarded to a different DNS server.
    3. When a specific video/audio URL is requested, the DNS server returns the address of the proxy server, which in turn relays traffic to the destination video/audio domain via the U.S. gateway, so that the access appears to come from a U.S. IP address.
    4. Once the request has passed the U.S. IP address check, the proxy server steps out of the loop and lets the video streaming site contact you directly to start the video stream.

    This trick relies on the way the video streaming sites check the country of your IP address once up front, but don't actually check the country of the destination IP address while the video is streaming. What is elegant about this solution is that no VPN tunnel is required to bypass geographical IP checks. All that is required on the client side is to specify the DNS server (the VPS). If a certain site is geographically locked, just forward the traffic to a proxy, and that's it. These sites can be specified in the DNS entries, or perhaps the proxy service can redirect the DNS request to its own proxy. I believe what I need to set up something similar is Squid Proxy, iptables, and DNS. What I need help with is how exactly to approach this. Would Squid Proxy be set up as a transparent proxy?
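
    The DNS half of this can be expressed very compactly with dnsmasq; a sketch, with 203.0.113.10 standing in for the VPS's public IP and the domain list purely illustrative:

        # /etc/dnsmasq.conf
        # answer the geo-locked domains with the proxy's address...
        address=/netflix.com/203.0.113.10
        address=/hulu.com/203.0.113.10
        # ...and forward everything else to a normal resolver
        server=8.8.8.8

    The harder half is the relay on ports 80/443: Squid can front the plain-HTTP side, but TLS traffic has to be passed through untouched, which is why setups like this usually pair the DNS trick with a TCP-level forwarder rather than a caching proxy.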


  • Could this server log mean my server is being used as a proxy?

    - by So Over It
    I came across the following entry in my access.log:

        58.218.199.147 - - [05/Jun/2012:12:56:04 +1000] "GET http://proxyproxys.com/ HTTP/1.1" 200 183 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"

    Normally when I see a full URL entry in my access.log I assume it is log spam from people trying to get me to access their site, and these entries are normally followed by a 404 response. The above entry is followed by a 200 'success' response! Doing some searching, it would seem that this can occur when someone is trying to use your server as a proxy. This disturbed me more - especially because the URL in question has the word proxy in it. Going to the site proxyproxys.com (using hidemyass.com to protect my own identity), the site returns what appears to be some sort of 'proxy judge':

        ----------------------------------------
        HTTP_ACCEPT=text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        HTTP_ACCEPT_LANGUAGE=en-US,en;q=0.8
        HTTP_USER_AGENT=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.53 Safari/536.5
        HTTP_CONNECTION=close
        REMOTE_PORT=56355
        REMOTE_HOST=74.63.112.142
        REMOTE_ADDR=74.63.112.142
        ----------------------------------------
        CS_ProxyJudge Result=HIGH_ANONYMITY
        ----------------------------------------

    Questions:
    1) Does the 200 success mean that someone has been able to successfully use my server as a proxy?
    2) Are there other means of confirming whether my server is being used as a proxy?
    3) Can you refer me to documentation to help 'close up' the security gap if there is one?
    Thanks.
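
    For question 2, a quick self-test from an outside machine, sketched with curl (substitute the server's public IP): ask the server to fetch a third-party URL as though it were a proxy and see what comes back.

        # send an absolute-URL (proxy-style) request through your server
        curl -sI -x http://YOUR.SERVER.IP:80 http://www.example.com/

    A 200 carrying example.com's actual headers/content means the box really is relaying (on Apache, typically mod_proxy with ProxyRequests On left enabled); your own site's response or an error means the logged 200 was just your server answering the absolute-URL request with local content.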


  • MMC crashes on Windows Server 2008 x64 - Exchange console, event viewer

    - by David M Williams
    Help! I don't know what happened; this server has been very reliable but suddenly began having problems with a particular .NET 2.0 web site simply hanging - it wouldn't load at all. However, another ASP.NET site was still fine. Reinstalling the site didn't fix it, nor did deleting and re-creating the application within IIS. Trying the event viewer was met with a horrifying "Microsoft Management Console has stopped working". Some Googling led me to believe the .NET framework was the problem. I found a tool called the .NET cleanup tool - http://blogs.msdn.com/astebner/pages/8904493.aspx - which cleaned out .NET entirely. I reinstalled .NET 1.1 and 3.5 (which installed 2.0 and 3.0 as well). Using the .NET verification tool - http://blogs.msdn.com/astebner/pages/8999004.aspx - I believe these have all installed ok. However, my server is in worse shape now. The Exchange 2010 Management Console crashes with an MMC error and now my other (previously reliable) .NET web app now hangs on loading too. I thought I should use Computer Management to remove and re-add the application and web server roles but sure enough, MMC crashes. If anyone can help I will be extremely grateful. Thank you !


  • Problem configuring Apache/Wordpress on subdomain

    - by friism
    I have two servers (one LAMP, one Windows) and one website with an associated blog. I'm running the main site on the Windows server, and the blog on the LAMP server, using Wordpress. The main site is accessed at http://folketsting.dk (it's in Danish -- sorry), the blog is accessed at http://blog.folketsting.dk (this link is bad, read on). The main site works fine. The blog works, except for the frontpage. Example of working post: http://blog.folketsting.dk/2009/10/09/ftlive/. The frontpage of the blog (http://blog.folketsting.dk) shows html from http://folketsting.dk however (except for the css and javascript). In fact, any other URL than the frontpage "works", and gets served by Wordpress e.g. http://blog.folketsting.dk/foo. I cannot -- for the life of me -- understand how the LAMP server running http://blog.folketsting.dk manages to serve up content generated by the Windows server running http://folketsting.dk. Looking at the response headers at http://blog.folketsting.dk, it's evident that the content originates from Apache, not IIS. I'm pretty sure it's not a DNS-issue, since the problem is evident even when accessing the raw IP, eg. http://130.226.142.141/ vs. http://130.226.142.141/foo. I'm thinking it's a bad config in Apache... any clues?
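
    One way to narrow this down: ask Apache which vhost actually catches each request, then compare the frontpage and a sub-path response outside the browser. A sketch with standard tools:

        # show the parsed vhost list and which one is the default for *:80
        apachectl -S

        # compare headers for / and for a deeper path
        curl -sI http://blog.folketsting.dk/ | head -n 5
        curl -sI http://blog.folketsting.dk/foo | head -n 5

    If both responses really come from Apache, something in the blog vhost (a RewriteRule or ProxyPass matching only the root URL, ^/$) is likely fetching the frontpage from the Windows box; grepping that vhost's config for ProxyPass/RewriteRule should confirm it.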


  • OpenSSL response 404 issue on centOS 6

    - by dsp_099
    I followed this tutorial (though it's for 5.2, I figured I'd be alright). The changes I had to make that seemed to have worked:

    • Rename ca.csr to ca.cslr (that's the one the command generated)
    • List it in ssl.conf as ca.cslr instead of ca.csr

    I have the following in httpd.conf:

        <VirtualHost *:80>
            DocumentRoot /etc/test
            ServerName site.com
        </VirtualHost>

        <VirtualHost *:433>
            SSLEngine on
            SSLCertificateFile /etc/pki/tls/certs/ca.crt
            SSLCertificateKeyFile /etc/pki/tls/private/ca.key
            <Directory /etc/test>
                AllowOverride All
            </Directory>
            DocumentRoot /etc/test
            ServerName cryptokings.com
        </VirtualHost>

    /etc/test contains a folder inside of it, accessible via http://site.com/test/foo; however, attempting to access it via https://site.com/test/foo results in a warning that the certificate is untrusted (self-signed, no biggie) and a 404 error. Chrome's complaints about the certificate are the following:

        The identity of this website has not been verified.
        • Server's certificate does not match the URL.
        • Server's certificate is not trusted.

    I think those warnings are a side-effect of a self-signed certificate - or is the first one something that needs to be addressed? I seem to be able to fetch the root page via https just fine, though; it shows a standard CentOS setup page. (That said, I haven't added a VirtualHost entry for it, so I suppose that makes sense.) I think I've made a mistake somewhere during the setup, as I'm not too familiar with the process. During setup, I was prompted for a type of password that would be required when Apache restarts, but running service httpd restart does not seem to prompt me for one. Any help would be appreciated.
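
    Two checks that usually locate this kind of mismatch, using standard openssl/apachectl commands:

        # which vhost (if any) answers on 443 - note the config above says *:433
        apachectl -S

        # what certificate the server actually presents
        openssl s_client -connect site.com:443 </dev/null 2>/dev/null \
            | openssl x509 -noout -subject -issuer -dates

    If apachectl -S shows no vhost bound to port 443, the HTTPS requests are being answered by the stock SSL vhost from ssl.conf (hence the CentOS test page and the name mismatch) rather than by the cryptokings.com vhost.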


  • How do I tell if my drivers are up to date on Acer?

    - by joe
    Hoping some kind souls can help me out? I got a blue screen the other day after trying to load Sandboxie, so it's obviously conflicting with something. I checked whether my drivers were up to date on my Acer Aspire One AOD270 on this Intel-based site: http://www.drivermanager.com/en/down...tel&Logo=intel. It's showing I have 2 drivers that need updating: the Intel NM10 Express chipset and the Realtek PCIE Cardreader. I have no idea whether to do the update via the Intel driver update site or the Acer drivers download page. I then ran BlueScreenView, and on the dump file it's showing:

        ''caused by driver''    igdkmd32.sys
        ''file description''    Intel (R) WDDM Kernel mode driver
        ''product name''        Intel Graphics Accelerator Drivers for Windows 7(R)

    I bought the laptop here in SE Asia about a year ago. The ''HOT!! NEW download tool'' on the Acer drivers site (below) doesn't seem to work, and the info about removing and installing drivers is limited. Not sure what to trust on non-manufacturer sites. http://support.acer.com/us/en/produc...1&modelId=4040. I've located the igdkmd32.sys file inside the INTEL GRAPHICS MEDIA ACCELERATOR 3600 SERIES 8.14.8.1064. When I click on ''update driver'' in Control Panel, it searches and says it's up to date. In Windows maintenance it says this Intel driver had a problem, but offers no solution. For all I know my drivers could be up to date and it's something else. Can anybody give a dummy a step-by-step on the process I should follow? I've never done this before - e.g. do I delete the old driver first and then download the new one? How much of a problem could I cause by downloading this type of thing wrongly? As yet I haven't downloaded any drivers. I've asked on other forums but no luck as yet. Thanks for any help!


  • managing a high traffic media sharing website

    - by Jordan Westerman
    I'm in the process of developing a website that I predict will generate a lot of traffic. The site will be similar to many other sites offering free media streaming: MP3s. We are going to start with a pretty minimal amount of media to share, but the basic idea is that artists will set up a profile page with music they have made available, for consumers to visit the page and listen to. We are starting with just a handful of artists, but I think this project will generate more and more artist pages. Eventually I'd like to set it up so consumers can create personalized playlists. How can I best prepare server space and bandwidth capabilities? I have a small team of web designers and programmers working on the site, as I am pretty illiterate when it comes to site management. As the ringleader of this organization, I am more or less looking for financial requirements and monthly burn rate estimates. I don't have a ton of capital to start with; I'm putting together a business plan and seeking investments. I have a game plan to grow fast enough to be successful, and slow enough to manage the financial growth requirements. Any questions I may have failed to ask myself? Is it realistic to start this project on a shared server and upgrade later? Any financial advice you think I can use? I really appreciate any advice given, as this is my first business venture. Thank you all in advance. Jordan Westerman, D.B.A. Badfish Productions, LLC


  • Googlebot repeatedly looks for files that aren't on my server

    - by John at CashCommons
    I'm hosting a site for a volunteer organization. I've moved the site to WordPress, but it wasn't always that way; I suspect at one point it was hacked badly. My Apache error log file has grown to 122 kB in just the past 18 hours. The large majority of the errors logged are of this form - it's repeated hundreds of times today alone in my log files:

        [Mon Nov 12 18:29:27 2012] [error] [client xx.xxx.xx.xxx] File does not exist: /home/*******/public_html/*******.org/calendar.php
        [Mon Nov 12 18:29:27 2012] [error] [client xx.xxx.xx.xxx] File does not exist: /home/*******/public_html/*******.org/404.shtml

    (I verified that xx.xxx.xx.xxx was a Google server.) I suspect there was a security hole somewhere before, likely in calendar.php, that was exploited. The files don't exist anymore, but there may be many backlinks still referencing them, hence why Googlebot is so interested in crawling them. How do I fix this gracefully? I still would like Google to index the site; I just want to tell it somehow not to look for these files anymore.
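
    The graceful fix is usually an explicit 410 Gone, which tells Googlebot the URL is permanently dead rather than temporarily missing. A sketch for the site's .htaccess, using mod_alias and the path from the log above:

        # permanently gone - crawlers stop retrying much sooner than with a 404
        Redirect gone /calendar.php

    Alternatively, a Disallow: /calendar.php line in robots.txt stops the crawling outright, though the 410 is the cleaner signal that the page should drop out of the index while the rest of the site stays indexed.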


  • GoDaddy SSL on Shared Hosting

    - by Jon
    So I'm very new to using SSL certificates, and I have been trying to install one on a site for a client. He is using shared hosting for multiple domains through GoDaddy, and the site we're working on is not the primary domain. He purchased a UCC certificate for multiple domains and I installed it on the shared hosting account. My thought was that since the domains were under the same hosting account, they would each be protected under the certificate. This was not the case... apparently. I checked both domains with an SSL checker; the primary domain checked out, but the domain that we wanted the SSL on showed the following error:

        None of the common names in the certificate match the name that was entered (www.CLIENTDOMAIN.com). You may receive an error when accessing this site in a web browser.

    I'm not sure how to fix this. It was just purchased yesterday, so if necessary, I guess I could un-install it or re-key it (???). Is there a way to just change the common name to www.CLIENTDOMAIN.com (the correct domain)?
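
    A UCC certificate covers a domain only if it is listed as a Subject Alternative Name, so the first thing to verify is which names the installed certificate actually contains; a quick check from any machine with openssl:

        openssl s_client -connect www.CLIENTDOMAIN.com:443 </dev/null 2>/dev/null \
            | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"

    If www.CLIENTDOMAIN.com is missing from that list, the fix happens on GoDaddy's side: manage/re-key the UCC certificate and add the domain as an additional SAN (no need to change the common name).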


  • Problem adding second domain controller to SBS 2008

    - by Quango
    We have an SBS 2008 server in one location and want to add a backup domain controller at a different site; the two sites are linked by a VPN. The new server is running Server 2008 R2, fully patched. At present it is a member server and its DNS is pointing at the SBS DNS. When I try running DCPROMO to promote the server, the wizard runs fine up to the point where it is 'configuring Active Directory Domain Services' and 'examining forest':

        "The operation failed because: The wizard could not read operational attributes from the remote Active Directory Domain Controller SERVER.DOMAIN.LOCAL using LDAP. "The specified server cannot perform the requested operation." This error can occur if you have not been granted necessary permissions to read data in the directory. For more information, please see article 936241 in the Microsoft Knowledge Base (http://go.microsoft.com/fwlink/?LinkId=88420)."

    I was logged on as domain administrator. Interestingly, the link is invalid and the KB article does not exist..!

    Settings:

        Configure this server as an additional Active Directory domain controller for the domain "[domain]".
        Site: [site]
        Additional Options:
            Read-only domain controller: No
            Global catalog: Yes
            DNS Server: Yes
        Update DNS Delegation: No
        Source domain controller: any writable domain controller
        Database folder: C:\Windows\NTDS
        Log file folder: C:\Windows\NTDS
        SYSVOL folder: C:\Windows\SYSVOL
        The DNS Server service will be configured on this computer.
        This computer will be configured to use this DNS server as its preferred DNS server.
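
    Since the wizard is failing on LDAP reads across the VPN, some basic health and connectivity checks run from the new box may narrow things down; a sketch with the in-box AD tools (PortQry is a separate free Microsoft download):

        dcdiag /s:SERVER.DOMAIN.LOCAL /v
        nltest /dsgetdc:DOMAIN.LOCAL /gc
        portqry -n SERVER.DOMAIN.LOCAL -p tcp -e 389
        portqry -n SERVER.DOMAIN.LOCAL -p tcp -e 3268

    VPN links are also a classic source of MTU/fragmentation problems with large LDAP responses, which can surface as exactly this sort of mid-wizard failure.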


  • Symantec Protection Suite Enterprise Edition

    - by rihatum
    We (our company) are planning to deploy Symantec Endpoint Protection and Symantec Desktop Recovery 2011 Desktop Edition to our 3000-4000 workstations (Windows 7, 32- and 64-bit), with a few hundred on Windows XP 32/64-bit. I have read the implementation guide for SEP and the tech notes for Desktop Recovery 2011. Our team has planned to deploy this as follows:

    • 1 x dedicated SQL 2008 R2 for Symantec Endpoint Protection (instead of using the embedded database)
    • 1 x dedicated SQL 2008 R2 for Symantec Desktop Recovery 2011 (instead of using the embedded database)
    • 1 x dedicated W2K8 R2 box for the SEPM (Symantec Endpoint Protection Manager - management app)
    • 1 x dedicated W2K8 R2 box for the Symantec Desktop Recovery 2011 management application

    Agent deployment: as per the Symantec documentation for both of the above, an agent can be pushed via the management application (provided no firewalls are blocking the required ports etc. - we have the Windows firewall disabled already). The above is the initial plan we have for 3000-4000 (Windows) client workstations. Now my questions :-)

    a) If we had these users distributed amongst two sites, with an AD DC/GC in each site, how would I restrict SEPM and the desktop management solution to only check for users in their respective site?
    b) At present all users are in one building, but we are going to move some departments to a new location (with dedicated connectivity). How would we control which SEPM/management server is responsible for which site?
    c) What hardware would you recommend as a server spec for the SQL servers? 16 GB RAM, dual Xeon?
    d) What hardware would you recommend as a server spec for the management servers? 16 GB RAM each, with dual Xeon and SAS disks?
    e) Also, how would you recommend protecting these 4 servers (2 x SQL and 2 x management servers)?
    f) How would you recommend storing backups for these desktops? We have a SAN and a NAS in our environment, and one spare DAS (Dell MD3000).

    If you have anything to add or correct, that would be really helpful before diving into the actual implementation phase. Will be most grateful for your suggestions, recommendations and corrections - many thanks! Rihatum


  • MOSS2007 tries to use ActiveDirectory when I have configured an alternative membership provider

    - by glenatron
    I've got a MOSS site that I am trying to configure using forms authentication and absolutely any kind of membership provider whatsoever. Thus far ActiveDirectory has proved obstructively difficult, so I've just whipped up a simple stub membership provider and put it in the GAC. It's a very basic and simple provider, but it works fine with an ASP.NET site; I just can't make it work with SharePoint. On SharePoint I get the following error when I look for StubProvider:Bob (or anything else for that matter) from the "Policy For Web Application" people picker:

        Error in searching user 'StubProvider:bob' : System.ComponentModel.Win32Exception: Unable to contact the global catalog server
            at Microsoft.SharePoint.Utilities.SPActiveDirectoryDomain.GetDirectorySearcher()
            at Microsoft.SharePoint.WebControls.PeopleEditor.SearchFromGC(SPActiveDirectoryDomain domain, String strFilter, String[] rgstrProp, Int32 nTimeout, Int32 nSizeLimit, SPUserCollection spUsers, ArrayList& rgResults)
            at Microsoft.SharePoint.Utilities.SPUserUtility.SearchAgainstAD(String input, SPActiveDirectoryDomain domainController, SPPrincipalType scopes, SPUserCollection usersContainer, Int32 maxCount, String customQuery, TimeSpan searchTimeout, Boolean& reachMaxCount)
            at Microsoft.SharePoint.Utilities.SPActiveDirectoryPrincipalResolver.SearchPrincipals(String input, SPPrincipalType scopes, SPPrincipalSource sources, SPUserCollection usersContainer, Int32 maxCount, Boolean& reachMaxCount)
            at Microsoft.SharePoint.Utilities.SPUtility.SearchPrincipalFromResolvers(List`1 resolvers, String input, SPPrincipalType scopes, SPPrincipalSource sources, SPUserCollection usersContainer, Int32 maxCount, Boolean& reachMaxCount, Dictionary`2 usersDict)

    The provider is named as the authentication provider for the site collection in question. As far as I can tell, this is because SharePoint is still trying to access Active Directory rather than talking to the provider I'm asking it to use. My SharePoint Central Administration section includes this:

        <membership>
          <providers>
            <add name="StubProvider"
                 type="StubMembershipProvider.Provider, StubMembershipProvider, Version=1.0.0.0, Culture=neutral, PublicKeyToken=5bd7e2498c3e1a03" />
          </providers>
        </membership>

    And also:

        <PeoplePickerWildcards>
          <clear />
          <add key="StubProvider" value="%" />
        </PeoplePickerWildcards>

    Is there a clear reason why this would not be accessible from the people picker, or why it is still trying to use Active Directory? I've made sure I reset IIS and even restarted the server to see if either of those helped, but they made no difference.


  • Mirror a Dropbox repository in Sharepoint and restrict access

    - by Dan Robson
    I'm looking for an elegant way to solve the following problem: my development team uses Dropbox for sharing documents amongst our immediate group. We'd like to put some of those documents into a SharePoint repository for the larger group to be able to access, as granting Dropbox access to the group at large is not ideal. However, we'd like to continue to be able to propagate changes to the SharePoint site simply by updating the files in Dropbox on our local client machines, and also vice versa - users granted access on SharePoint who update files in that workspace should be able to save their files and have the changes appear automatically on our client PCs. I've already organized the folders so that in Dropbox there exists a SharePoint folder that looks something like this:

        SharePoint
        ----Team
        --------Restricted Access Folders
        ----Organization
        --------Open Access Folders

    The Dropbox master account and the SharePoint master account are both set up on my file server. Unfortunately, Dropbox doesn't seem to allow syncing of folders anywhere above the \Dropbox\ part of the file system's hierarchy - or all I would have to do is find where the SharePoint repository is maintained locally, and I'd be golden. So it seems I have to do some sort of 2-way synchronization between the Dropbox folder on the file server and the SharePoint folder on the file server. I messed around with Microsoft SyncToy, but it seems to be lacking in the area of real-time updating - and as much as I love rsync, I've had nothing but bad luck with it on Windows, and again, it has to be kicked off manually or through Task Scheduler - and I just have a feeling that if I go down that route, it's only a matter of time before I get conflicts all over the place in either Dropbox, SharePoint, or both. I really want something that's going to watch both folders, and when one item changes, the other automatically updates in "real-time". It's quite possible I'm going down the entirely wrong route, which is why I'm asking the question. For simplicity's sake, I'll restate the goal: to be able to update Dropbox and have it viewable on the SharePoint site, or to update the SharePoint site and have it viewable in Dropbox. And since I'm a SharePoint noob, I'll also need help hiding the "Team" subfolder from everyone not in a specific group in AD.
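
    On the Windows side, the closest built-in thing to a folder watcher is robocopy's monitor mode; it is not true two-way sync, but two opposing one-way mirrors running continuously get close. A sketch with hypothetical paths, run as two separate scheduled or service processes:

        :: Dropbox -> SharePoint-synced folder; re-run on any change, at most once a minute
        robocopy "C:\Dropbox\SharePoint" "C:\SharePointDrafts" /MIR /MON:1 /MOT:1

        :: ...and the reverse direction
        robocopy "C:\SharePointDrafts" "C:\Dropbox\SharePoint" /MIR /MON:1 /MOT:1

    One hedge worth repeating: /MIR in both directions can propagate deletions (and fight over conflicting edits) in surprising ways, so /E without /PURGE - copy-only, no deletes - is the safer first experiment.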

