Search Results

Search found 8284 results on 332 pages for 'trusted sites'.


  • connect() failed (111: Connection refused) while connecting to upstream

    - by Burning the Codeigniter
    I'm getting 502 Bad Gateway errors when accessing a PHP file in a directory (http://domain.com/dev/index.php). The log simply says this:

        2011/09/30 23:47:54 [error] 31160#0: *35 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xx.xx.xx, server: domain.com, request: "GET /dev/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "domain.com"

    I've never seen this before - how do I fix this type of 502 gateway error? This is the nginx.conf:

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            # server_tokens off;
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            ##
            # Gzip Settings
            ##
            gzip on;
            gzip_disable "msie6";
            # gzip_vary on;
            # gzip_proxied any;
            # gzip_comp_level 6;
            # gzip_buffers 16 8k;
            # gzip_http_version 1.1;
            # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            ##
            # Virtual Host Configs
            ##
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

        #mail {
        #    # See sample authentication script at:
        #    # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
        #
        #    # auth_http localhost/auth.php;
        #    # pop3_capabilities "TOP" "USER";
        #    # imap_capabilities "IMAP4rev1" "UIDPLUS";
        #
        #    server {
        #        listen localhost:110;
        #        protocol pop3;
        #        proxy on;
        #    }
        #
        #    server {
        #        listen localhost:143;
        #        protocol imap;
        #        proxy on;
        #    }
        #}
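
    "Connection refused" on fastcgi://127.0.0.1:9000 means nginx itself is fine but nothing is accepting FastCGI connections on that port. A quick first check, sketched for a Debian-style setup (the php5-fpm service name is an assumption - use whatever FastCGI backend the vhost actually expects):

        # is anything listening where nginx expects the backend?
        ss -lntp | grep 9000          # or: netstat -lntp | grep 9000
        service php5-fpm status       # assumed service name; varies by install
        # if PHP-FPM is bound to a Unix socket instead of a TCP port,
        # the vhost needs e.g.: fastcgi_pass unix:/var/run/php5-fpm.sock;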

    Read the article

  • IIS 7.5 Error 1007

    - by darkdog
    I have a Windows 2008 R2 virtual server. I removed the "Default Web Site" using IIS Manager, and all my other sites then started reporting an error saying they can't access the site (or something similar). I thought I could just remove the role, re-install IIS and start with a clean slate. After I re-installed IIS, it now reports the following error in the event log:

        The World Wide Web Publishing Service (WWW service) failed to register the URL prefix of "http://www.mash-guild.com:80:192.168.245.132/" for the website of "4". The required network connectivity may already be used. The site has been disabled. The data field contains the error number.

    The original message is in German (I'm from Germany) and says the same as the text above. Here is the event XML:

        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="Microsoft-Windows-IIS-W3SVC" Guid="{05448E22-93DE-4A7A-BBA5-92E27486A8BE}" EventSourceName="W3SVC" />
            <EventID Qualifiers="49152">1007</EventID>
            <Version>0</Version>
            <Level>2</Level>
            <Task>0</Task>
            <Opcode>0</Opcode>
            <Keywords>0x80000000000000</Keywords>
            <TimeCreated SystemTime="2011-03-26T16:14:56.000000000Z" />
            <EventRecordID>1435</EventRecordID>
            <Correlation />
            <Execution ProcessID="0" ThreadID="0" />
            <Channel>System</Channel>
            <Computer>WIN-DCJ8SN0QI5J</Computer>
            <Security />
          </System>
          <EventData>
            <Data Name="UrlPrefix">http://www.mash-guild.com:80:192.168.245.132/</Data>
            <Data Name="SiteID">4</Data>
            <Binary>B7000780</Binary>
          </EventData>
        </Event>

    I would like to know if there is a way to fix this, or how to get back to the IIS and server settings I had right after installing Windows 2008.
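
    The UrlPrefix in the event data has the form host:port:IP, while a valid IIS binding is IP:port:hostname - so site 4 most likely carries a mangled binding left over from the re-install. A hedged sketch for inspecting and rewriting it from an elevated command prompt (the site name below is an assumption; use whatever appcmd lists):

        %windir%\system32\inetsrv\appcmd list site
        rem rewrite the binding of the broken site (site name assumed):
        %windir%\system32\inetsrv\appcmd set site "mash-guild" /bindings:http/192.168.245.132:80:www.mash-guild.com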

    Read the article

  • csync2 ERROR: Connection to remote host failed

    - by Emil Salama
    I couldn't find any articles answering this question, so my best bet was to post it here. Scenario: we have 2x application servers in production hosting a PHP website, and I would like some folders to be synchronised between the two. The same setup works in the development environment with no issues. I've followed all the instructions from "http://www.cloudedify.com/synchronising-files-in-cloud-with-csync2/", but I still get the same result. The firewall has been disabled on both boxes for troubleshooting purposes.

    Config files:

    csync2.cfg:

        nossl * *;
        group production {
            host server1;
            host server2;
            key /etc/csync-production-group.key;
            include /etc/httpd/sites-available;
            include /xxxxxx/public_html/files
            include /xxxxxxx/magento/media/catalog/product
            include /xxxxxxx/magento/media/brands
            exclude *.log;
            exclude /xxxx/public_html/file/cache;
            exclude /xxxxx/public_html/magento/var/cache;
            exclude /xxxx/public_html/logs;
            exclude /xxxxx/public_html/magento/var/log;
            backup-directory /data/sync-conflicts/;
            backup-generations 2;
            auto younger;
        }

    /etc/xinetd.d/csync2:

        service csync2
        {
            disable         = no
            flags           = REUSE
            socket_type     = stream
            wait            = no
            user            = root
            group           = root
            server          = /usr/sbin/csync2
            server_args     = -i -D /data/sync-db/
            port            = 30865
            type            = UNLISTED
            log_type        = FILE /data/logs/csync2/csync2-xinetd.log
            log_on_failure  += USERID
        }

    I've made sure that the daemon is listening on port 30865 on both servers and that the keys match on both. I ran a tcpdump on each server; output as follows:

        12:20:31.366771 IP server1.49919 > server2.csync2: Flags [S], seq 445156159, win 14600, options [mss 1460,sackOK,TS val 794864936 ecr 0,nop,wscale 7], length 0
        12:20:31.366810 IP server2.csync2 > server1.49919: Flags [S.], seq 450593575, ack 445156160, win 14480, options [mss 1460,sackOK,TS val 794798911 ecr 794864936,nop,wscale 7], length 0
        12:20:31.367101 IP server1.49919 > server2.csync2: Flags [.], ack 1, win 115, options [nop,nop,TS val 794864937 ecr 794798911], length 0
        12:20:31.367138 IP server1.49919 > server2.csync2: Flags [P.], seq 1:9, ack 1, win 115, options [nop,nop,TS val 794864937 ecr 794798911], length 8
        12:20:31.367147 IP server2.csync2 > server1.49919: Flags [.], ack 9, win 114, options [nop,nop,TS val 794798912 ecr 794864937], length 0
        12:20:31.368625 IP server2.csync2 > server1.49919: Flags [R.], seq 1, ack 9, win 114, options [nop,nop,TS val 794798913 ecr 794864937], length 0

    Is there anything else I'm missing or should be doing?
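
    The capture shows the TCP handshake succeeding, the client sending 8 bytes, and the server immediately resetting the connection - which points at csync2 itself rejecting the peer (typically a hostname or key mismatch) rather than a network problem. Some hedged checks, assuming the usual csync2 flags:

        csync2 -xv      # run a sync verbosely and watch where it fails
        csync2 -T       # dry-run comparison without actually syncing
        hostname        # must exactly match a 'host' name in the group definition
        tail /data/logs/csync2/csync2-xinetd.log    # xinetd log path from the config above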

    Read the article

  • IIS not listening on port 80

    - by user57467
    We have Server 2003 and ISA 2004 with IIS 6, all on the same machine. Everything worked well until yesterday, when we tried to create a new rule in ISA... but that's a long story. Unfortunately something has happened to our intranet site. The site is on port 80, but if we try to open it from client machines we get an error page (the error page is from our provider): 403 Forbidden; remote host not listening, the remote host is not prepared to accept the connection request. On the server itself I can open the site on port 80, and if I change the port number in IIS and open the site on the new port, it works well. I tried shutting down IIS and starting Apache with a simple page: on the server it works well, but on the clients the problem is the same, so I think this is not an IIS-related problem. In ISA we have a web publishing rule for port 80, no authentication. I'm pulling my hair out, please help.

    After uninstalling and reinstalling ISA, the sites worked well - until I configured the upstream proxy in the Configuration/Networks/Web Chaining menu, and then everything went wrong in the same way. So something is wrong with the web proxy / upstream function (all my HTTP requests are forwarded to my upstream proxy). That was set up a long time ago, but a few days ago something broke; I think maybe our ISP changed something, and tomorrow I'll try to figure that out.

    One more thing: I made a new rule before the default rule in the web chaining menu, so that every request going to our own server is handled locally (not redirected), and everything else is redirected to the upstream server. And voilà, I thought... Our website now works well, but internet access is extremely slow :( Can I even do this with a single adapter? Do I have to handle all requests locally, or send them all upstream? Can't I filter them?

    Read the article

  • Uninstalling MySQL for MariaDB Replacement on cPanel

    - by ImmortalFirefly
    Well, the first part of my day was spent researching how to remove MySQL in order to install MariaDB, and the second part was spent trying to reinstall MySQL because something got messed up. So now I come to the masses for some help. I have a box with cPanel/WHM on it, CentOS 5.6 64-bit. I have upgraded MySQL (through WHM) to 5.5.24, and that was successful. After some research, the options I found were an intimidating Linux command full of pipes, greps and dashes, and this:

        yum remove mysql

    I tried that out and it appeared to remove MySQL... ish. I then tried installing MariaDB following the instructions page, and it started to do its thing - and then came the zillions of errors (here's a small sample):

        Transaction Check Error:
          file /etc/init.d/mysql from install of MariaDB-server-5.5.25-1.i386 conflicts with file from package MySQL-server-5.5.24-1.cp.1132.x86_64
          file /usr/bin/mysql_convert_table_format from install of MariaDB-server-5.5.25-1.i386 conflicts with file from package MySQL-server-5.5.24-1.cp.1132.x86_64
          file /usr/bin/mysql_install_db from install of MariaDB-server-5.5.25-1.i386 conflicts with file from package MySQL-server-5.5.24-1.cp.1132.x86_64
          file /usr/bin/mysql_secure_installation from install of MariaDB-server-5.5.25-1.i386 conflicts with file from package MySQL-server-5.5.24-1.cp.1132.x86_64
          file /usr/bin/mysqlbug from install of MariaDB-server-5.5.25-1.i386 conflicts with file from package MySQL-server-5.5.24-1.cp.1132.x86_64
          file /usr/bin/mysqld_multi from install of MariaDB-server-5.5.25-1.i386 conflicts with file from package MySQL-server-5.5.24-1.cp.1132.x86_64
          file /usr/bin/mysqld_safe from install of MariaDB-server-5.5.25-1.i386 conflicts with file from package MySQL-server-5.5.24-1.cp.1132.x86_64
          file /usr/bin/mysqldumpslow from install of MariaDB-server-5.5.25-1.i386 conflicts with file from package MySQL-server-5.5.24-1.cp.1132.x86_64
          file /usr/bin/mysqlhotcopy from install of MariaDB-server-5.5.25-1.i386 conflicts with file from package MySQL-server-5.5.24-1.cp.1132.x86_64
          file /usr/share/man/man1/innochecksum.1.gz from install of MariaDB-server-5.5.25-1.i386 conflicts with file from package MySQL-server-5.5.24-1.cp.1132.x86_64
          file /usr/share/man/man1/my_print_defaults.1.gz from install of MariaDB-server-5.5.25-1.i386 conflicts with file from package MySQL-server-5.5.24-1.cp.1132.x86_64
          file /usr/share/man/man1/myisam_ftdump.1.gz from install of MariaDB-server-5.5.25-1.i386 conflicts with file from package MySQL-server-5.5.24-1.cp.1132.x86_64
          file /usr/share/man/man1/myisamchk.1.gz from install of MariaDB-server-5.5.25-1.i386 conflicts with file from package MySQL-server-5.5.24-1.cp.1132.x86_64
          file /usr/share/man/man1/myisamlog.1.gz from install of MariaDB-server-5.5.25-1.i386 conflicts with file from package MySQL-server-5.5.24-1.cp.1132.x86_64

    So it appears that MySQL wasn't removed correctly. The tutorials I've read on various sites all say that to install MariaDB you have to uninstall/remove MySQL first, but none give the commands for doing so. Does anyone know how to "safely" remove MySQL on a WHM/cPanel server so that I can install MariaDB? Here's my repo file in case anyone needs to know:

        # MariaDB repository list - created 2012-07-10 17:09 UTC
        # http://downloads.mariadb.org/mariadb/repositories/
        [mariadb]
        name = MariaDB
        baseurl = http://yum.mariadb.org/5.5/centos5-x86
        gpgcheck=1
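
    Two hedged observations on the sample above: the conflicts are between i386 MariaDB packages and x86_64 cPanel MySQL packages, and the repo baseurl points at the centos5-x86 (32-bit) tree - on a 64-bit box the amd64 tree would be the expected choice. To see what is still installed and remove it outside yum's dependency tracking (risky on cPanel, which manages and may reinstall its own MySQL RPMs), a sketch:

        rpm -qa | grep -i -e mysql -e mariadb                  # list everything still installed
        rpm -e --nodeps MySQL-server-5.5.24-1.cp.1132.x86_64   # package name taken from the errors above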

    Read the article

  • Debian Apache2 SSL Issues - Error code: ssl_error_rx_record_too_long

    - by Tone
    I'm setting up Apache on Debian Lenny and having issues with SSL. I've been through numerous tutorials, and I had this working on Ubuntu Server, but for the life of me I can't get anywhere with Debian. Port 80 (http) works fine, but port 443 (https) gives me the following error (in Firefox) - homeserver is my hostname and my DHCP-assigned IP is 192.168.1.109. I have a feeling it's something in my config and not in the cert/key generation.

        An error occurred during a connection to homeserver.
        SSL received a record that exceeded the maximum permissible length.
        (Error code: ssl_error_rx_record_too_long)

    Does anyone see any issues with the following config files?

    /etc/apache2/sites-available/default-ssl:

        <IfModule mod_ssl.c>
        <VirtualHost *:443>
            ServerAdmin webmaster@localhost
            ServerName homeserver
            DocumentRoot /var/www/
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            LogLevel warn
            CustomLog /var/log/apache2/ssl_access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/server.crt
            SSLCertificateKeyFile /etc/ssl/private/server.key
            SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire
            <FilesMatch "\.(cgi|shtml|phtml|php)$">
                SSLOptions +StdEnvVars
            </FilesMatch>
            <Directory /usr/lib/cgi-bin>
                SSLOptions +StdEnvVars
            </Directory>
            BrowserMatch ".*MSIE.*" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
        </VirtualHost>
        </IfModule>

    /etc/apache2/ports.conf:

        NameVirtualHost *:80
        Listen 80
        Listen 443
        #<IfModule mod_ssl.c>
        #    # SSL name based virtual hosts are not yet supported, therefore no
        #    # NameVirtualHost statement here
        #    Listen 443
        #</IfModule>

    /etc/hosts:

        127.0.0.1 localhost
        127.0.0.1 homeserver
        #192.168.1.109 homeserver   #tried this but it didn't work

        # The following lines are desirable for IPv6 capable hosts
        ::1     localhost ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters
        ff02::3 ip6-allhosts
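
    ssl_error_rx_record_too_long usually means plain HTTP is being served on port 443 - typically because mod_ssl isn't loaded or the SSL vhost isn't enabled, so requests fall through to a non-SSL catch-all. The usual Debian checks, as a sketch:

        a2enmod ssl                 # make sure mod_ssl is actually enabled
        a2ensite default-ssl        # enable the SSL vhost in sites-enabled
        apache2ctl -S               # confirm a *:443 vhost is registered
        /etc/init.d/apache2 restart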

    Read the article

  • Apache taking up too much CPU

    - by andrewtweber
    I'm trying to manage a server on Amazon for a network of sites that receives about 100 million pageviews per month. Unfortunately, nobody on my team of 5 developers has much server admin experience. Right now we have MaxClients set to 1400. Currently our traffic is about average, and we have 1150 total Apache processes running, which use about 2% CPU each! Of those 1150, 800 are currently sleeping, but still taking up CPU. I'm sure there are ways to optimize this. I have a few thoughts:

    It appears Apache is creating a new process for every single connection. Is this normal? Is there a way to more quickly kill the sleeping processes? Should we turn KeepAlive on? Each page loads about 15-20 medium-sized graphics and a lot of javascript/css.

    So, here's our Apache setup. We do plan on contracting a server admin asap, but I would really appreciate some advice until we can find someone.

        Timeout 25
        KeepAlive Off
        MaxKeepAliveRequests 200
        KeepAliveTimeout 5

        <IfModule prefork.c>
            StartServers        100
            MinSpareServers      20
            MaxSpareServers      50
            ServerLimit        1400
            MaxClients         1400
            MaxRequestsPerChild 5000
        </IfModule>

        <IfModule worker.c>
            StartServers          4
            MaxClients          400
            MinSpareThreads      25
            MaxSpareThreads      75
            ThreadsPerChild      25
            MaxRequestsPerChild   0
        </IfModule>

    Full top output:

        top - 23:44:36 up 1 day, 6:43, 4 users, load average: 379.14, 379.17, 377.22
        Tasks: 1153 total, 379 running, 774 sleeping, 0 stopped, 0 zombie
        Cpu(s): 71.9%us, 26.2%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 1.9%si, 0.0%st
        Mem: 70343000k total, 23768448k used, 46574552k free, 527376k buffers
        Swap: 0k total, 0k used, 0k free, 10054596k cached

          PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
         1756 mysql  20  0 10.2g 1.8g 5256 S 19.8  2.7 904:41.13 mysqld
        21515 apache 20  0  396m  18m 4512 R  2.1  0.0   0:34.42 httpd
        21524 apache 20  0  396m  18m 4032 R  2.1  0.0   0:32.63 httpd
        21544 apache 20  0  394m  16m 4084 R  2.1  0.0   0:36.38 httpd
        21643 apache 20  0  396m  18m 4360 R  2.1  0.0   0:34.20 httpd
        21817 apache 20  0  396m  17m 4064 R  2.1  0.0   0:38.22 httpd
        22134 apache 20  0  395m  17m 4584 R  2.1  0.0   0:35.62 httpd
        22211 apache 20  0  397m  18m 4104 R  2.1  0.0   0:29.91 httpd
        22267 apache 20  0  396m  18m 4636 R  2.1  0.0   0:35.29 httpd
        22334 apache 20  0  397m  18m 4096 R  2.1  0.0   0:34.86 httpd
        22549 apache 20  0  395m  17m 4056 R  2.1  0.0   0:31.01 httpd
        22612 apache 20  0  397m  19m 4152 R  2.1  0.0   0:34.34 httpd
        22721 apache 20  0  396m  18m 4060 R  2.1  0.0   0:32.76 httpd
        22932 apache 20  0  396m  17m 4020 R  2.1  0.0   0:37.34 httpd
        22933 apache 20  0  396m  18m 4060 R  2.1  0.0   0:34.77 httpd
        22949 apache 20  0  396m  18m 4060 R  2.1  0.0   0:34.61 httpd
        22956 apache 20  0  402m  24m 4072 R  2.1  0.0   0:41.45 httpd
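
    Which MPM the binary was compiled with determines whether the prefork or worker block above is in effect - and under prefork, one process per connection is indeed normal. A quick check (binary name taken from the top output; use apachectl -V if httpd isn't on the PATH):

        httpd -V | grep -i mpm              # e.g. "Server MPM: Prefork"
        ps -C httpd --no-headers | wc -l    # how many workers are actually alive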

    Read the article

  • Why is user asked to choose their workgroup?

    - by Clinton Blackmore
    We're running Mac OS X Server 10.5.8 with Mac OS X 10.5.8 clients. Students use network logins to, well, log in. I've been asked to deny internet access to a specific user. I was told that a good way to do it is to create a user workgroup called "No Internet Access" and manage settings there. (Specifically, I told Parental Controls to allow access to no sites, and blacklisted all the installed web browsers.) Now, when the user authenticates to log in, they are greeted with this dialog:

        Workgroups for <username>
        Grade 7 Students
        No Internet Access

    It is unlikely that the student would willingly choose "No Internet Access" as their base group. Looking at the student's record in Workgroup Manager, their primary group ID is the grade 7 group, and "No Internet Access" is listed as another group they belong to. I looked at the managed preferences for all the computers pertaining to logins. They are set to their defaults. Specifically, the computer groups' preference for Logins - Access has the defaults:

        [unchecked] Ignore workgroup nesting
        [checked]   Combine available workgroup settings

    Based on my reading of Tips and Tricks for Mac Administrators, this should be correct: the user should not be asked which group they belong to, and settings from all applicable groups should be applied. How can I achieve that result?

    Edit: I've decided to add some additional information from the Tips and Tricks for Mac Management white paper (via Apple in Education, via the author's site). On page 21, it says:

        With Leopard MCX, workgroup preference settings are combined by default into a single set of values. This means that instead of having to choose between the Math, Science, or Language Arts workgroups when logging in, a user can just authenticate and be taken directly to the desktop. All the settings for each of those workgroups are composited together, providing you with all the Dock items and a composite of all the other settings.

    On page 40, an example is given in which settings are combined from different 'domains': one computer group, two (user) workgroups, and one individual user's settings:

        [When johnd logs into a Leopard client,] the items staged in the Dock from left to right are: computer group, first workgroup alphabetically, second workgroup, user. Items within the workgroup are staged alphabetically.

    Nowhere is there an indication that groups are nested; indeed, I can see no sensible (non-flat) hierarchy for groups like Math, Science, and Language Arts. I strongly believe there is a way to apply settings from two unrelated user workgroups such that a user of OS X 10.5.x or newer does not need to choose their workgroup. This is what I seek to achieve.

    Read the article

  • Magento - Users unable to login from corporate networks with Bluecoat / F5 Load balancers

    - by user1330440
    Hoping someone has come across this issue before with Magento and corporate clients. We have two clients for our Magento site, both of whose internal networks are set up with Bluecoat security devices and F5 load balancers. Some users within these networks are unable to log in to Magento - Magento eventually sends a 302 redirect to /index.php/ when those users attempt to log in. In our testing the problem appears to be isolated to this setup: we can log into the accounts in question from anywhere outside these networks without issue, and if the client accesses the site without going through the F5 load balancer, they can log in successfully.

    Strangely enough, the issue only started occurring for the two sites the day after we introduced a system upgrade which added a new site to the Magento installation. The upgrade should not have affected any standard login functionality, and as said, the problem does not appear to be with the users in question, but with where the users are accessing the site from. Initially we thought the issue might have something to do with communications between the clients' networks and the network the server is hosted on, so we've tried moving the server to different hosts, but this has not helped. I'm currently waiting for more information from the clients on the exact devices/models used in their network setup, and will update this post when it becomes available.

    Magento version is Enterprise Edition 1.9.0.0. Does anyone know of any tucked-away Magento settings that might cause this kind of behaviour? Any experience with this kind of setup, or ideas for things to look at? All help and ideas for follow-up would be appreciated, as this is a current production issue for a large number of users. I will respond asap to any requests for additional information, but currently I'm not able to disclose any identifying information on the project or the clients. Thanks in advance for any assistance offered :)

    Note: This question has also been posted on the Magento forums: http://www.magentocommerce.com/boards/viewthread/277917/ and on Stack Overflow (moved here as a commenter thought this site may be better suited): http://stackoverflow.com/questions/10133978/magento-users-unable-to-login-from-corporate-networks-with-bluecoat-f5-load
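
    One tucked-away setting worth ruling out is Magento's session validation: in Magento 1.x it can bind a session to the client's REMOTE_ADDR, Via, X-Forwarded-For or User-Agent values, and a proxy or load balancer that varies those between requests will invalidate the session - which surfaces as exactly this bounce back to the login page. A hedged way to inspect the flags (the core_config_data table and web/session/use_* paths are standard Magento 1.x; the database name is an assumption):

        mysql magento_db -e "SELECT path, value FROM core_config_data WHERE path LIKE 'web/session/use_%';"

    The same switches are exposed in the admin under System > Configuration > Web > Session Validation Settings.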

    Read the article

  • Issues with Server 2012 using DFSR running on Hyper-V 2012

    - by Bryan
    We have a number of Server 2012 systems, all of which run virtualised on Hyper-V 2012. We are having problems with two such virtual instances, both used as file servers, which occasionally stop responding to requests to serve files to clients. After logging on to the server, attempts to shut it down gracefully fail (no error; it just fails to acknowledge the shutdown request). Recovery is a case of power-cycling the server(s) from the Hyper-V console.

    These two servers don't serve a large number of users (one serves no more than 6 users, the other around 20), they are in the same domain, but on different physical hardware (and at different sites), and they don't lock up at the same time. They both use DFSR to replicate a fairly large amount of data between themselves (200GB) over ADSL connections. This works fine, and we have been using DFSR for the same job on the previous two generations of server OS (Server 2008 R2 and Server 2003 - both of which were physical installs, however).

    Today, when one of the servers crashed, I noticed an entry in the event log similar to the following:

        Log Name:      Application
        Source:        ESENT
        Date:          27/11/2012 10:25:55
        Event ID:      533
        Task Category: General
        Level:         Warning
        Keywords:      Classic
        User:          N/A
        Computer:      HAL-FS-01.example.com
        Description:
        DFSRs (1500) \\.\E:\System Volume Information\DFSR\database_C8CC_101_CC00_EC0E\dfsr.db:
        A request to write to the file "\\.\E:\System Volume Information\DFSR\database_C8CC_101_CC00_EC0E\fsr.log"
        at offset 4423680 (0x0000000000438000) for 4096 (0x00001000) bytes has not completed for 36 second(s).
        This problem is likely due to faulty hardware. Please contact your hardware vendor
        for further assistance diagnosing the problem.

    When the server started up again, I went to find the event log entry to investigate further and found that it was no longer there (I assume it was in memory but failed to write to disk before the server was powered off, for the reason mentioned in the message). I found the above message by searching further back in the event log.

    Both of these virtual servers have their E: volumes fully allocated as opposed to dynamically expanding, and there are no issues on any of the other virtual servers (which include Server 2012, Server 2008 R2 and Ubuntu 12.04 x64). There are no signs of IO, memory or CPU starvation on the host systems. I've used performance counters on the affected virtual servers to monitor memory usage (including non-paged pool usage) as well as CPU and network utilisation, and none of these show any signs of trouble when the issue arises.

    I would have thought our configuration isn't that uncommon, so I'm wondering if anyone else has seen this and managed to resolve it?
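
    Since the interesting events can be lost in a power-cycle, it may be worth harvesting whatever ESENT stall warnings did survive on both guests and lining them up against the hangs. A PowerShell sketch (Event ID 533, as in the entry above):

        Get-WinEvent -FilterHashtable @{ LogName='Application'; ProviderName='ESENT'; Id=533 } |
            Format-Table TimeCreated, Message -AutoSize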

    Read the article

  • The Cindy Shearin Group: New Scam Targets Renters in the Area

    - by user226089
    MONROE - Craigslist is a popular site when trying to find that perfect deal on a rental home or apartment. Experts warn that some of these rental ads aren't what they seem, so we decided to take a look. On our Craigslist search we found a house listed for rent. The problem: this home is not for rent - it's for sale.

    "I think it's a huge deal," said Shane Wooten, the realtor for the home in Monroe. His properties have become the target of a common scam aimed at taking your money. "It looks like they're trying to scam them out of their deposit and first month's rent," adds Wooten. He says scammers copy and paste the sale ads from legitimate realtor sites to Craigslist as rental ads. "I can usually tell when one hits Craigslist, because I'll usually get 20 to 30 phone calls that day." The scammers then pretend to be out of town on business or personal matters, and give only an email address as a point of contact. Usually they'll ask for money up front on a deal too good to miss. "You'll have a house that's supposed to rent for $950-1,000 a month, and they'll have it renting for $600 a month," says Wooten. During our conversation, he shows us text messages from one scammer who says he'll mail the keys to the house if Wooten wires money for a deposit and first month's rent.

    Jo Ann Deal of the Better Business Bureau says scammers are getting better at passing themselves off as realtors. "We're really concerned for our real estate agents with this scam," says Deal. She says realtors have to keep closer tabs on their vacant homes in order to protect their businesses. So how can you tell if the house you want is really for rent? She says if the home owner lives out of the country, can't meet face to face, or asks for payment through a money wire, it's probably a scam. "There are some catch-lines you watch for," says Deal. "If the marketing is really good but there's no phone number, no physical address, and they will communicate with you only by email and you can do it today, then it's probably a scam."

    You should always report fishy ads to Craigslist or the BBB, and never send money through a wire transfer.

    Read the article

  • Elasticsearch won't start anymore

    - by Oleander
    I restarted my Elasticsearch instance 5 days ago and haven't managed to start it since. I get no output in the log directory /var/log/elasticsearch/, nor does the elasticsearch binary print anything when run in the foreground with elasticsearch -f. I once managed to get this output:

        [2012-11-15 22:51:18,427][INFO ][node   ] [Piper] {0.19.11}[29584]: initializing ...
        [2012-11-15 22:51:18,433][INFO ][plugins] [Piper] loaded [], sites []

    Running curl http://localhost:9200 results in curl: (7) couldn't connect to host. I've tried increasing the memory from 3gb to 10gb, but that didn't make any difference. Running /etc/init.d/elasticsearch start takes 30 seconds. ps aux | grep elasticsearch shows two processes (wrapped here for readability):

        /usr/local/share/elasticsearch/bin/service/exec/elasticsearch-linux-x86-64
            /usr/local/share/elasticsearch/bin/service/elasticsearch.conf
            wrapper.syslog.ident=elasticsearch
            wrapper.pidfile=/usr/local/share/elasticsearch/bin/service/./elasticsearch.pid
            wrapper.name=elasticsearch wrapper.displayname=ElasticSearch wrapper.daemonize=TRUE
            wrapper.statusfile=/usr/local/share/elasticsearch/bin/service/./elasticsearch.status
            wrapper.java.statusfile=/usr/local/share/elasticsearch/bin/service/./elasticsearch.java.status
            wrapper.script.version=3.5.14

        /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java -Delasticsearch-service
            -Des.path.home=/usr/local/share/elasticsearch -Xss256k -XX:+UseParNewGC
            -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75
            -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError
            -Djava.awt.headless=true -Xms1024m -Xmx1024m
            -Djava.library.path=/usr/local/share/elasticsearch/bin/service/lib
            -classpath /usr/local/share/elasticsearch/bin/service/lib/wrapper.jar:/usr/local/share/elasticsearch/lib/elasticsearch-0.19.11.jar:/usr/local/share/elasticsearch/lib/elasticsearch-0.19.11.jar:/usr/local/share/elasticsearch/lib/jna-3.3.0.jar:/usr/local/share/elasticsearch/lib/log4j-1.2.17.jar:/usr/local/share/elasticsearch/lib/lucene-analyzers-3.6.1.jar:/usr/local/share/elasticsearch/lib/lucene-core-3.6.1.jar:/usr/local/share/elasticsearch/lib/lucene-highlighter-3.6.1.jar:/usr/local/share/elasticsearch/lib/lucene-memory-3.6.1.jar:/usr/local/share/elasticsearch/lib/lucene-queries-3.6.1.jar:/usr/local/share/elasticsearch/lib/snappy-java-1.0.4.1.jar:/usr/local/share/elasticsearch/lib/sigar/sigar-1.6.4.jar
            -Dwrapper.key=k7r81VpK3_Bb3N_5 -Dwrapper.port=32000 -Dwrapper.jvm.port.min=31000
            -Dwrapper.jvm.port.max=31999 -Dwrapper.disable_console_input=TRUE -Dwrapper.pid=23888
            -Dwrapper.version=3.5.14 -Dwrapper.native_library=wrapper -Dwrapper.service=TRUE
            -Dwrapper.cpu.timeout=10 -Dwrapper.jvmid=1
            org.tanukisoftware.wrapper.WrapperSimpleApp org.elasticsearch.bootstrap.ElasticSearchF

    My current system:

        ElasticSearch Version: 0.19.11, JVM: 23.2-b09
        Ubuntu 12.04 LTS

    I've tried reinstalling Elasticsearch and removing old directories. Why can't I get it to start?
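
    Two things stand out in the ps output: the JVM is still running with -Xms1024m -Xmx1024m (so the 10gb heap setting apparently isn't reaching the service wrapper), and a process exists while nothing answers on 9200. Some hedged checks, using paths taken from that output:

        sudo netstat -tlnp | grep -E '9200|9300'    # is anything bound at all?
        cat /usr/local/share/elasticsearch/bin/service/elasticsearch.status
        cat /usr/local/share/elasticsearch/bin/service/elasticsearch.java.status
        # bypass the Tanuki wrapper entirely so startup errors hit the console:
        /usr/local/share/elasticsearch/bin/elasticsearch -f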

    Read the article

  • Linux Debian Security Breach - what now? [closed]

    - by user897075
    Possible Duplicate: My server's been hacked EMERGENCY

    I installed Debian (Squeeze) a while back on my home network to host some personal sites (thank god). During the installation it prompted me to create a user other than root - so, in a rush, I used my name as both username and password (alex/alex, for what it's worth). I know it's horrible practice, but during the setup of this server I was always logged in as root to perform configuration, etc. A few days or a week passed and I forgot to change the password. Then I finally got my website finished, and I opened port forwarding on my router and pointed DynDNS at the server in my home. I've done this many times in the past and never had issues - but then I use a cryptic root password, and I assumed regular accounts were disabled.

    Today I reformatted my Windows 7 machine and, after spending all day tweaking and updating to SP1, I looked for cloning apps and found Clonezilla. Seeing it supports SSH cloning, I went through the process, only to discover I needed a user. So I logged into the web server, saw the user 'alex' was already there, and realised I didn't know the password. I changed it to something cryptic and visited /home - only to find contents such as passfile, bengos, etc. My heart sank: I've been hacked!!! Sure enough, there are all sorts of scripts and password files. I ran 'last' and it seems they last logged in on April 3rd.

    Questions: What can I do to see if they did anything destructive? Should I reformat and reinstall? How restrictive is Debian Squeeze with user permissions out of the box? All my personal website content was created as 'root', and changing those files does not seem to have occurred. How did they determine there was a user 'alex' on the machine? Can you query any machine and figure out what its users are? It looks like they tried to run an IP scan... The other nodes on the network run Windows 7, one of which has seemed a little wonky of late - is it possible they messed with that system too? What corrective action can I take to avoid this happening again, and to figure out what might have been changed or hacked? I'm hoping Debian out of the box is fairly secure and that, at best, they managed to read some of my source code. :p

    Regards, Alex
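
    For the "did they do anything destructive" part, a first-pass triage on Debian might look like the sketch below (debsums and rkhunter are assumed to be installable from the standard repos). None of it is conclusive if system binaries were replaced, which is why the standing advice for a rooted box is still to rebuild from known-good media:

        last -f /var/log/wtmp               # login history, insofar as it can be trusted
        find / -xdev -mtime -60 -ls | less  # files modified since around the break-in
        apt-get install debsums rkhunter
        debsums -c                          # packaged files whose checksums have changed
        rkhunter --check                    # scan for common rootkit signatures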

    Read the article

  • Moving a Windows 2003 HDD into a virtual machine - with HDD shrink

    - by jm666
    Before you vote to close as an exact duplicate, please read the full question. I have already read:

        Can I make a virtual machine out of a Windows XP physical machine?
        Disk2vhd, convert my PC to Hyper-V Virtual Machine
        Creating a Windows Virtual PC image from a Physical machine
        physical machine to virtual machine and place into VirtualBox
        BSOD trying to migrate Windows XP from a physical to a virtual machine
        http://en.wikipedia.org/wiki/Physical-to-Virtual

    and all other similar questions here, plus several external sites. Unfortunately, none of them answer my problem ;(

    I have a physical machine with a 500GB HDD running an old Windows 2003 Server with one server application. The application, like the Windows installation itself, is too old: there is no support for it today, we no longer have the installation media, and so on. Of the 500GB only about 100MB is actually used (maybe less once I delete all unnecessary files). I want to convert the machine into VirtualBox, and VirtualBox should then run on the same machine. Is this possible with the following steps?

    1. I can attach another HDD (via USB or internally).
    2. Boot a live Linux from CD and mount the disks.
    3. Run "something" on Linux (the Wikipedia article above has many pointers to suitable software) to do the conversion, storing the image on the USB HDD. Unfortunately, many of the tools rely on features that exist only in Windows XP and above, with no information about Windows 2003 Server - so what is a working solution for Windows 2003?
    4. Try to boot the image with VirtualBox; when it runs OK, remove the old installation, install Linux on the old 500GB HDD, copy the image over, and run it.

    The above should work (I hope), but here are the problems: I currently have only a 320GB external USB HDD (of course, I can take it out of its enclosure and attach it as an internal HDD too). So for the conversion I'm looking for an on-the-fly HDD shrink: while imaging the physical 500GB HDD, it needs to shrink into a smaller image - as I said, only about 100MB is used. Does something free exist for this, or is the only way to buy a larger 1TB HDD and use that for the conversion?

    Other questions: does anybody have real experience converting Windows 2003 into VirtualBox? I'm looking for an answer from someone who has really done it and can point out the real pitfalls (I can google myself). Is there a better approach?
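
    A hedged sketch of one way to avoid a full 500GB raw copy: shrink the NTFS volume first, then image only the occupied region and convert it into a dynamically allocated VDI (VirtualBox allocates space only for blocks that contain data). All device names and the count below are assumptions - check with fdisk -l from the live CD:

        ntfsresize --info /dev/sda1      # how small the filesystem can safely go
        # shrink filesystem + partition (e.g. with GParted on the live CD), then:
        dd if=/dev/sda of=/mnt/usb/w2k3.img bs=1M count=2048   # copy only the shrunken region
        VBoxManage convertfromraw /mnt/usb/w2k3.img w2k3.vdi --format VDI

    The count must cover the shrunken partition plus the partition table. Windows 2003 will also typically need its generic IDE driver enabled (the classic MergeIDE/0x7B boot fix for P2V) the first time it boots on VirtualBox's emulated hardware.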

    Read the article

  • Double MySQL Running on Mountain Lion

    - by Norris
    After a full restart, my Apache/PHP server doesn't connect to the local MySQL (I connect via 127.0.0.1 because localhost always fails for some reason). So I did this today:

        $ mysqladmin shutdown -u root -p
        Enter password:
        $ mysqladmin shutdown -u root -p
        Enter password:
        mysqladmin: connect to server at 'localhost' failed
        error: 'Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)'
        Check that mysqld is running and that the socket: '/tmp/mysql.sock' exists!

    which basically means I succeeded in shutting MySQL down. But as soon as I did, Apache/PHP successfully connected to MySQL and my local sites worked without a hiccup - until the next restart. Here are a few other details (as you can tell, I installed MySQL via brew):

        $ sudo ps aux | grep mysql
        N  4774  0.0  0.0  2432768    620 s000  S+  9:53AM  0:00.00  grep mysql
        N  4772  0.0  2.6  3030168 440688   ??  S   9:51AM  0:00.29  /usr/local/Cellar/mysql/5.6.13/bin/mysqld --basedir=/usr/local/Cellar/mysql/5.6.13 --datadir=/usr/local/var/mysql --plugin-dir=/usr/local/Cellar/mysql/5.6.13/lib/plugin --bind-address=127.0.0.1 --log-error=/usr/local/var/mysql/N.local.err --pid-file=/usr/local/var/mysql/N.local.pid
        N  4686  0.0  0.0  2433432   1000   ??  S   9:51AM  0:00.01  /bin/sh /usr/local/opt/mysql/bin/mysqld_safe --bind-address=127.0.0.1
        N  4362  0.0  2.7  3120276 458728   ??  S   9:47AM  0:00.45  mysqld

        $ lsof -i | grep mysql
        mysqld 4362 N 16u IPv6 0x76959e40691f9f93 0t0 TCP *:mysql (LISTEN)

    This is the weird thing:

        $ killall -9 mysqld

    MySQL is dead! Apache doesn't connect. Then, when I run:

        $ sudo mysqladmin shutdown -u root -p
        Enter password:

    Apache is (again) able to successfully connect to MySQL. As far as I understand, this means I have two MySQL servers set up and both are trying to start up at the same time, but I don't have the slightest idea how to fix it. I've tried reinstalling via brew, but that didn't help.

        $ which mysqladmin
        /usr/local/bin/mysqladmin
        $ whence -p mysql
        /usr/local/bin/mysql
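
    The ps output does show two independent mysqld instances - one launched through mysqld_safe (/usr/local/opt/mysql/bin/mysqld_safe) and one bare mysqld - which fits the "two servers racing for the same port and socket" theory. On OS X both would typically be started by launchd; a hedged way to find and disable the duplicate (Homebrew's agent is conventionally named homebrew.mxcl.mysql.plist):

        launchctl list | grep -i mysql
        ls ~/Library/LaunchAgents /Library/LaunchDaemons | grep -i mysql
        # unload whichever extra entry turns up, e.g.:
        launchctl unload -w ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist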

    Read the article

  • Configuration issue with HttpRealipModule (CloudFlare) in nginx configuration file

    - by Tyrx
    I've been attempting to use HttpRealipModule with the CloudFlare IP ranges in my main nginx configuration file, but upon restarting nginx I just get a standard "configuration file /etc/nginx/nginx.conf test failed" and my site goes down. This is what I've been attempting in my nginx.conf:

        user www-data;
        worker_processes 1;
        error_log /var/log/nginx/error.log warn;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            # Basic Settings
            set_real_ip_from 204.93.240.0/24;
            set_real_ip_from 204.93.177.0/24;
            set_real_ip_from 199.27.128.0/21;
            set_real_ip_from 173.245.48.0/20;
            set_real_ip_from 103.22.200.0/22;
            set_real_ip_from 141.101.64.0/18;
            set_real_ip_from 108.162.192.0/18;
            set_real_ip_from 190.93.240.0/20;
            set_real_ip_from 188.114.96.0/20;
            set_real_ip_from 2400:cb00::/32;
            set_real_ip_from 2606:4700::/32;
            set_real_ip_from 2803:f800::/32;
            real_ip_header CF-Connecting-IP;

            client_max_body_size 50m;
            client_header_timeout 5;
            keepalive_timeout 5;
            port_in_redirect off;
            sendfile on;
            server_tokens off;
            server_name_in_redirect off;
            tcp_nopush on;
            tcp_nodelay on;
            types_hash_max_size 2048;

            # MIME
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            # Logging Settings
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log warn;

            # Gzip Settings
            gzip on;
            gzip_disable "msie6";
            gzip_min_length 1400;
            gzip_types text/plain text/css text/javascript text/xml application/x-javascript application/xml application/xml+rss;

            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    What's wrong with this configuration?
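
    The set_real_ip_from/real_ip_header directives are valid in the http block, so the first step is to read the actual parse error and confirm the real_ip module was compiled in at all - it is optional at build time:

        nginx -t                                          # prints the failing line and the reason
        nginx -V 2>&1 | grep -o with-http_realip_module   # no output means the module is missing

    If nginx -t reports "unknown directive set_real_ip_from", that would confirm a build without --with-http_realip_module.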

    Read the article

  • Varnish configuration to only cache for non-logged in users

    - by davidsmalley
    I have a Ruby on Rails application fronted by varnish+nginx. As most of the site's content is static unless you are a logged-in user, I want Varnish to cache the site heavily when a user is logged out, but to cache only static assets when they are logged in. When a user is logged in, they have the cookie 'user_credentials' in their Cookie: header; in addition, I need to skip caching on /login and /sessions so that a user can get their 'user_credentials' cookie in the first place.

    Rails by default does not set a cache-friendly Cache-Control header, but my application sets "public,s-max-age=60" when a user is not logged in. Nginx is set to return 'far future' Expires headers for all static assets.

    The configuration I have at the moment bypasses the cache for everything when logged in, including static assets - and returns a cache MISS for everything when logged out. I've spent hours going around in circles; here is my current default.vcl:

        director rails_director round-robin {
            {
                .backend = {
                    .host = "xxx.xxx.xxx.xxx";
                    .port = "http";
                    .probe = {
                        .url = "/lbcheck/lbuptest";
                        .timeout = 0.3 s;
                        .window = 8;
                        .threshold = 3;
                    }
                }
            }
        }

        sub vcl_recv {
            if (req.url ~ "^/login") {
                pipe;
            }
            if (req.url ~ "^/sessions") {
                pipe;
            }
            # The regex used here matches the standard rails cache buster urls
            # e.g. /images/an-image.png?1234567
            if (req.url ~ "\.(css|js|jpg|jpeg|gif|ico|png)\??\d*$") {
                unset req.http.cookie;
                lookup;
            } else {
                if (req.http.cookie ~ "user_credentials") {
                    pipe;
                }
            }
            # Only cache GET and HEAD requests
            if (req.request != "GET" && req.request != "HEAD") {
                pipe;
            }
        }

        sub vcl_fetch {
            if (req.url ~ "^/login") {
                pass;
            }
            if (req.url ~ "^/sessions") {
                pass;
            }
            if (req.http.cookie ~ "user_credentials") {
                pass;
            } else {
                unset req.http.Set-Cookie;
            }
            # cache CSS and JS files
            if (req.url ~ "\.(css|js|jpg|jpeg|gif|ico|png)\??\d*$") {
                unset req.http.Set-Cookie;
            }
            if (obj.status >= 400 && obj.status < 500) {
                error 404 "File not found";
            }
            if (obj.status >= 500 && obj.status < 600) {
                error 503 "File is Temporarily Unavailable";
            }
        }

        sub vcl_deliver {
            if (obj.hits > 0) {
                set resp.http.X-Cache = "HIT";
            } else {
                set resp.http.X-Cache = "MISS";
            }
        }
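
    When chasing HIT/MISS behaviour like this, it helps to watch which rules actually fire per request rather than reasoning from the VCL alone. A hedged varnish 3 debugging sketch:

        varnishlog -c -m 'RxURL:^/login'       # client-side transactions for login requests
        varnishlog -c -m 'TxHeader:X-Cache'    # watch the HIT/MISS header set in vcl_deliver
        varnishtop -i txurl                    # which URLs are being fetched from the backend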

    Read the article

  • Why doesn't Apache start from xampp control panel after changes to vhosts config?

    - by Grafica
    I'm running XAMPP on my local server and want to host multiple sites, so I changed the httpd-vhosts.conf file. Can somebody tell me if there is something wrong with my config? Apache was running while I had only one site in it, but after I added a second site and stopped Apache, I'm not able to restart it.

        # Virtual Hosts
        #
        # If you want to maintain multiple domains/hostnames on your
        # machine you can setup VirtualHost containers for them. Most configurations
        # use only name-based virtual hosts so the server doesn't need to worry about
        # IP addresses. This is indicated by the asterisks in the directives below.
        #
        # Please see the documentation at
        # <URL:http://httpd.apache.org/docs/2.2/vhosts/>
        # for further details before you try to setup virtual hosts.
        #
        # You may use the command line option '-S' to verify your virtual host
        # configuration.

        # Use name-based virtual hosting.
        ##NameVirtualHost *:80

        # VirtualHost example:
        # Almost any Apache directive may go into a VirtualHost container.
        # The first VirtualHost section is used for all requests that do not
        # match a ServerName or ServerAlias in any <VirtualHost> block.
        ##<VirtualHost *:80>
        ##ServerAdmin [email protected]
        ##DocumentRoot "C:/xampp/htdocs/dummy-host.localhost"
        ##ServerName dummy-host.localhost
        ##ServerAlias www.dummy-host.localhost
        ##ErrorLog "logs/dummy-host.localhost-error.log"
        ##CustomLog "logs/dummy-host.localhost-access.log" combined
        ##</VirtualHost>

        ##<VirtualHost *:80>
        ##ServerAdmin [email protected]
        ##DocumentRoot "C:/xampp/htdocs/dummy-host2.localhost"
        ##ServerName dummy-host2.localhost
        ##ServerAlias www.dummy-host2.localhost
        ##ErrorLog "logs/dummy-host2.localhost-error.log"
        ##CustomLog "logs/dummy-host2.localhost-access.log" combined
        ##</VirtualHost>

        NameVirtualHost *
        <VirtualHost *>
            DocumentRoot "C:\xampp\htdocs"
            ServerName localhost
        </VirtualHost>

        <VirtualHost *>
            DocumentRoot "C:\xampp\htdocs"
            ServerName evamagnus.com
            <Directory "C:\xampp\htdocs\">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

        <VirtualHost *>
            DocumentRoot "C:\xampp\htdocs2\"
            ServerName mygrafica.com
            <Directory "C:\xampp\htdocs2\">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Here is what it says in the control panel:

        2:17:37 PM [apache] Starting apache service...
        2:17:38 PM [apache] Status change detected: running
        2:17:39 PM [apache] Status change detected: stopped

    Thanks in advance.
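
    The XAMPP control panel hides Apache's parse errors; running the syntax check from a command prompt usually names the offending line. One likely culprit above: paths such as "C:\xampp\htdocs2\" end in a backslash immediately before the closing quote, which Apache can read as an escaped quote - forward slashes ("C:/xampp/htdocs2") avoid this entirely. Assuming the default XAMPP install path:

        C:\xampp\apache\bin\httpd.exe -t    rem syntax check; prints file and line of the error
        C:\xampp\apache\bin\httpd.exe -S    rem dump the parsed virtual host configuration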

    Read the article

  • nginx proxypath https redirect fails without trailing slash

    - by Thermionix
    I'm trying to set up nginx to forward requests to several backend services using proxy_pass. The links on the pages that lack trailing slashes do have https:// in front, but they get redirected to an http request with a trailing slash appended - which ends in connection refused, as I only want these services to be available over https.

    So if a link to https://example.com/internal/errorlogs is loaded in a browser, it gives Error Code 10061: Connection refused (it redirects to http://example.com/internal/errorlogs/). If I manually append the trailing slash, https://example.com/internal/errorlogs/ loads fine.

    I've tried various combinations of trailing slashes appended to the proxy_pass target and the location in proxy.conf, to no effect, and have also added server_name_in_redirect off;. This happens with more than one app under nginx, and the same apps work behind an Apache reverse proxy.

    Config files:

    proxy.conf:

        location /internal {
            proxy_pass http://localhost:8081/internal;
            include proxy.inc;
        }
        .... more entries ....

    sites-enabled/main:

        server {
            listen 443;
            server_name example.com;
            server_name_in_redirect off;
            include proxy.conf;
            ssl on;
        }

    proxy.inc:

        proxy_connect_timeout 59s;
        proxy_send_timeout 600;
        proxy_read_timeout 600;
        proxy_buffer_size 64k;
        proxy_buffers 16 32k;
        proxy_pass_header Set-Cookie;
        proxy_redirect off;
        proxy_hide_header Vary;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_set_header Accept-Encoding '';
        proxy_ignore_headers Cache-Control Expires;
        proxy_set_header Referer $http_referer;
        proxy_set_header Host $host;
        proxy_set_header Cookie $http_cookie;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Ssl on;
        proxy_set_header X-Forwarded-Proto https;

    curl output:

        $ curl -I -k https://example.com/internal/errorlogs/
        HTTP/1.1 200 OK
        Server: nginx/1.0.5
        Date: Thu, 24 Nov 2011 23:32:07 GMT
        Content-Type: text/html;charset=utf-8
        Connection: keep-alive
        Content-Length: 14327

        $ curl -I -k https://example.com/internal/errorlogs
        HTTP/1.1 301 Moved Permanently
        Server: nginx/1.0.5
        Date: Thu, 24 Nov 2011 23:32:11 GMT
        Content-Type: text/html;charset=utf-8
        Connection: keep-alive
        Content-Length: 127
        Location: http://example.com/internal/errorlogs/
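
    The 301 in the second curl run is generated by the backend (it appends the slash), and with proxy_redirect off its plain-http Location header passes through untouched. One hedged fix is to let nginx rewrite backend redirects back onto https; in proxy.inc, "proxy_redirect off;" would be replaced with something along these lines (hostname per the example above):

        proxy_redirect http://example.com/ https://example.com/;

    proxy_redirect rewrites the Location header of backend responses, so the browser never sees the http:// URL in the first place.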

    Read the article

  • Autoloading Development or Production configs (best practices)

    - by Xeoncross
    When programming sites you usually have one set of config files for the development environment and another set for the production server (or one file with both settings). I'm assuming all projects are handled by version control like git or svn; manual file transfers (like FTP) are wrong on so many levels. How to enable/disable the correct settings (so that the system knows which ones to use) is a problem for me: each system I've worked on just kind of jimmy-rigs a solution. Below are the 3 methods I know of, and I'm hoping someone can suggest something more elegant.

    1) File based. The system loads a folder structure based on the URL requested:

        /site.com
        /site.fakeTLD
        /lib
        index.php

    For example, if the URL is http://site.com, the system loads the production config files located in the site.com folder. When I'm working on the site locally, I instead visit http://site.fakeTLD to work on the local copy. To set this up I edit my hosts file so that site.fakeTLD points to my own computer (127.0.0.1/localhost) and create a vhost in Apache. I can then work on the codebase locally and push to the server without any trouble. The problem is that this is susceptible to a "host" injection attack: someone requesting site.com could set the Host header to site.fakeTLD, and the system would load my development config files instead of production.

    2) Config based. The config file contains one section for development and one for production, and each time you push changes to the repo you have to edit the file to specify which set of options should be used:

        $use = 'production'; //'development';

    This leaves the repo open to human error, should one of the developers forget to flip the setting.

    3) Filesystem check based. All the development machines have an extra empty file called "development.txt" or similar. Each time the system loads, it checks for this file: if found, it knows it is in development mode; if missing, production mode. Since the file is never added to the repo, it will never be pushed (and checked out) on the production machine. However, this just doesn't feel right, and it causes a slight slowdown since filesystem checks are slow.
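
    A fourth pattern worth considering, sketched here for Apache+PHP purely as an illustration: select the environment with a server-set environment variable, so the choice lives in per-machine configuration rather than in the repo and can't be spoofed via the Host header:

        # per-machine, set once - e.g. in the dev vhost:  SetEnv APP_ENV development
        # or in the shell profile on dev boxes:
        export APP_ENV=development
        # the app then defaults to production whenever the variable is absent:
        php -r 'echo getenv("APP_ENV") ?: "production", PHP_EOL;'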

    Read the article

  • Can't access a remote server due to a mistake in setting a firewall rule

    - by LMIT
    I need help because of a silly mistake of my own! I have long had a dedicated server hosted by register.it, running Windows Server 2008, which I usually access remotely via Terminal Server.

    Today I wanted to block one site that continually sends requests to my server. So I was adding a new rule to the firewall (the native firewall on Windows Server 2008), as I have done many times - but this time, with my brain apparently asleep, I added a general rule that blocks everything! Now I can't access the server any more, no users can browse the sites, nothing works, because this rule blocks everything.

    I know it was a silly mistake, no need to tell me :) The only thing my provider lets me do is reboot the server from their control panel, but that doesn't help in any way, because the firewall blocks me again. I have an administrator username and password, so what can I really do? Is there some trick, some technique, some expert guru who can help me in this very bad situation?

    UPDATE: I followed Tony's suggestion and ran an nmap scan to check whether any ports are open, but it looks like they are all filtered:

        Starting Nmap 6.00 ( http://nmap.org ) at 2012-05-29 22:32 W. Europe Daylight Time
        NSE: Loaded 93 scripts for scanning.
        NSE: Script Pre-scanning.
        Initiating Parallel DNS resolution of 1 host. at 22:32
        Completed Parallel DNS resolution of 1 host. at 22:33, 13.00s elapsed
        Initiating SYN Stealth Scan at 22:33
        Scanning xxx.xxx.xxx.xxx [1000 ports]
        SYN Stealth Scan Timing: About 29.00% done; ETC: 22:34 (0:01:16 remaining)
        SYN Stealth Scan Timing: About 58.00% done; ETC: 22:34 (0:00:44 remaining)
        Completed SYN Stealth Scan at 22:34, 104.39s elapsed (1000 total ports)
        Initiating Service scan at 22:34
        Initiating OS detection (try #1) against xxx.xxx.xxx.xxx
        Retrying OS detection (try #2) against xxx.xxx.xxx.xxx
        Initiating Traceroute at 22:34
        Completed Traceroute at 22:35, 6.27s elapsed
        Initiating Parallel DNS resolution of 11 hosts. at 22:35
        Completed Parallel DNS resolution of 11 hosts. at 22:35, 13.00s elapsed
        NSE: Script scanning xxx.xxx.xxx.xxx.
        Initiating NSE at 22:35
        Completed NSE at 22:35, 0.00s elapsed
        Nmap scan report for xxx.xxx.xxx.xxx
        Host is up.
        All 1000 scanned ports on xxx.xxx.xxx.xxx are filtered
        Too many fingerprints match this host to give specific OS details

        TRACEROUTE (using proto 1/icmp)
        HOP RTT ADDRESS
        1 ... ...
        ... 13 ... 30

        NSE: Script Post-scanning.
        Read data files from: D:\Program Files\Nmap
        OS and Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
        Nmap done: 1 IP address (1 host up) scanned in 145.08 seconds
        Raw packets sent: 2116 (96.576KB) | Rcvd: 61 (4.082KB)

    Question: can the provider access the machine locally with my administrator username and password?

    Read the article

  • Gateway GT5220 Boot/POST Failure

    - by John Rudy
    I have a Gateway GT5220 I'm troubleshooting. It is, in fact, the machine I gave my father for his birthday a couple of months ago. (Prior to that, it was my home PC; my home PC is now the MacBook on which I'm writing this.) Before going any further, I suspect that the answer will be, "It's worse than that, it's dead, Jim, it's dead, Jim, it's dead, Jim." At least, motherboard and/or CPU.

    The initial symptoms were as follows: Turn on power. All fans fire up (thus making it impossible to hear whether the hard drive is spinning, and my hands aren't sensitive enough anymore to feel it). No LEDs remained lit on the front panel (initially, the hard drive indicator flashed briefly). No beep, no video, no nothing.

    Following some advice I found here, I tried to "drain the stored power." After following those steps, the new symptoms were: Turn on power. All fans fire up. The front panel LEDs remained lit! After about 20, maybe 30 seconds, we had video! Sort of. We got to the Gateway splash/POST screen, which appeared thoroughly corrupted. How corrupted? Well, I imagine it's what a POST screen would look like after reading the wrong passage out of the Necronomicon. (A photo of the corrupted splash screen accompanied the original post.) It stayed there; I gave it at least 5, maybe 6 minutes, and it didn't move. So I shut her down, started her up again, and now (this is where we currently stand, symptomatically) we have this: Turn on power. All fans fire up. The front panel LEDs remain lit. No video, no beep, no nothing.

    I'm a software guy; I haven't done real hardware troubleshooting in years. My gut tells me that the motherboard and/or CPU is fried, and unfortunately my gut didn't get to be as big as it is by being wrong all the time. :(

    In addition to the link above, I have read all of the following (trying to save you some LMGTFY trouble): the Gateway Support pages on POST error messages and handling, about a zillion (useless) POST beep code sites, and a kioskea.net post indicating that most likely we're at what I consider "total loss" (motherboard and/or CPU).

    My questions: Are there any conditions other than motherboard/CPU failure that could cause symptoms like these? Is it worth my time to try the next hardware troubleshooting step (i.e., remove all non-critical hardware from the machine, try to boot, and systematically replace components one by one until we find the failing one)? Which motherboards will fit in the Gateway GT5220 case (with the rear ports correctly aligned)?

    (Why this is not a dupe: I wouldn't have posted this question if it hadn't been for the funkadelic possessed video display on the one occasion we got video out. I think that justified this not being an exact dupe. Of course, if the community overrules, I will understand.)

    Read the article

  • Apache2 return 404 for proxy requests before reaching WSGI

    - by Alejandro Mezcua
    I have a Django app running under Apache2 and mod_wsgi and, unfortunately, lots of requests trying to use the server as a proxy. The server responds to them with 404 errors, but the errors are generated by the Django (WSGI) app, which causes high CPU usage. If I turn off the app and let Apache handle the response directly (send a 404), the CPU usage drops to almost 0 (mod_proxy is not enabled). Is there a way to configure Apache to answer this kind of request with an error before it reaches the WSGI app? I have seen that mod_security might be an option, but I'd like to know if I can do it without it.

    EDIT: I'll explain it a bit more. In the logs I have lots of connections trying to use the server as a web proxy (i.e. requests like GET http://zzz.zzz/ HTTP/1.1 where zzz.zzz is an external domain, not mine). These requests are passed on to mod_wsgi, which then returns a 404 (as per my Django app). If I disable the app then, as mod_proxy is disabled, Apache returns the error directly. What I'd finally like is for Apache to directly return an error for invalid domains and never execute the WSGI app.

    EDIT 2: Here is the apache2 config, using VirtualHost files in sites-enabled (I have removed email addresses, changed IPs to X, and changed the ServerAlias to sample.sample.xxx). What I'd like is for Apache to reject, with an error, any request that isn't for sample.sample.xxx - that is, to accept only relative requests to the server, or requests fully qualified with the actual ServerAlias.

    default:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName X.X.X.X
            ServerAlias X.X.X.X
            DocumentRoot /var/www/default
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options FollowSymLinks
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ErrorDocument 404 "404"
            ErrorDocument 403 "403"
            ErrorDocument 500 "500"
            ErrorLog ${APACHE_LOG_DIR}/error.log
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    actual host:

        <VirtualHost *:80>
            ErrorDocument 404 "404"
            ErrorDocument 403 "403"
            ErrorDocument 500 "500"
            WSGIScriptAlias / /var/www/sample.sample.xxx/django.wsgi
            ServerAdmin [email protected]
            ServerAlias sample.sample.xxx
            ServerName sample.sample.xxx
            CustomLog /var/www/sample.sample.xxx/log/sample.sample.xxx-access.log combined
            Alias /robots.txt /var/www/sample.sample.xxx/static/robots.txt
            Alias /favicon.ico /var/www/sample.sample.xxx/static/favicon.ico
            AliasMatch ^/([^/]*\.css) /var/www/sample.sample.xxx/static/$1
            Alias /static/ /var/www/sample.sample.xxx/static/
            Alias /media/ /var/www/sample.sample.xxx/media/
            <Directory /var/www/sample.sample.xxx/static/>
                Order deny,allow
                Allow from all
            </Directory>
            <Directory /var/www/sample.sample.xxx/media/>
                Order deny,allow
                Allow from all
            </Directory>
        </VirtualHost>
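
    Background for the fix: with mod_proxy disabled, Apache treats an absolute-URI request line as an ordinary request whose host part acts as the Host header, so it is routed by name-based vhost matching like anything else - and anything matching no ServerName/ServerAlias lands on the first vhost loaded. A hedged sketch is to make that first (default) vhost answer with a bare 404 instead of serving content:

        <VirtualHost *:80>
            ServerName default.invalid
            Redirect 404 /
        </VirtualHost>

    Redirect with a 404 status and no target URL is standard mod_alias; afterwards, apache2ctl -S confirms which vhost is actually the default (the first listed for *:80).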

    Read the article

  • varnish3, mod_geoip with apache2 using mod_rewrite and mod_rpaf

    - by mursalat
    I maintain a website with 3 different versions of the site, in 3 different languages, handled by a single PHP system which takes environment variables based on the domain name being accessed. These are the three sites:

        myshop.com : English international version
        myshop.eu  : European version of the site
        myshop.ru  : Russian version of the site

    When myshop.com is accessed from Russia, it should redirect to myshop.ru; when myshop.com is accessed from a European country, it redirects to myshop.eu; international visitors stay at myshop.com, although they can go to a country-specific site. All these country redirects are done with the GeoIP apache2 mod, which determines the country code used in a RewriteCond to drive a RewriteRule; there are some exception IPs that skip the rewrite, basically the developers' PCs.

    The site had been doing just fine until we decided to set up Varnish to give it a boost. It really did give the site a great boost, but the country-specific rewrites became buggy. What started to happen is that a Russian visitor can go to myshop.com and isn't redirected until they click a random link (perhaps one not yet cached by varnish), at which point they are redirected to their country-specific site. For that I set up mod_rpaf, and for the exceptions to the rewrite rule (the developer IPs) I used RewriteCond %{HTTP:X-FORWARDED-FOR} !^43\.43\.43\.43. I restarted varnish and apache2, and it worked for a while - then it broke again. I've spent the whole day making changes with little clue as to what's going on: sometimes it works, sometimes it doesn't, sometimes it half works...

    As for GeoIP, I used a PHP script to check the $_SERVER variable; here is the general idea of the output:

        [HTTP_X_FORWARDED_FOR] => 43.43.43.44
        [HTTP_X_VARNISH] => 1705675599
        [SERVER_ADDR] => 127.0.0.1
        [SERVER_PORT] => 80
        [REMOTE_ADDR] => 43.43.43.44
        [GEOIP_ADDR] => 43.43.43.44
        [GEOIP_CONTINENT_CODE] => EU
        [GEOIP_COUNTRY_CODE] => FR
        [GEOIP_COUNTRY_NAME] => France

    Now, thanks to the "random" redirects, I hardly have a clue as to what is going on, so can you please give me some ideas as to which tools to use to debug this? I have tried reading the redirect logs, but they really don't show much, and varnishlog isn't helping much either - although I must admit I am no professional at varnish.

    I believe the problem is that varnish is caching the URL, so the Apache redirects are not being applied properly: whoever visits the site first gets a redirect, and based on that, other users are served content depending on where the first user was when the cache was last updated. Is that correct? If so, how can I solve the problem? Also, I have the option of doing the GeoIP redirects in varnish 3 instead of Apache - is that best practice? Any suggestions for debugging or fixing this would be helpful, thanks!
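
    The suspicion at the end matches how Varnish behaves by default: it stores one response per URL, so a GeoIP redirect generated for the first visitor is replayed to everyone until it expires. A hedged varnish 3 sketch that simply refuses to cache redirect responses, so every client gets a fresh country decision from Apache:

        sub vcl_fetch {
            # country-dependent redirects must not be shared between clients
            if (beresp.status == 301 || beresp.status == 302) {
                return (hit_for_pass);
            }
        }

    The alternative, if redirects should stay cacheable, is to make the country part of the cache key (e.g. hashing a country header in vcl_hash) - but that requires the country to be known to Varnish on every request.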

    Read the article

  • Using OpenVPN, yet netflix.com blocks access

    - by user837848
    I have set up an OpenVPN server on a VPS in the USA and configured it to route all client traffic through it. Everything seems to work fine as far as the VPN connection in general is concerned: all IP lookup sites show me the US server's IP address, and even hulu.com works (it won't work if you are not in the USA). But for some reason netflix.com says "Sorry, Netflix is not available in your country yet." So I thought Netflix probably uses some more sophisticated way to determine your location beyond just your IP address, but I could not find a way to get it to work - until I dropped the idea of using a VPN and instead connected to the server via a simple SOCKS tunnel over SSH:

        ssh -D 9999 user@serverip

    All I had to do was change the key network.proxy.socks_remote_dns in Firefox from false to true to prevent DNS leaks, and set up the SOCKS proxy. Then I could finally watch netflix.com. As a result I concluded that there is nothing in the browser (or something like the system timezone) that tells Netflix the location, so it has to have something to do with the OpenVPN config.

    After that I used tcpdump to log all the traffic on the server's network interface venet0 (OpenVZ VPS), visited netflix.com on the client - first connected via the VPN, then via the SOCKS tunnel - and compared both outputs. The only thing that caught my eye was that while using the SOCKS tunnel the server mainly used IPv6 to connect to Netflix, whereas it used only IPv4 when the client was connected to the OpenVPN server. But I don't get how that could make such a difference.

    So what am I missing? Is there a way to configure OpenVPN to also use IPv6 to connect to a website although there is only an IPv4 connection between the VPS and the client?

    Here is the server.conf of the OpenVPN server (OpenVZ VPS):

        local serverip
        port 443
        proto tcp
        dev tun
        ca ./easy-rsa2/keys/ca.crt
        cert ./easy-rsa2/keys/vps1.crt
        key ./easy-rsa2/keys/vps1.key  # This file should be kept secret
        dh ./easy-rsa2/keys/dh1024.pem
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        push "redirect-gateway def1 bypass-dhcp"
        push "dhcp-option DNS 8.8.8.8"
        push "dhcp-option DNS 8.8.4.4"
        client-to-client
        keepalive 10 120
        tls-auth ta.key 0  # This file is secret
        cipher AES-256-CBC
        comp-lzo
        max-clients 4
        user nobody
        group nogroup
        persist-key
        persist-tun
        status openvpn-status.log
        log-append openvpn.log
        verb 3

    iptables forwarding (IPv4 forwarding enabled):

        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o venet0 -j SNAT --to-source serverip

    I have tried everything, always on a Win7 and a Debian client with only IPv4 connections, and always made sure they use the correct DNS server (tested with ipleak.net and tcpdump/wireshark).

    client.conf:

        client
        dev tun
        proto tcp
        remote serverip 443
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca ca.crt
        cert client.crt
        key client.key
        ns-cert-type server
        tls-auth ta.key 1
        cipher AES-256-CBC
        comb-lzo
        verb 3
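
    To pin down the IPv6 angle before changing the VPN, it can help to compare how the VPS resolves and reaches Netflix over each address family. A hedged sketch (run on the VPS, and through the tunnel on a client):

        getent ahosts netflix.com    # AAAA addresses listed first suggests IPv6 is preferred
        ip -6 addr show              # does the VPS have a global IPv6 address at all?
        # force each family explicitly and compare the responses:
        curl -4 -sI http://www.netflix.com | head -n 1
        curl -6 -sI http://www.netflix.com | head -n 1

    If the SSH/SOCKS path exits over IPv6 while the OpenVPN path can only carry IPv4 (IPv6 payload support only arrived in later OpenVPN releases, via directives such as server-ipv6), that difference alone could explain the two behaviours.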

    Read the article
