Search Results

Search found 16467 results on 659 pages for 'request filtering'.

  • How many sites can IIS 6 handle?

    - by Sarah Nasir
    Is there a limit on the number of sites you can create in IIS? I have searched, and some forum discussions say there is no limit. Someone mentioned he created up to 100,000 sites in IIS 6, but I don't know his server specs. Personally, I suspect that whatever IIS's limit is, the server's resources will run out well before it is reached. How do big services like Blogger and WordPress handle a huge number of sites on their servers? Questions: 1) Is there an upper limit for IIS 6.0? If yes, what is it? 2) How many requests should IIS serve on a decent server? (I am not talking about dynamic requests on the server, or logging.) 3) Is there a way I can do a test run on my cloud server to measure its capacity? What factors should I keep in view: DB requests, page size, disk reads/writes, etc.? A response would be highly appreciated.
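
    One simple way to get a baseline for question 3 (a sketch, assuming the Apache `ab` benchmarking tool is installed and that `http://your-server/some-page.html` stands in for a representative URL on your site):

        # send 1,000 requests, 50 concurrently, against one representative page;
        # watch CPU, memory, and disk I/O on the server while this runs
        ab -n 1000 -c 50 http://your-server/some-page.html

    Repeating the run while raising -c until latency degrades gives a rough sense of where the box saturates.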

    Read the article

  • Using %v in Apache LogFormat definition matches ServerName instead of specific vhost requested

    - by Graeme Donaldson
    We have an application that uses a DNS wildcard, i.e. *.app.example.com. We're running Apache 2.2 on Ubuntu Hardy. The relevant parts of the Apache config are as follows.

    In /etc/apache2/httpd.conf:

        LogFormat "%v %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog

    In /etc/apache2/sites-enabled/app.example.com:

        ServerName app.example.com
        ServerAlias *.app.example.com
        ...
        CustomLog "|/usr/sbin/vlogger -s access.log /var/log/apache2/vlogger" vlog

    Clients access this application using their own URLs, e.g. company1.app.example.com, company2.app.example.com, etc. Previously, the %v in the LogFormat directive matched the hostname of the client request, and we got several subdirectories under /var/log/apache2/vlogger corresponding to the various client URLs in use. Now %v appears to match the ServerName value, so we only get one log under /var/log/apache2/vlogger/app.example.com. This breaks our log-file analysis because the log file gives no indication of which client each entry relates to. I can fix this easily by changing the LogFormat to:

        LogFormat "%{Host}i %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog

    This uses the HTTP Host: header to tell vlogger which subdirectory to create the logs in, and everything will be fine. My only concern is that the old format has worked in the past, and I can't find any indication that this behaviour changed recently. Is anyone else using a similar config, i.e. wildcard + vlogger with %v, and is it working fine?
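
    For reference, a related variant (a sketch based on documented Apache 2.2 semantics: %v always logs the canonical ServerName, while %V honours the UseCanonicalName setting, so %V with UseCanonicalName Off logs the client-supplied hostname much like %{Host}i):

        # httpd.conf -- log the hostname the client actually requested
        UseCanonicalName Off
        LogFormat "%V %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vlog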

    Read the article

  • dovecot login issue with plain passwords

    - by user3028
    I am having an odd problem in dovecot, the first time I try to login via telnet dovecot gives a error, the second time it works, both within the same telnet session. This is the telnet session, note the 'BAD Error in IMAP command received by server' and the "a OK" just after that : telnet 192.168.1.2 143 * OK Waiting for authentication process to respond.. * OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE STARTTLS AUTH=PLAIN] Dovecot ready. a login someUserLogin supersecretpassword * BAD Error in IMAP command received by server. a login someUserLogin supersecretpassword a OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE SORT SORT=DISPLAY THREAD=REFERENCES THREAD=REFS MULTIAPPEND UNSELECT CHILDREN NAMESPACE UIDPLUS LIST-EXTENDED I18NLEVEL=1 CONDSTORE QRESYNC ESEARCH ESORT SEARCHRES WITHIN CONTEXT=SEARCH LIST-STATUS] Logged in dovecot configuration >dovecot -n # 2.0.19: /etc/dovecot/dovecot.conf # OS: Linux 3.5.0-34-generic x86_64 Ubuntu 12.04.2 LTS auth_debug = yes auth_verbose = yes disable_plaintext_auth = no login_trusted_networks = 192.168.1.0/16 mail_location = maildir:~/Maildir passdb { driver = pam } protocols = " imap" ssl_cert = </etc/ssl/certs/dovecot.pem ssl_key = </etc/ssl/private/dovecot.pem userdb { driver = passwd } This is the log file: Jul 3 12:27:51 linuxServer dovecot: auth: Debug: Loading modules from directory: /usr/lib/dovecot/modules/auth Jul 3 12:27:51 linuxServer dovecot: auth: Debug: auth client connected (pid=23499) Jul 3 12:28:06 linuxServer dovecot: auth: Debug: client in: AUTH#0111#011PLAIN#011service=imap#011secured#011no-penalty#011lip=192.168.1.2#011rip=192.169.1.3#011lport=143#011rport=50438#011resp=<hidden> Jul 3 12:28:06 linuxServer dovecot: auth-worker: Debug: Loading modules from directory: /usr/lib/dovecot/modules/auth Jul 3 12:28:06 linuxServer dovecot: auth-worker: Debug: pam(someUserLogin,192.169.1.3): lookup service=dovecot Jul 3 12:28:06 linuxServer dovecot: auth-worker: Debug: pam(someUserLogin,192.169.1.3): #1/1 style=1 msg=Password: Jul 3 12:28:06 linuxServer dovecot: auth: Debug: client out: OK#0111#011user=someUserLogin Jul 3 12:28:06 linuxServer dovecot: auth: Debug: master in: REQUEST#0111823473665#01123499#0111#0113a58da53e091957d3cd306ac4114f0b9 Jul 3 12:28:06 linuxServer dovecot: auth: Debug: passwd(someUserLogin,192.169.1.3): lookup Jul 3 12:28:06 linuxServer dovecot: auth: Debug: master out: USER#0111823473665#011someUserLogin#011system_groups_user=someUserLogin#011uid=1000#011gid=1000#011home=/home/someUserLogin Jul 3 12:28:06 linuxServer dovecot: imap-login: Login: user=<someUserLogin>, method=PLAIN, rip=192.169.1.3, lip=192.168.1.2, mpid=23503, secured

    Read the article

  • Testing realistic loads for new versions of existing web app

    - by David Cournapeau
    Assume I have a relatively complex web application; I am interested in testing the performance of a new version using traffic that is as realistic as possible. The traffic is relatively complex (session-based, with lots of internal logic that depends on incoming requests), and the webapp depends on many servers (databases, frontends, etc.). I can think of two basic directions: (1) recording every incoming request with its timestamp in production, in a centralized manner, and replaying it from N clients to reproduce a load as close as possible to the original - the issue being that, because we have many servers, building the centralized log is not trivial; or (2) having a system that duplicates requests to a staging area, so that I could "plug" a dev version of my webapp into it at any time without affecting production (see the mirroring sketch below) - the issue being that I have not found much information about this approach except this, which suggests to me it may not be the best solution, although on the other hand it is realistic by definition. What is the standard way of doing this kind of testing? I did not find much information about load testing with complex, realistic traffic.
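
    One way to implement the duplication idea without touching production responses (a sketch, assuming nginx 1.13.4 or newer with the built-in mirror module; the upstream names and staging host are placeholders, not part of the original setup):

        # nginx.conf -- fire-and-forget copy of every request to staging
        upstream production_backend {
            server 10.0.0.10:8080;        # placeholder: your real app servers
        }
        upstream staging_backend {
            server staging.example.com:8080;  # placeholder: the dev version
        }
        server {
            listen 80;
            location / {
                mirror /mirror;                     # duplicate each request
                proxy_pass http://production_backend;
            }
            location = /mirror {
                internal;                           # not reachable from outside
                proxy_pass http://staging_backend$request_uri;
                # responses from staging are discarded automatically, so
                # staging errors never affect production clients
            }
        }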

    Read the article

  • How to serve pages through multiple frameworks/template engines efficiently

    - by Leftium
    I would like to render a file that has both PHP tags and Web2py tags mixed together. To do this, I would like the web server to pass the file through Web2py first, then PHP. I found a method to call PHP from Web2py via Python (based on this method for running PHP on top of Django), but it loses the benefits of any server optimizations from mod_php or FastCGI, like caching and multi-threaded operation: a new process is created for each PHP request, which is very slow. Is there a better way to efficiently render pages with both Web2py (Python) and PHP tags in the same file? Note that I am not looking for methods of serving PHP-only and Web2py-only files from the same server/domain. I prefer solutions for Apache 2 or Cherokee, but I'm open to using other web servers. Background info: I prefer to develop in Web2py, but we have a pre-existing system written in PHP. I would like to augment the PHP system with some of Web2py's features, like the Auth authentication/user-management system and the T() internationalization object. It would also make it much easier to port the PHP project to Web2py if it could be done piecemeal; since the PHP project consists of many files, it would greatly help if they did not need modification.

    Read the article

  • Nginx: Serve static files out of a given directory - one level too deep

    - by Joe J
    I'm pretty new to nginx configs and I'm having difficulty with a fairly basic problem. I'd like to host some static files at /doc (index.html, some images, etc.). The files are located in a directory called /sites/mysite/proj/doc/. The problem is that with the nginx config below, nginx looks for a directory called /sites/mysite/proj/doc/doc. Perhaps this could be fixed by setting the root to /sites/mysite/proj/, but I don't want to potentially expose other (non-static) assets in the proj/ directory, and for various reasons I can't really move the doc/ directory from where it is. I think there may be a way to use a rewrite rule to solve this, but I don't really understand all the parts, so I'm having some difficulty formulating it:

        rewrite ^/doc/(.*)$ /$1 permanent;

    I've also included a working example of hosting files out of a /sites/mysite/htdocs/static/ directory:

        > vim locations.conf
        location /static {
            root /sites/mysite/htdocs/;
            access_log off;
            autoindex on;
        }
        location /doc {
            root /sites/mysite/proj/doc/;
            access_log on;
            autoindex on;
        }

    The error log shows:

        2011/11/19 23:49:00 [error] 2314#0: *42 open() "/sites/mysite/proj/doc/doc" failed (2: No such file or directory), client: 100.100.100.100, server: , request: "GET /doc HTTP/1.1", host: "myhost.com"

    Does anyone have any ideas how I might go about serving this static content? Any help is much appreciated. Thanks, Joe
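
    For what it's worth, a common fix for this "one level too deep" symptom (a sketch, not tested against this exact setup) is to use alias instead of root, because root appends the full location path to the directory while alias replaces it:

        location /doc {
            # alias maps /doc/foo -> /sites/mysite/proj/doc/foo,
            # whereas root would look for /sites/mysite/proj/doc/doc/foo
            alias /sites/mysite/proj/doc;
            access_log on;
            autoindex on;
        }

    This also avoids exposing the rest of proj/, since only the doc/ subtree is mapped.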

    Read the article

  • .NET not processing an XML file in IIS

    - by Stuart McIntosh
    We have two servers: one already configured with .NET, which works fine, and a new one that appears to be configured the same, but when I open an XML page in Internet Explorer it complains about the <% tag. We have IIS on Windows Server 2003 SP2, and the website is configured with .NET 1.1.4322. In the ISAPI extensions we have set the .xml extension to use c:\windows\microsoft.net\framework\v1.1.4322\aspnet_isapi.dll. But the page:

        <property name="documentmaxage" value="0"/>
        <property name="documentmaxstale" value="0"/>
        <var name="m_Prompt_Path" />
        <form id="InitVoiceXmlDoc">
            <block>
                <assign name="m_Prompt_Path" expr="&quot;<% Response.Write(Request.QueryString["m_Prompt_Path"]); %>&quot;"/>
            </block>
        </form>

    gives the error:

        The XML page cannot be displayed
        Cannot view XML input using XSL style sheet. Please correct the error and then click the Refresh button, or try again later.
        The character '<' cannot be used in an attribute value. Error processing resource 'http://localhost:11119/fails.xml'. Lin... &quo...

    We have the same config on another server, which works fine. Are there other options besides the ISAPI extensions that I need to look at? If I rename the page to .aspx, of course, it works fine.
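
    One thing worth comparing against the working server (a sketch, assuming the standard .NET 1.1 page handler; verify this against the machine that works): the ISAPI mapping only hands the request to ASP.NET, but ASP.NET itself must also be told to compile .xml files as pages, via web.config:

        <configuration>
          <system.web>
            <httpHandlers>
              <!-- hand .xml requests to the ASP.NET page handler so <% %> blocks run -->
              <add verb="*" path="*.xml" type="System.Web.UI.PageHandlerFactory" />
            </httpHandlers>
          </system.web>
        </configuration>

    If that mapping is missing, the raw file is served and the browser chokes on the <% %> block, which matches the error shown.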

    Read the article

  • Msg 10054, Level 20, State 0, Line 0 error when altering a stored procedure to add a couple of cursors

    - by doug_w
    We have a home-rolled backup stored procedure that uses xp_cmdshell to create and clean up database backups. We are trying to deploy this script to an instance running 2005 SP3, and I am at a bit of a loss as to why it is not working. When I execute the CREATE, it runs for about 30 seconds and yields the following error:

        Msg 10054, Level 20, State 0, Line 0
        A transport-level error has occurred when sending the request to the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)

    In my tinkering I discovered that removing the cursors that actually do the work allows me to create the stored procedure (not very helpful for me, though); if I add the cursors back in using an ALTER, the error returns. I would be curious whether someone has experienced this problem and knows of a solution or workaround. I am not opposed to posting the source; it is just lengthy. Things I have checked: the error logs, and there are no dump files in the log directory. Thanks in advance for the help.

    Read the article

  • How to remove static IP from Mitel 5312 and enable DHCP

    - by jimbo
    I'm not sure this is the right forum for this question -- although I'm confident I'll be told if not! -- but I've read the fine manual (at least, such a manual as I have), I've googled, and I cannot get any insight into where to even start solving this problem. I have a bunch of Mitel 5312 handsets talking to a 3300 ICP controller. Some handsets are at a remote location, get an address from my DHCP server over there, and use the Mitel "Teleworker" extension to connect in over the Internet. The remaining handsets were set up with static IPs by a BT-supplied engineer, on the same subnet as the ICP itself. So far, so good. I have one remaining Teleworker licence and need to move a handset from the home location to the remote one. I've managed to boot it and configure Teleworker, but I cannot for the life of me see where I tell it to forget its static IP and make a DHCP request. Any ideas? Should I be looking on the controller, or holding magic combinations of buttons on the handset itself? EDIT: Following some advice from Robert, below, I've broken out a spare device, reassigned the profile for this user's extension to the MAC of the new phone, and assigned a new profile to the old MAC. Unfortunately this still doesn't get me anywhere: the new handset now asks for the Teleworker install password. I suspect I'm going to have to get a Mitel engineer involved here, since I've never been given that password... unless anyone has any great ideas?

    Read the article

  • Using public interfaces on a server connected through a GRE tunnel

    - by Evan
    I'm pretty new to networking, so please forgive any terminology mistakes. I have two servers connected with a GRE tunnel:

        Server1 (10.0.0.1) ---- Server2 (10.0.0.2)

    I want to be able to bind to Server2's public IPs from Server1. To do this, I set up virtual interfaces with Server2's public IPs on Server1 and then used routing rules on Server1 to route the packets through the GRE tunnel. On Server1:

        ip rule add from [Server2's first public IP] table gre
        ip rule add from [Server2's second public IP] table gre
        ip route add default via 10.0.0.2 dev gre1 table gre

    This works great, and I can see the packets arriving via GRE on Server2. I can see the packet exiting the tunnel on Server2's gre1 device, as shown. From Server1:

        ping -I [Server2's public ip] google.com

    tcpdump on Server2's GRE tunnel device:

        12:07:17.029160 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84) [Server2's public ip] > 74.125.225.38: ICMP echo request, id 6378, seq 50, length 64

    This is exactly the packet I want. However, I'm not seeing it go out at all on eth0:0 (where Server2's public IP is bound). I've also tried using routing rules to make packets coming from Server2's public IP (which would be coming out of dev gre1) go out through dev eth0 via the public default gateway, and that doesn't work either. I'm at a loss; thank you to anyone who can help.
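
    In case it helps, two kernel settings commonly bite in exactly this setup (a hedged sketch of things to check on Server2, not a confirmed diagnosis):

        # packets arriving on gre1 must be allowed to be forwarded out eth0
        sysctl -w net.ipv4.ip_forward=1
        # reverse-path filtering drops packets whose source address does not
        # match the interface they arrived on, which is the case for
        # Server2's public IPs arriving via the tunnel
        sysctl -w net.ipv4.conf.all.rp_filter=0
        sysctl -w net.ipv4.conf.gre1.rp_filter=0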

    Read the article

  • openSSL tutorial not fully working - Can sign but cannot restore original file

    - by djechelon
    I'm writing (and testing) a little tutorial for my groupmates involved in an OpenSSL homework assignment. We have a bunch of PDF files; I'm the CA, and each groupmate should send me a signed PDF for me to verify. I've told them to do the following (and tried to do it myself):

        1. Request and obtain a certificate (I'll skip this part)
        2. Create a MIME message with the PDF file in it:
           makemime -c "text/pdf" -a "Content-Disposition: attachment; filename=Elaborato.pdf" Elaborato.pdf > Elaborato.pdf.msg
        3. Sign with openssl:
           openssl smime -sign -in Elaborato.pdf.msg -out Elaborato.pdf.p7m -certfile ca.pem -certfile nomegruppo.crt -inkey nomegruppo.key -signer nomegruppo.crt
        4. Verify with:
           openssl smime -verify -in Elaborato.pdf.p7m -out Elaborato-verified.msg -CAfile ca.pem -signer nomegruppo.crt
        5. Extract the attachment with munpack Elaborato-verified.msg
        6. View with Acrobat Reader

    The problem is that even though I get a file whose binary content resembles a PDF, my current Ubuntu PDF viewer doesn't read it. The XXXElaborato.pdf extracted by munpack is a little bit smaller than the original. What's the problem with this procedure? In theory, they should send me the signed S/MIME message and I should be able to read the PDF within it. Why can't I restore the original content of the PDF file?
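
    A hedged guess about the corruption: by default, openssl smime converts the message to canonical form (CRLF line endings), which silently mangles binary payloads like PDFs, and a slightly smaller output file fits that pattern. The -binary flag disables the conversion (a sketch, with the same file names as the steps above):

        # sign without canonicalising line endings
        openssl smime -sign -binary -in Elaborato.pdf.msg -out Elaborato.pdf.p7m \
            -certfile ca.pem -inkey nomegruppo.key -signer nomegruppo.crt
        # verify the same way
        openssl smime -verify -binary -in Elaborato.pdf.p7m -out Elaborato-verified.msg \
            -CAfile ca.pem -signer nomegruppo.crt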

    Read the article

  • FreeRADIUS Default Answer

    - by jinanwow
    We are using FreeRADIUS with a MySQL database to authenticate users. We ran into an issue where our MySQL database was slow, causing the maximum number of threads to be reached. The problem with this is that when the server couldn't answer requests because no threads were available, it sent Access-Reject responses to the clients. Our devices cache client connections and periodically check with the server to see whether clients should still be allowed or be removed. The equipment is designed so that if there is no response from the server and a client is connected, it remains connected. The issue is that when the RADIUS server is at its max threads, its default answer is to send Access-Reject (verified via packet capture); we would like to change the default behavior to simply ignore the request, keeping the clients connected. We have fixed the MySQL database issue for now, but I would like to change the default from Access-Reject to ignoring the client altogether. I have done research but have not been able to find an answer to the question. Thanks in advance.

    Read the article

  • Diagnosing Microsoft SQL Server error 9001: The log for the database is not available.

    - by Scott Mitchell
    Over the weekend a website I run stopped functioning, recording the following error in the Event Viewer each time a request is made to the website:

        Event ID: 9001
        The log for database 'database name' is not available. Check the event log for related error messages. Resolve any errors and restart the database.

    The website is hosted on a dedicated server, so I am able to RDP into the server and poke around. The LDF file for the database exists in the C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA folder, but attempting to do any work with the database from Management Studio results in a dialog box reporting the same error: 9001, the log for the database is not available... This is the first time I've received this error, and I've been hosting this site (and others) on this dedicated web server for over two years now. It is my understanding that this error indicates a corrupt log file. I was able to get the website back online by detaching the database and then restoring a backup from a couple of days ago, but my concern is that this error is indicative of a more sinister problem, namely a hard drive failure. I emailed support at the web hosting company, and this was their reply:

        There doesn't appear to be any other indications of the cause in the Event Log, so it's possible that the log was corrupted. Currently the memory's resources is at 87%, which also may have an impact but is unlikely.

    Can the log just "become corrupted"? My question: What are the next steps I should take to diagnose this problem? How can I determine whether this is, indeed, a hardware problem? And if it is, are there any options beyond replacing the disk? Thanks
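
    A reasonable next step after restoring (a sketch; 'database name' is a placeholder, and this is I/O-intensive, so run it during a quiet period) is to have SQL Server check the restored database's integrity, since repeated corruption findings here tend to point at the underlying disk:

        -- report only actual errors, and all of them
        DBCC CHECKDB ('database name') WITH NO_INFOMSGS, ALL_ERRORMSGS;

    Pairing that with a review of the Windows System log for disk/ntfs events helps separate a one-off logical corruption from failing hardware.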

    Read the article

  • IIS8 ASP.NET State Service remote connection failure

    - by maxisam
    We recently upgraded our web server to Windows Server 2012 with IIS8. Now, when users try to connect to the ASP.NET State Service on this web server remotely, this always pops up:

        Unable to make the session state request to the session state server. Please ensure that the ASP.NET State service is started and that the client and server ports are the same. If the server is on a remote machine, please ensure that it accepts remote requests by checking the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters\AllowRemoteConnection. If the server is on the local machine, and if the before mentioned registry value does not exist or is set to 0, then the state server connection string must use either 'localhost' or '127.0.0.1' as the server name.

    In IIS 7/7.5 we used the same approach and it worked fine: as long as the State Service is running and the firewall is set properly, we don't have any problems. In IIS8, however, it doesn't work (we even turned off the firewall to test it). Thanks for helping.
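
    For completeness, the registry value the error message refers to can be set and the service restarted like this (a sketch; run in an elevated prompt on the state-server machine):

        reg add HKLM\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters /v AllowRemoteConnection /t REG_DWORD /d 1 /f
        net stop aspnet_state
        net start aspnet_state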

    Read the article

  • Sun Grid Engine (SGE) / limiting simultaneous array job sub-tasks

    - by wfaulk
    I am installing a Sun Grid Engine environment and I have a scheduler limit that I can't quite figure out how to implement. My users will create array jobs that have hundreds of sub-tasks. I would like to be able to limit those jobs to only running a set number of tasks at the same time, independent of other jobs. Like I might have one array job that I want to run 20 tasks at a time, and another I want to run 50 tasks at a time, and yet another that I'm fine running without limit. It seems like this ought to be doable, but I can't figure it out. There's a max_aj_instances configuration option, but that appears to apply globally to all array jobs. I can't see any way to use consumable resources, as I'd need a "complex attribute" that is per-job, and that feature doesn't seem to exist. It didn't look like resource quotas would work, but now I'm not so sure of that. It says "A resource quota set defines a maximum resource quota for a particular job request", but it's unclear if an array job's sub-tasks' resource requests will be aggregated for the purposes of the resource quota. I'm going to play with this, but hopefully someone already knows outright.
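
    One thing worth checking (hedged: this depends on the Grid Engine version, as the switch only appeared in later 6.2 updates) is the per-array-job task concurrency option on qsub, which matches this requirement exactly:

        # submit a 500-task array job, but never run more than 20 tasks at once
        qsub -t 1-500 -tc 20 myjob.sh

    Since -tc is set per submission, one job can run 20 at a time, another 50, and a third without any limit.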

    Read the article

  • Sharepoint Central Administration stuck / high CPU usage

    - by johnnyb10
    I'm using WSS 3 and I recently added a new web application to my SharePoint server. After adding it, I wasn't able to open the Central Administration site, and I noticed a w3wp.exe error (Event ID 1000) in the Event Viewer. The situation now is that the w3wp.exe process hovers around 50% CPU usage continuously. I installed a program called IIS Peek, and it shows continuous GET requests on the Central Administration site; this happens even if I stop the Central Administration site in IIS. The IP address identified in the GET requests is my workstation's, which is what I used to attempt to access Central Administration after I created the new web application. Can someone explain what's going on and how I might fix it? It seems as if my computer tried to access Central Administration and then hung, and the page requests that were happening at the time are somehow continuing over and over again. So my two problems are the inability to access Central Administration and the CPU usage of w3wp.exe, which I'm assuming are two symptoms of the same problem. I'd like to know if there's anything I can do besides restarting IIS, because we have clients accessing other sites on this server. Thanks.

    Read the article

  • PHP crashing during OAuth scripts

    - by FunkyChicken
    I just installed Nginx 1.2.4 and PHP 5.4.0 (from svn, running as php-fpm) on CentOS 5.8 64-bit. The problem I have is that PHP crashes the moment I run any social OAuth scripts. I have tried to log into Facebook, Twitter, and Google with various scripts that I know work on my other servers. When I load the scripts I get a 502 error from Nginx, and I find these errors in the logs.

    In the php-fpm log:

        WARNING: [pool www] child 23821 exited on signal 11 (SIGSEGV) after 1132.862984 seconds from start

    In the nginx log:

        ERROR: recv() failed (104: Connection reset by peer) while reading response header from upstream

    From what I can see, it goes wrong when PHP tries to make a request to any of the OAuth servers. For example, https://github.com/mahmudahsan/PHP-SDK-3.0---Graph-API-base-Facebook-Connect-Tutorial-Source is one of the scripts that works perfectly on my other machines but causes PHP to crash here. I found http://stackoverflow.com/questions/3616191/nginx-php-fpm-502-bad-gateway, which seems to be a similar problem, but I cannot find a way to solve it.

    UPDATE: I have been doing some debugging in one of the scripts that is playing up. Line 808 (http://pastebin.com/gSnzRtXb) runs the curl_exec() command, and when that runs, PHP crashes. If I put echo 'test'; exit; just above that line, it echoes correctly; if I put it below that line, PHP crashes. That means line 808 causes the crash. So I made a very simple script to do some testing (http://pastebin.com/Rshnyhcm), which also uses curl_exec, but that runs just fine. I then dug deeper into the query from the Facebook script to see what values the $opts array contains as of line 806; the output of that array is at http://pastebin.com/Cq9ffd3R. What the problem is, I still have no clue :(
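
    Since this is a SIGSEGV, a backtrace from a core dump usually narrows it down to the library at fault (a sketch, assuming a typical php-fpm build; the binary path and core location are assumptions):

        ; in php-fpm.conf, then restart php-fpm:
        rlimit_core = unlimited

        # choose where cores land, then reproduce the crash:
        echo '/tmp/core.%e.%p' > /proc/sys/kernel/core_pattern
        # inspect the dump (substitute the actual pid from the fpm log):
        gdb /usr/local/sbin/php-fpm /tmp/core.php-fpm.23821
        (gdb) bt

    Crashes inside curl_exec against HTTPS endpoints often implicate the SSL library libcurl was built against, so the top frames of the backtrace are worth comparing with the working servers.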

    Read the article

  • Does MySQL log successful or attempted queries?

    - by Nathan Long
    I'm trying to track down a hit-or-miss bug in a web application. Sometimes a request completes just fine; sometimes it hangs and never finishes. I see that Apache now has several requests listed on the server-status page as "sending reply," and that doesn't change. I'm testing on localhost, so there shouldn't ever be more than one. Out of curiosity, I set MySQL to log all queries and am tail -fing the log file. When things go OK, I see a pattern like this:

        20 Connect root@localhost on dbname
        20 Query (some query #1)
        20 Query (some query #2)
        (etc)
        20 Quit
        21 Connect (etc)

    When it hangs, I see a pattern like this:

        22 Connect root@localhost on dbname
        22 Query (some query #1)
        //nothing happens, so I try the post again
        23 Connect root@localhost on dbname
        23 Query (some query #1)
        //nothing happens; try again
        24 Connect (etc)

    Here's my question: is MySQL logging attempted queries or successful queries? In other words, if the last line I see is query #1, does that imply that query #1 or query #2 is hanging? My guess is that the one I don't see is the problem, because the last one I see looks fine, but maybe the one I don't see is too screwed up for MySQL to process. Thoughts?
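
    For what it's worth, a quick way to see which statement a hung connection is actually executing (a sketch; run from a separate mysql session while a request is stuck):

        SHOW FULL PROCESSLIST;
        -- a thread sitting in a state like 'Sending data' or 'Locked' with a
        -- large Time value is still inside the query shown in its Info column,
        -- which distinguishes "query #1 is hanging" from "query #2 never arrived"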

    Read the article

  • Domain workstation acting up and I can't track it down.

    - by DevNULL
    I have a developer with a Windows XP (SP2) 64-bit machine. If the machine is left on overnight (or for any period longer than 5-6 hours), it takes 2-3 minutes to open any local drive, and his network drives are no longer accessible. Here's what the system logs report. BTW: the problem started a week ago, and nothing has changed on the domain controller / AD or on his machine. Any help?

    ERROR 1:

        Event Type: Error
        Event Source: NETLOGON
        Event Category: None
        Event ID: 5719
        Date: 6/8/2010  Time: 9:17:26 AM
        User: N/A
        Computer: BFC1
        Description: This computer was not able to set up a secure session with a domain controller in domain UR due to the following: There are currently no logon servers available to service the logon request. This may lead to authentication problems. Make sure that this computer is connected to the network. If the problem persists, please contact your domain administrator.
        ADDITIONAL INFO: If this computer is a domain controller for the specified domain, it sets up the secure session to the primary domain controller emulator in the specified domain. Otherwise, this computer sets up the secure session to any domain controller in the specified domain.
        Data: 0000: 5e 00 00 c0

    ERROR 2:

        The machine-default permission settings do not grant Local Activation permission for the COM Server application with CLSID {555F3418-D99E-4E51-800A-6E89CFD8B1D7} to the user NT AUTHORITY\LOCAL SERVICE SID (S-1-5-19). This security permission can be modified using the Component Services administrative tool.

    ERROR 3:

        Event Type: Error
        Event Source: RemoteAccess
        Event Category: None
        Event ID: 20106
        Date: 6/8/2010  Time: 10:12:18 AM
        User: N/A
        Computer: BFC1
        Description: Unable to add the interface {E76F0A78-7A0B-4EBB-A081-BA3BD452FC4C} with the Router Manager for the IP protocol. The following error occurred: Cannot complete this function.
        Data: 0000: eb 03 00 00
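
    When NETLOGON 5719 shows up like this, testing the machine's secure channel directly can help separate a network/DC-locator problem from a broken machine account (a sketch, assuming the Windows Support Tools are installed on the workstation; UR is the domain name taken from the event text):

        rem query the state of the secure channel to the domain
        nltest /sc_query:UR
        rem if it reports the channel as broken, resetting it may help
        nltest /sc_reset:UR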

    Read the article

  • InstantSSL's certificate no different from a self-signed certificate under Nginx with an IP-accessed address

    - by Absolute0
    I ordered an SSL certificate from InstantSSL and got the following pair of files: my_ip.ca-bundle and my_ip.crt. I had also previously generated my own key and crt files using openssl. I concatenated all the crt files:

        cat my_previously_generted.crt my_ip.ca_bundle my_ip.crt > chained.crt

    and configured nginx as follows:

        server {
            ...
            listen 443;
            ssl on;
            ssl_certificate /home/dmsf/csr/chained.crt;
            ssl_certificate_key /home/dmsf/csr/csr.nopass.key;
            ...
        }

    I don't have a domain name, per the client's request. When I open https://my_ip in the browser, Chrome gives me this error:

        The site's security certificate is not trusted! You attempted to reach my_ip, but the server presented a certificate issued by an entity that is not trusted by your computer's operating system. This may mean that the server has generated its own security credentials, which Google Chrome cannot rely on for identity information, or an attacker may be trying to intercept your communications. You should not proceed, especially if you have never seen this warning before for this site.
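
    A hedged observation about the concatenation order: the file nginx serves must start with the certificate that matches the private key, followed by the intermediates; with the old self-signed crt first, browsers see the self-signed certificate, which matches the warning exactly. Something like this (a sketch, using the file names above) is the usual layout:

        # server certificate first, then the CA bundle; omit the old self-signed crt
        cat my_ip.crt my_ip.ca-bundle > chained.crt

    Note this also requires that ssl_certificate_key points at the key used for the CSR that InstantSSL signed, not at a key belonging to the old self-signed certificate.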

    Read the article

  • Nginx configuration question

    - by Pockata
    Hey guys, I'm trying to make the autoindex feature run only for my IP address with this code:

        server {
            ...
            autoindex off;
            ...
            if ($remote_addr ~ 94.156.58.138) {
                autoindex on;
            }
            ...
        }

    But it doesn't work: it gives me a 403 :/ Can someone help me? :) BTW, I'm using Debian Lenny and Nginx 0.6 :)

    EDIT: Here's my full configuration:

        server {
            listen 80;
            server_name site.com;
            server_name_in_redirect off;
            client_max_body_size 4M;
            server_tokens off;
            # log_subrequest on;
            autoindex off;
            # expires max;
            error_page 500 502 503 504 /var/www/nginx-default/50x.html;
            # error_page 404 /404.html;
            set $myhome /bla/bla;
            set $myroot $myhome/public;
            set $mysubd $myhome/subdomains;
            log_format new_log '$remote_addr - $remote_user [$time_local] $request '
                '"$status" "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';
            # Star nginx :@
            access_log /bla/bla/logs/access.log new_log;
            error_log /bla/bla/logs/error.log;

            if ($remote_addr ~ 94.156.58.138) {
                autoindex on;
            }

            # Subdomains
            if ($host ~* (.*)\.site\.org$) {
                set $myroot $mysubd/$1;
            }

            # Static files
            # location ~* \.(jpg|jpeg|gif|css|png|js|ico)$ {
            #     access_log off;
            #     expires 30d;
            # }

            location / {
                root $myroot;
                index index.php index.html index.htm;
            }

            # PHP
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $myroot$fastcgi_script_name;
                include fastcgi_params;
            }

            # .htaccess
            location ~ /\.ht {
                deny all;
            }
        }

    I forgot to mention that when I add the code to remove static files from my access log, the static files can no longer be accessed. I don't know if that's relevant :)

    Read the article

  • Courier IMAP always disconnects since update

    - by Raffael Luthiger
    Since one of our customers updated their server, Courier no longer handles IMAP connections properly. POP3 works without any problems. When I test IMAP with telnet, it always goes like this:

        $ telnet domain.com 143
        Trying 188.40.46.214...
        Connected to domain.com.
        Escape character is '^]'.
        * OK [CAPABILITY IMAP4rev1 UIDPLUS CHILDREN NAMESPACE THREAD=ORDEREDSUBJECT THREAD=REFERENCES SORT QUOTA IDLE ACL ACL2=UNION STARTTLS] Courier-IMAP ready. Copyright 1998-2011 Double Precision, Inc. See COPYING for distribution information.
        01 LOGIN [email protected] test
        Connection closed by foreign host.

    I enabled debugging in authdaemond, but the output does not really help much:

        Apr 12 23:10:04 servername authdaemond: received auth request, service=imap, authtype=login
        Apr 12 23:10:04 servername authdaemond: authmysql: trying this module
        Apr 12 23:10:04 servername authdaemond: SQL query: SELECT login, password, "", uid, gid, homedir, maildir, quota, "", concat('disableimap=',disableimap,',disablepop3=',disablepop3) FROM mail_user WHERE login = '[email protected]'
        Apr 12 23:10:04 servername authdaemond: password matches successfully
        Apr 12 23:10:04 servername authdaemond: authmysql: sysusername=<null>, sysuserid=5000, sysgroupid=5000, homedir=/var/vmail, [email protected], fullname=<null>, maildir=/var/vmail/domain.com/test, quota=0, options=disableimap=n,disablepop3=n
        Apr 12 23:10:04 servername authdaemond: Authenticated: sysusername=<null>, sysuserid=5000, sysgroupid=5000, homedir=/var/vmail, [email protected], fullname=<null>, maildir=/var/vmail/domain.com/test, quota=0, options=disableimap=n,disablepop3=n

    Right after the "Authenticated" line the output stops. There is no other message, and in no other log file I've checked could I find any related message. The system was updated from Ubuntu 10.10 to 12.04. How could I get more information? Or does anybody have an idea what could go wrong here?
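
    Since authdaemond clearly succeeds, the disconnect is presumably happening in the imapd process spawned right after authentication. Attaching strace during a login attempt often shows the failing operation (a sketch; exact process names and paths vary by package):

        # find the couriertcpd process listening on port 143, then follow its children:
        strace -f -tt -o /tmp/imap.trace -p <couriertcpd-pid>
        # reproduce the telnet login, then look for errors near the end of the trace;
        # a missing Maildir or permission problems under /var/vmail show up here
        grep -E 'ENOENT|EACCES' /tmp/imap.trace | tail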

    Read the article

  • IP Blacklists and suspicious inbound and outbound traffic

    - by Pantelis Sopasakis
    I administer a web server, and recently our IP was banned (!) by our host after they received a notification e-mail for abuse. In particular, our server is allegedly involved in spam attacks over HTTP. The content of the abuse report e-mail was not very informative; for example, the IP addresses our server is supposed to have attacked are not included. So I started a wireshark session checking for suspicious traffic over TCP/HTTP while trying to locate possible security holes on the system. (Let me note that the machine runs a Debian OS.) Here is an example of such a request:

        Source: 89.74.188.233
        Destination: 12.34.56.78 // my ip
        Protocol: HTTP
        Info: GET 'http://www.media.apniworld.com/image.php?type=hv' HTTP/1.0

    I manually blacklisted this host (as well as some others), blocking them with iptables, but I can't keep doing this manually all day long... I'm looking for an automated way to block such IPs based on: statistical analysis, pattern recognition, or other AI-based analysis (though I'm reluctant to trust such a solution, if one exists); public blacklists; or DNSBL. Using DNSBL I actually found out that 89.74.188.233 is blacklisted. However, other IPs that are strongly suspicious, like 93.199.112.126 (i.e. http://www.pornstarnetwork.com/account/signin), unfortunately were not blacklisted! What I would like to do is automatically connect my firewall to DNSBL (or some other blacklist database) and block all traffic to or from blacklisted IPs, or somehow have my local blacklist updated automatically.
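
    As a building block, a DNSBL lookup is just a DNS query for the reversed octets of the IP under the list's zone, so a cron-driven sketch like this can feed the firewall (assuming dig and iptables are available; zen.spamhaus.org is one example list, and a listed IP resolves to a 127.0.0.x code):

        #!/bin/sh
        # block $1 if it is listed in the DNSBL
        IP=$1
        REV=$(echo "$IP" | awk -F. '{print $4"."$3"."$2"."$1}')
        if dig +short "$REV.zen.spamhaus.org" | grep -q '^127\.'; then
            iptables -A INPUT  -s "$IP" -j DROP
            iptables -A OUTPUT -d "$IP" -j DROP
        fi

    For the traffic-pattern side of the question, tools in the fail2ban family watch logs and add iptables rules automatically, which covers hosts that no public list has caught yet.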

    Read the article

  • Is a timeout in tracert output an indication of an error?

    - by nitramk
    TCP/IP packets sent from my computer to a remote server do not always reach their destination, and they sometimes end up being retransmitted several times before they succeed. To troubleshoot this, I'm running a tracert to the server:

        Tracing route to <site> [<address>]
        over a maximum of 30 hops:
          1    <1 ms    <1 ms    <1 ms  mymachine
          2    <1 ms    <1 ms    <1 ms  gw.levonline.com [217.70.32.30]
          3    <1 ms    <1 ms    <1 ms  81.201.213.218
          4    <1 ms    <1 ms    <1 ms  bmf1-hmf1.driften.net [81.201.213.12]
          5    <1 ms    <1 ms    <1 ms  10ge-2-4-cr2.a1.sth.ownit.se [84.246.88.157]
          6    <1 ms     *      <1 ms  netnod-ix-ge-b-sth-4470.microsoft.com [195.69.11.181]
          7    26 ms     *       *     ge-3-0-0-0.ams-64cb-1a.ntwk.msn.net [207.46.42.1]
          8    48 ms    57 ms   56 ms  ten9-1.lts-76e-1.ntwk.msn.net [207.46.42.133]
          9     *        *       *     Request timed out.

    In hops 6 and 7, I'm seeing timeouts while waiting for the reply (as shown above). Running the same tracert many times gives varying output; sometimes the responses are fine, but sometimes I get this timeout for one, two, or all three packets. The timeout always starts at the same server, netnod-ix-ge-b-sth-4470.microsoft.com. I've tried setting the tracert timeout to 10 seconds, but I'm still getting the timeouts. Running tracert towards other servers does not give me the same timeouts. Microsoft's network technicians tell me the problem is not on "their" side. Are these timeouts an indication of a lost packet on the specific node that did not respond? Are they an indication of a problem at all, or is this normal?

    Read the article
