Search Results

Search found 3358 results on 135 pages for 'ssl'.

Page 72/135 | < Previous Page | 68 69 70 71 72 73 74 75 76 77 78 79  | Next Page >

  • Slash after domain in URL missing for Rails site

    - by joshee
    After redirecting users in a Rails app, for some reason the slash after the domain is missing. Generated URLs are invalid and I'm forced to manually correct them. The problem only occurs on a subdomain. On a different primary domain (same server), everything works ok. For example, after logging out, the site is directing to https://www.sub.domain.comlogin/ rather than https://www.sub.domain.com/login I suspect the issue has something to do with the vhost setup, but I'm not sure. Here are the broken and working vhosts: BROKEN SUBDOMAIN <VirtualHost *:80> ServerName www.sub.domain.com ServerAlias sub.domain.com Redirect permanent / https://www.sub.domain.com </VirtualHost> <VirtualHost *:443> ServerAdmin [email protected] ServerName www.sub.domain.com ServerAlias sub.domain.com RailsEnv production # SSL Engine Switch SSLEngine on # SSL Cipher Suite: SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL # Server Certificate SSLCertificateFile /path/to/server.crt # Server Private Key SSLCertificateKeyFile /path/to/server.key # Set header to indentify https requests for Mongrel RequestHeader set X_FORWARDED_PROTO "https" BrowserMatch ".*MSIE.*" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 DocumentRoot /home/usr/www/www.sub.domain.com/current/public/ <Directory "/home/usr/www/www.sub.domain.com/current/public"> AllowOverride all Allow from all Options -MultiViews </Directory> WORKING PRIMARY DOMAIN <VirtualHost *:80> ServerName www.diffdomain.com ServerAlias diffdomain.com Redirect permanent / https://www.diffdomain.com </VirtualHost> <VirtualHost *:443> ServerAdmin [email protected] ServerName www.diffdomain.com ServerAlias diffdomain.com ServerAlias *.diffdomain.com RailsEnv production # SSL Engine Switch SSLEngine on # SSL Cipher Suite: SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL # Server Certificate SSLCertificateFile /path/to/server.crt # Server Private Key SSLCertificateKeyFile /path/to/server.key # Set header to indentify https requests for Mongrel RequestHeader set X_FORWARDED_PROTO "https" BrowserMatch ".*MSIE.*" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 DocumentRoot /home/usr/www/www.diffdomain.com/current/public/ <Directory "/home/usr/www/www.diffdomain.com/current/public"> AllowOverride all Allow from all Options -MultiViews </Directory> </VirtualHost> Please let me know if there's anything else I could provide that would help determine what's wrong here. UPDATE tried adding a trailing slash to the redirect command, but still no luck.
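    One thing worth checking (a sketch, not a verified fix for this vhost): Apache's Redirect does plain prefix substitution, so a target without a trailing slash glues the remaining path straight onto the hostname. Since adding the slash reportedly didn't help, a mod_rewrite rule that carries the path explicitly may make the behaviour easier to see; the hostnames below are the ones from the question, and mod_rewrite must be enabled:

        <VirtualHost *:80>
            ServerName www.sub.domain.com
            ServerAlias sub.domain.com
            RewriteEngine On
            # Append the original path after an explicit slash instead of
            # relying on Redirect's prefix substitution.
            RewriteRule ^/?(.*)$ https://www.sub.domain.com/$1 [R=301,L]
        </VirtualHost>

    Also note that the broken 443 vhost, as pasted, is missing its closing </VirtualHost> tag, unlike the working one.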

    Read the article

  • How to change the mail server's domain so it's not displaying the IP? Changing user@[server IP] to user@[domain name]

    - by Pavel
    Hi guys. I'm kinda a noob as a server admin so please bear with me. I've installed postfix mail server and everything is working fine but the 'from' box is displaying [email protected]. I want to set it up so it displays domainname.com instead of IP. I just hope you know what I mean. My main.cf in postfix folder looks like this: # See /usr/share/postfix/main.cf.dist for a commented, more complete version # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. myorigin = /etc/mailname smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = no # TLS parameters smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. myhostname = mail.thevinylfactory alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = mail.thevinylfactory.com, thevinylfactory, localhost.localdomain, localhost relayhost = mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all Can anyone help me with this one? If you need any more details please let me know. Thanks in advance!
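    For reference, the domain Postfix stamps on locally submitted mail normally comes from myorigin/mydomain, and the posted config has a myhostname without ".com". A minimal sketch of the relevant main.cf lines, assuming the intended domain is thevinylfactory.com (follow with "sudo postfix reload"):

        # /etc/postfix/main.cf -- relevant lines only (sketch)
        myhostname = mail.thevinylfactory.com    # the posted value is missing ".com"
        mydomain   = thevinylfactory.com
        myorigin   = $mydomain                   # or put the bare domain in /etc/mailname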

    Read the article

  • Conditional cPanel Forwarder

    - by Wireblue
    We have many clients on a cPanel server, some of which also have SSL certificates set up. Each year, when the renewal notices are sent to webmaster@ their domain, we would like a copy of those emails so we can install the SSL certificate for them, issue invoices, etc. So I'm wondering if there is a way (in cPanel) to selectively or conditionally forward certain emails if they match certain rules? I'm thinking: if an email is sent to "webmaster@domain" and the subject contains "ssl", then forward it to "[email protected]". Any ideas? Thanks in advance, Wireblue
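    cPanel's account-level Email Filters can express exactly this kind of rule; under the hood they are stored as Exim filter rules, so the filter would look roughly like the sketch below (the sslmanager@myserver.com address is the hypothetical one from the question, and the exact file cPanel writes the rule to varies by version):

        # Exim filter
        if
          $h_subject: contains "ssl"
        then
          # "unseen" keeps a copy in the original webmaster@ mailbox
          unseen deliver sslmanager@myserver.com
        endif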

    Read the article

  • redirecting HTTPS requests to http in lighttpd

    - by chochim
    I have a lighttpd server running which has an SSL certificate installed. I would, for certain reasons, like to forward all https://www. requests to http://www. My lighttpd config looks as follows: $SERVER["socket"] == ":443" { ssl.engine = "enable" ssl.pemfile = "/path/to/pem/file" ssl.ca-file = "/path/to/ca/file" HTTP["host"] =~ "^www\.(.*)$" { url.redirect = ("^/(.*)" => "http://www.%1$1") } } Can you please point out the problem here? Another thing: what is the difference between %1 and $1?
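    On the %1 vs $1 question: in a url.redirect (or url.rewrite) target, %N refers to captures from the enclosing condition (here the host regex), while $N refers to captures from the redirect pattern itself. Two things worth checking in the snippet above, sketched below: host conditions are normally written $HTTP["host"] (with the leading $), and the target "http://www.%1$1" concatenates host and path with no slash between them, which would produce URLs like http://www.example.compath:

        $SERVER["socket"] == ":443" {
            ssl.engine  = "enable"
            ssl.pemfile = "/path/to/pem/file"
            ssl.ca-file = "/path/to/ca/file"

            $HTTP["host"] =~ "^www\.(.*)$" {
                # %1 = capture from the $HTTP["host"] condition, $1 = capture from this pattern
                url.redirect = ( "^/(.*)" => "http://www.%1/$1" )
            }
        }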

    Read the article

  • configure domino web server to use internet site document?

    - by kasper_341
    The Internet Site configurations view has, under security options: "Accept SSL site certificates" (default is No) and "Accept expired SSL certificates" (default is Yes). Question: how does this affect server behaviour? E.g. if I change the default for "Accept SSL site certificates" to Yes, then what effect will it have on the server? I hope the question is clear enough; if not, please let me know and I will rephrase it. Thanks

    Read the article

  • redirect wildcard subdomains to https (nginx)

    - by whatWhat
    I've got a wildcard SSL certificate and I'm trying to redirect all non-SSL traffic to SSL. Currently I'm using the following for redirecting the non-subdomained URL, which is working fine. server { listen 80; server_name mydomain.com; #Rewrite all nonssl requests to ssl. rewrite ^ https://$server_name$request_uri? permanent; } When I do the same thing for *.mydomain.com it logically redirects to https://%2A.mydomain.com/ How do you redirect all subdomains to their https equivalent?
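    A sketch of the usual fix: $server_name expands to the literal first name in the server_name directive (hence the encoded * in the redirect), whereas $host carries whatever subdomain the client actually asked for:

        server {
            listen 80;
            server_name mydomain.com *.mydomain.com;

            # $host preserves the requested subdomain, so foo.mydomain.com goes to
            # https://foo.mydomain.com/... rather than https://*.mydomain.com/...
            rewrite ^ https://$host$request_uri? permanent;
        }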

    Read the article

  • How do I configure postfix starttls

    - by Michael Temeschinko
    I need to install postfix on my webserver couse I need to use sendmail for my website. I only need to send mail not recieve or relay. send with starttls (port 587) via relay smtp.strato.de here is what happens Jul 15 00:02:38 negrita postfix/smtp[7120]: Host offered STARTTLS: [smtp.strato.de] Jul 15 00:02:38 negrita postfix/smtp[7120]: C717A181252: to=<[email protected]>, relay=smtp.strato.de[81.169.145.133]:587, delay=0.31, delays=0.09/0/0.16/0.04, dsn=5.7.0, status=bounced (host smtp.strato.de[81.169.145.133] said: 530 5.7.0 Bitte konfigurieren Sie ihr E-Mailprogramm fuer Authentifizierung am SMTP Server, wie auf www.strato.de/email-hilfe beschrieben. - Please configure your mail client for using SMTP Server Authentication (in reply to MAIL FROM command)) Jul 15 00:02:38 negrita postfix/cleanup[7118]: 29F5F181254: message-id=<20120714220238.29F5F181254@negrita> Jul 15 00:02:38 negrita postfix/qmgr[7102]: 29F5F181254: from=<>, size=2548, nrcpt=1 (queue active) Jul 15 00:02:38 negrita postfix/bounce[7121]: C717A181252: sender non-delivery notification: 29F5F181254 Jul 15 00:02:38 negrita postfix/qmgr[7102]: C717A181252: removed Jul 15 00:02:39 negrita postfix/local[7122]: 29F5F181254: to=<michael@negrita>, relay=local, delay=1.1, delays=0.04/0/0/1.1, dsn=2.0.0, status=sent (delivered to command: procmail -a "$EXTENSION") Jul 15 00:02:39 negrita postfix/qmgr[7102]: 29F5F181254: removed Jul 15 08:05:18 negrita postfix/master[1083]: daemon started -- version 2.9.1, configuration /etc/postfix Jul 15 08:05:29 negrita postfix/master[1083]: reload -- version 2.9.1, configuration /etc/postfix and my config michael@negrita:~$ postconf -n biff = no config_directory = /etc/postfix delay_warning_time = 4h home_mailbox = /home/michael/Maildir/ html_directory = /usr/share/doc/postfix/html inet_interfaces = localhost mailbox_command = procmail -a "$EXTENSION" mailbox_size_limit = 0 mydomain = example.com myhostname = negrita mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 notify_classes = resource, software, protocol readme_directory = /usr/share/doc/postfix recipient_delimiter = + relayhost = [smtp.strato.de]:587 smtp_sasl_password_maps = hash:/etc/postfix/sasl/passwd smtp_tls_enforce_peername = no smtp_tls_note_starttls_offer = yes smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) smtpd_sasl_auth_enable = yes smtpd_sasl_security_options = noanonymous smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtpd_use_tls = yes soft_bounce = yes the user and password is ok couse I can send mail with my thunderbird thanks in advance mike
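    The 530 reply means the relayhost never saw SMTP authentication. The postconf output above enables smtpd_sasl_auth_enable (the server side, for mail coming in) but not smtp_sasl_auth_enable (the client side, used when talking to smtp.strato.de). A hedged sketch of the missing client-side lines; remember to postmap the password file and reload Postfix afterwards:

        # /etc/postfix/main.cf -- client-side auth toward the relayhost (sketch)
        relayhost = [smtp.strato.de]:587
        smtp_sasl_auth_enable = yes              # note smtp_, not smtpd_
        smtp_sasl_password_maps = hash:/etc/postfix/sasl/passwd
        smtp_sasl_security_options = noanonymous
        smtp_use_tls = yes                       # issue STARTTLS toward the relay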

    Read the article

  • OCS 2007 Access Edge Server Certificate issue

    - by BWCA
    We are currently building additional OCS 2007 R2 Access Edge Servers to handle additional capacity.  We ran into a SSL certificate issue when we were setting up the servers. Before running the steps to Deploy an Edge Server, we successfully imported our SSL certificate that we use for external access on all of the new servers.  After successfully completing the first three Deploy Edge Server steps one one of the new servers, we started working on Step 4: Configure Certificates for the Edge Server.  After selecting Assign an existing certificate from the common tasks list and clicking Next to select a certificate, there were no certificates listed as shown below.   The first thing we did was to use the Certificates mmc snap-in to review the SSL certificate information.  We noticed in the General tab that Windows does not have enough information to verify this certificate and in the Certification Path that the issuer of this certificate could not be found for the SSL certificate that we imported successfully earlier.     While troubleshooting, we learned that we could not access the URL for the certificate’s CRL to download the CRL file due to restrictive firewall rules between the new OCS 2007 R2 Access Edge Servers and the Internet. After modifying the firewall rules, we were able to download the CRL file and when we reran Step 4 to assign an existing certificate, the certificate was listed.

    Read the article

  • HTTPS To http redirect issue. How to overcome?

    - by Akshay
    I have already seen suggestions on how to rewrite https to http, and I am currently using this technique: RewriteCond %{SERVER_PORT} ^443$ RewriteRule ^(.*)$ http://www.mysite.com/$1 [L,R=301] RewriteRule ^(.*)$ http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L] Problem: I am currently on a HostGator VPS and have found Google indexing my HTTPS pages. Weird for me, as I never bought an SSL certificate. My site is a blog only. When I asked on the Google forums ( https://productforums.google.com/forum/#!msg/webmasters/2Hz46t44nwk/7voZWudFtAQJ ), they said I should redirect https to http. Now that I have redirected this using the above method, I am still getting an SSL warning in browsers, and I found that Google is still indexing my new pages with https. I feel that since I do not have an SSL certificate, adding a redirect on https doesn't work. So if Google indexes my https pages, then I should go and buy an SSL certificate just to redirect https to http. Why would I do that? Please help me; this has reduced my traffic by nearly 30%. I have even told search engines to go to this file (which disallows everything) if they are on https: Options +FollowSymlinks RewriteCond %{SERVER_PORT} ^443$ RewriteRule ^robots.txt$ robots_ssl.txt
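    One caveat worth stating plainly: without a certificate for the domain, the browser warning happens during the TLS handshake, before Apache ever evaluates a rewrite rule, so no .htaccess rule can remove it. For completeness, the usual HTTPS-to-HTTP rule (mysite.com is the placeholder from the question) looks like this sketch:

        RewriteEngine On
        RewriteCond %{HTTPS} on
        RewriteRule ^(.*)$ http://www.mysite.com/$1 [R=301,L]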

    Read the article

  • Alternative to the Google Maps API, so that I can use it on an HTTPS/SSL-encrypted website

    - by Zeeshan Rang
    I did find a solution for this on Google map api page, and I made the following changes as mentioned in it. 1.Use Google Maps API for Flash version 1.9a or later. 2.Add the following to your Flash application before the map is instantiated: Security.allowInsecureDomain("maps.googleapis.com"); Ref:http://code.google.com/apis/maps/faq.html#flash_ssl My code looks like this, after the changes: <mx:TitleWindow verticalAlign="middle" horizontalAlign="center" xmlns:mx="http://www.adobe.com/2006/mxml" xmlns:maps="com.google.maps.*" width="1000" height="600" layout="absolute" backgroundAlpha="0" borderAlpha="0" borderThickness="0" showCloseButton="true" close="PopUpManager.removePopUp(this);"> <mx:VBox width="70%" height="100%" > <maps:Map id="map" key="ABQIAAAA0L1JEoR6rWjh-BBQnLMtMBSVuZ5VlaqlIqiYPFMK_I5M2UTmHhSq_BJxLHiYcTDW9RxSF6HewNY7uA" mapevent_mapready="onMapReady(event)" width="100%" height="100%" /> </mx:VBox> <mx:Script> <![CDATA[ //import flashx.textLayout.formats.Direction; import mx.effects.AddItemAction; //import flashx.textLayout.factory.TruncationOptions; import mx.controls.Alert; import mx.managers.PopUpManager; import mx.rpc.events.ResultEvent; import com.adobe.serialization.json.JSON; import flash.events.Event; import com.google.maps.*; import com.google.maps.overlays.*; import com.google.maps.services.*; import com.google.maps.controls.ZoomControl; import com.google.maps.controls.PositionControl; import com.google.maps.controls.MapTypeControl; import com.google.maps.services.ClientGeocoderOptions; import com.google.maps.LatLng; import com.google.maps.Map; import com.google.maps.MapEvent; import com.google.maps.MapMouseEvent; import com.google.maps.MapType; import com.google.maps.services.ClientGeocoder; import com.google.maps.services.GeocodingEvent; import com.google.maps.overlays.Marker; import com.google.maps.overlays.MarkerOptions; import com.google.maps.InfoWindowOptions; private function onMapReady(event:MapEvent):void { Security.allowInsecureDomain("maps.googleapis.com"); map.setCenter(new LatLng(41.651505,-72.094455), 13, MapType.NORMAL_MAP_TYPE); map.addControl(new ZoomControl()); map.addControl(new PositionControl()); map.addControl(new MapTypeControl()); map.enableScrollWheelZoom(); map.enableContinuousZoom(); } ]]> </mx:Script> </mx:TitleWindow> But i still get the following error using this: The requested URL /mapsapi/publicapi?file=flashapi&url=https%3A%2F%2Fvirtual.c7beta.com%2Findex_cloud.swf&key=ABQIAAAA0L1JEoR6rWjh-BBQnLMtMBTW_Qkp6J0z76Etz3qzo8Hg3HdUQhSnD6lqp53NB0UrBmg5Xm2DlazWqA&v=1.18&flc=xt was not found on this server. Any suggestions to what am I doing wrong here, what should i do to make this work. Regards zee

    Read the article

  • Silverlight, WCF service, integrated security AND ssl/https not possible?

    - by Flores
    I have this setup that works perfectly when using http: a Silverlight 3 client, a .NET 4 WCF service hosted in IIS with basicHttpBinding, and integrated security on the site. When setting https to required on the website, the setup stops working. Using the wcftestclient on the URI I get the message: The HTTP request is unauthorized with client authentication scheme 'Anonymous'. The authentication header received from the server was 'Negotiate,NTLM'. The remote server returned an error: (401) Unauthorized. Maybe this makes sense because the wcftestclient does not pass credentials? In the web.config the security mode for the service binding is set to 'Transport'. The Silverlight client is created like this: BasicHttpBinding basicHttpBinding = new BasicHttpBinding(); basicHttpBinding.Security.Mode = BasicHttpSecurityMode.Transport; var serviceClient = new ImportServiceClient(basicHttpBinding, serviceAddress); The service address of course starts with https:// And the Silverlight client reports this error: The provided URI scheme 'https' is invalid; expected 'http'. Parameter name: via Remember, switching it back to http (and setting security mode to 'TransportCredentialOnly') makes everything work again. Is the setup I want even supported? If so, how should it be configured?
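    One common cause of the "provided URI scheme 'https' is invalid" error is that the HTTPS endpoint in web.config is not actually using a Transport-secured binding configuration (for example, the endpoint element omits bindingConfiguration and falls back to the default, HTTP-only binding). A hedged server-side sketch; the binding name, contract name and NTLM credential type below are illustrative, not taken from the question:

        <basicHttpBinding>
          <binding name="secureBinding">
            <!-- Transport = HTTPS; Ntlm/Windows = integrated security -->
            <security mode="Transport">
              <transport clientCredentialType="Ntlm" />
            </security>
          </binding>
        </basicHttpBinding>
        ...
        <endpoint address="" binding="basicHttpBinding"
                  bindingConfiguration="secureBinding"
                  contract="MyNamespace.IImportService" />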

    Read the article

  • How do I ignore an "invalid" SSL certificate in Objective-C?

    - by ipwnstuff
    Currently I have: NSArray* array = [NSArray arrayWithObjects:@"auth.login",@"username",@"password", nil]; NSData* packed_array = [array messagePack]; NSURL* url = [NSURL URLWithString:@"https://192.168.1.149:3790/api/1.0"]; NSMutableURLRequest* request = [NSMutableURLRequest requestWithURL:url]; [request setHTTPMethod:@"POST"]; [request setValue:@"RPC Server" forHTTPHeaderField:@"Host"]; [request setValue:@"binary/message-pack" forHTTPHeaderField:@"Content-Type"]; [request setValue:[NSString stringWithFormat:@"%d",[packed_array length]] forHTTPHeaderField:@"Content-Length"]; [request setHTTPBody:packed_array]; NSURLResponse *response; NSError *error; responseData = [NSMutableData dataWithData:[NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error]]; NSLog(@"response data: %@",[responseData messagePackParse]); NSLog(@"error: %@",error); - (BOOL)connection:(NSURLConnection *)connection canAuthenticateAgainstProtectionSpace:(NSURLProtectionSpace *)protectionSpace { NSLog(@"called canAuthenticateAgainstProtectionSpace"); return [protectionSpace.authenticationMethod isEqualToString:NSURLAuthenticationMethodServerTrust]; } - (void)connection:(NSURLConnection *)connection didReceiveAuthenticationChallenge:(NSURLAuthenticationChallenge *)challenge { NSLog(@"called didReceiveAuthenticationChallenge"); [challenge.sender useCredential:[NSURLCredential credentialForTrust:challenge.protectionSpace.serverTrust] forAuthenticationChallenge:challenge]; } Which returns "Error Domain=NSURLErrorDomain Code=-1202 "The certificate for this server is invalid"…" How should I be implementing the answer from this question?
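    The two delegate methods shown never run because sendSynchronousRequest:returningResponse:error: does not use a delegate at all. A hedged sketch of the usual change (development use only; skipping certificate validation is unsafe in production): create an asynchronous connection with the same object as its delegate, and the challenge callbacks above will then fire:

        // Asynchronous connection: starts loading immediately and calls the delegate
        // methods (canAuthenticateAgainstProtectionSpace:, didReceiveAuthenticationChallenge:,
        // connection:didReceiveData:, etc.) on self.
        NSURLConnection *connection =
            [[NSURLConnection alloc] initWithRequest:request delegate:self];
        // Accumulate the body in connection:didReceiveData: into responseData
        // instead of taking the return value of sendSynchronousRequest:.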

    Read the article

  • Problem setting up Master-Master Replication in MySQL

    - by Andrew
    I am attempting to setup Master-Master Replication on two MySQL database servers. I have followed the steps in this guide, but it fails in the middle of Step 4 with SHOW MASTER STATUS; It simply returns an empty set. I get the same 3 errors in both servers' logs. MySQL errors on SQL1: [ERROR] Failed to open the relay log './sql1-relay-bin.000001' (relay_log_pos 4) [ERROR] Could not find target log during relay log initialization [ERROR] Failed to initialize the master info structure MySQL Errors on SQL2: [ERROR] Failed to open the relay log './sql2-relay-bin.000001' (relay_log_pos 4) [ERROR] Could not find target log during relay log initialization [ERROR] Failed to initialize the master info structure The errors make no sense because I'm not referencing those files in any of my configurations. I'm using Ubuntu Server 10.04 x64 and my configuration files are copied below. I don't know where to go from here or how to troubleshoot this. Please help. Thanks. /etc/mysql/my.cnf on SQL1: # # The MySQL database server configuration file. # # You can copy this to one of: # - "/etc/mysql/my.cnf" to set global options, # - "~/.my.cnf" to set user-specific options. # # One can use all long options that the program supports. # Run program with --help to get a list of available options and with # --print-defaults to see which it would actually understand and use. # # For explanations see # http://dev.mysql.com/doc/mysql/en/server-system-variables.html # This will be passed to all mysql clients # It has been reported that passwords should be enclosed with ticks/quotes # escpecially if they contain "#" chars... # Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] # # * Basic Settings # # # * IMPORTANT # If you make changes to these settings and your system uses apparmor, you may # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld. # user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. bind-address = <SQL1's IP> # # * Fine Tuning # key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP #max_connections = 100 #table_cache = 64 #thread_concurrency = 10 # # * Query Cache Configuration # query_cache_limit = 1M query_cache_size = 16M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. # As of 5.1 you can enable the log at runtime! #general_log_file = /var/log/mysql/mysql.log #general_log = 1 log_error = /var/log/mysql/error.log # Here you can see queries with especially long duration #log_slow_queries = /var/log/mysql/mysql-slow.log #long_query_time = 2 #log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. 
server-id = 1 replicate-same-server-id = 0 auto-increment-increment = 2 auto-increment-offset = 1 master-host = <SQL2's IP> master-user = slave_user master-password = "slave_password" master-connect-retry = 60 replicate-do-db = db1 log-bin= /var/log/mysql/mysql-bin.log binlog-do-db = db1 binlog-ignore-db = mysql relay-log = /var/lib/mysql/slave-relay.log relay-log-index = /var/lib/mysql/slave-relay-log.index expire_logs_days = 10 max_binlog_size = 500M # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # # * Security Features # # Read the manual, too, if you want chroot! # chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 16M # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/ /etc/mysql/my.cnf on SQL2: # # The MySQL database server configuration file. # # You can copy this to one of: # - "/etc/mysql/my.cnf" to set global options, # - "~/.my.cnf" to set user-specific options. # # One can use all long options that the program supports. # Run program with --help to get a list of available options and with # --print-defaults to see which it would actually understand and use. # # For explanations see # http://dev.mysql.com/doc/mysql/en/server-system-variables.html # This will be passed to all mysql clients # It has been reported that passwords should be enclosed with ticks/quotes # escpecially if they contain "#" chars... # Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] # # * Basic Settings # # # * IMPORTANT # If you make changes to these settings and your system uses apparmor, you may # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld. # user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. bind-address = <SQL2's IP> # # * Fine Tuning # key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP #max_connections = 100 #table_cache = 64 #thread_concurrency = 10 # # * Query Cache Configuration # query_cache_limit = 1M query_cache_size = 16M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. # As of 5.1 you can enable the log at runtime! 
#general_log_file = /var/log/mysql/mysql.log #general_log = 1 log_error = /var/log/mysql/error.log # Here you can see queries with especially long duration #log_slow_queries = /var/log/mysql/mysql-slow.log #long_query_time = 2 #log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. server-id = 2 replicate-same-server-id = 0 auto-increment-increment = 2 auto-increment-offset = 2 master-host = <SQL1's IP> master-user = slave_user master-password = "slave_password" master-connect-retry = 60 replicate-do-db = db1 log-bin= /var/log/mysql/mysql-bin.log binlog-do-db = db1 binlog-ignore-db = mysql relay-log = /var/lib/mysql/slave-relay.log relay-log-index = /var/lib/mysql/slave-relay-log.index expire_logs_days = 10 max_binlog_size = 500M # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # # * Security Features # # Read the manual, too, if you want chroot! # chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 16M # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/
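    A hedged checklist that usually resolves these two symptoms: an empty SHOW MASTER STATUS means binary logging was not active when mysqld started (check that log-bin sits in the [mysqld] section and the server was restarted), and the "Failed to open the relay log" errors typically mean stale relay-log/master.info state from an earlier configuration is still in the datadir. The credentials and host below are the placeholders from the question; read the real log file name and position from SHOW MASTER STATUS on the other server:

        -- on each server, after fixing my.cnf and restarting mysqld
        SHOW VARIABLES LIKE 'log_bin';   -- must report ON, or SHOW MASTER STATUS stays empty

        STOP SLAVE;
        RESET SLAVE;                      -- discard stale relay logs / master.info

        CHANGE MASTER TO
            MASTER_HOST     = '<other server IP>',
            MASTER_USER     = 'slave_user',
            MASTER_PASSWORD = 'slave_password',
            MASTER_LOG_FILE = 'mysql-bin.000001',  -- value from SHOW MASTER STATUS
            MASTER_LOG_POS  = 4;                   -- value from SHOW MASTER STATUS

        START SLAVE;
        SHOW SLAVE STATUS\G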

    Read the article

  • Exception on SslStream.AuthenticateAsClient (The message was badly formatted)

    - by Noms
    I have got a weird problem going on. I am trying to connect to an Apple server via TCP/SSL, using a client certificate provided by Apple for push notifications. I installed the certificate on my server (Win2k3) in both the local Trusted Root certificates and local Personal certificates folders. Now I have a class library that deals with that connection; when I call this class library from a console application running on the server it works absolutely fine, but when I call that class library from an ASP.NET page or ASMX web service I get the following exception: A call to SSPI failed, see inner exception. The message received was unexpected or badly formatted. This is my code: X509Certificate cert = new X509Certificate(certificateLocation, certificatePassword); X509CertificateCollection certCollection = new X509CertificateCollection(new X509Certificate[1] { cert }); // OPEN the new SSL Stream SslStream ssl = new SslStream(client.GetStream(), false, new RemoteCertificateValidationCallback(ValidateServerCertificate), null); ssl.AuthenticateAsClient(ipAddress, certCollection, SslProtocols.Default, false); ssl.AuthenticateAsClient is where the error gets thrown. This is driving me nuts. If the console application can connect fine, there must be some problem with ASP.NET network-layer security that is failing the authentication... not sure; perhaps I need to add something, or some sort of security policy, in the web.config. Also, just to point out, I can connect fine on my local development machine with both the console application and the website. Anyone got any ideas?
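    A common reason this works from a console session but fails under ASP.NET is that the worker-process account cannot read the certificate's private key when it is loaded from a file. Loading it into the machine key store (and granting the app-pool identity access to it) is the usual workaround; a hedged sketch, reusing the variable names from the question:

        using System.Security.Cryptography.X509Certificates;

        // Load the push certificate so the private key lands in the machine key
        // store, which the ASP.NET worker process can actually read.
        X509Certificate2 cert = new X509Certificate2(
            certificateLocation,
            certificatePassword,
            X509KeyStorageFlags.MachineKeySet | X509KeyStorageFlags.PersistKeySet);

        X509CertificateCollection certCollection =
            new X509CertificateCollection(new X509Certificate[] { cert });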

    Read the article

  • Android: Download the Android SDK components for offline install

    - by Tawani
    Is it possible to download the Android SDK components for offline install without using the SDK Manager? The problem is I am behind a firewall which I have no control over, and both download URLs seem to be blocked (the SDK Manager throws a connection refused exception): https://dl-ssl.google.com/android/repository/repository.xml http://dl-ssl.google.com/android/repository/repository.xml Failed to fetch URL http://dl-ssl.google.com/android/repository/repository.xml, reason: Connection refused: connect

    Read the article

  • Hosted bug tracking system with mercurial repositories (Summary of options & request for opinions)

    - by Mark Booth
    The Question What hosted mercurial repository/bug tracking system or systems have you used? Would you recommend it to others? Are there serious flaws, either in the repository hosting or the bug tracking features that would make it difficult to recommend it? Do you have any other experiences with it or opinions of it that you would like to share? If you have used other non mercurial hosted repository/bug tracking systems, how does it compare? (If I understand correctly, the best format for this type of community-wiki style question is one answer per option, if you have experienced if several) Background I have been looking into options for setting up a bug/issue tracking database and found some valuable advice in this thread and this. But then I got to thinking that a hosted solution might not only solve the problem of tracking bugs, but might also solve the problem we have accessing our mercurial source code repositories while at customer sites around the world. Since we currently have no way to serve mercurial repositories over ssl, when I am at a customer site I have to connect my laptop via VPN to my work network and access the mercurial repositories over a samba share (even if it is just to synce twice a day). This is excruciatingly slow on high latency networks and can be impossible with some customers' firewalls. Even if we could run a TRAC or Redmine server here (thanks turnkey), I'm not sure it would be much quicker as our internet connection is over-stretched as it is. What I would like is for developers to be able to be able to push/pull to/from a remote repository, servicing engineers to be able to pull from a remote repository and for customers (both internal and external) to be able to submit bug/issue reports. Initial options The two options I found were Assembla and Jira. Looking at Assembla I thought the 'group' price looked reasonable, but after enquiring, found that each workspace could only contain a single repository. Since each of our products might have up to a dozen repositories (mostly for libraries) which need to be managed seperately for each product, I could see it getting expensive really quickly. On the plus side, it appears that 'users' are just workspace members, so you can have as many client users (people who can only submit support tickets and track their own tickets) without using up your user allocation. Jira only charges based on the number of users, unfortunately client users also count towards this, if you want them to be able to track their tickets. If you only want clients to be able to submit untracked issues, you can let them submit anonymously, but that doesn't feel very professional to me. More options Looking through MercurialHosting page that @Paidhi suggested, I've added the options which appear to offer private repositories, along with another that I found with a web search. Prices are as per their website today (29th March 2010). Corrections welcome in the future. Anyway, here is my summary, according to the information given on their websites: Assembla, http://www.assembla.com/, looks to be a reasonable price, but suffers only one repository per workspace, so three projects with 6 repos each would use up most of the spaces associated with a $99/month professional account (20 spaces). Bug tracking is based on Trac. Mercurial+Trac support was announced in a blog entry in 2007, but they only list SVN and Git on their Features web page. Cost: $24, $49, $99 & $249/month for 40, 40, unlimited, unlimited users and 1, 10, 20, 100 workspaces. 
SSL based push/pull? Website https login. BitBucket, http://bitbucket.org/plans/, is primarily a mercurial hosting site for open source projects, with SSL support, but they have an integrated bug tracker and they are cheap for private repositories. It has it’s own issues tracker, but also integrates with Lighthouse & FogBugz. Cost: $0, $5, $12, $50 & $100/month for 1, 5, 15, 25 & 150 private repositories. SSL based push/pull. No https on website login, but supports OpenID, so you can chose an OpenID provider with https login. Codebase HQ, http://www.codebasehq.com/, supports Hg and is almost as cheap as BitBucket. Cost: £5, £13, £21 & £40/month for 3, 15, 30 & 60 active projects, unlimited repositories, unlimited users (except 10 users at £5/month) and 0.5, 2, 4 & 10GB. SSL based push/pull? Website https login? Firefly, http://www.activestate.com/firefly/, by ActiveState looks interesting, but the website is a little light on details, such as whether you can only have one repository per project or not. Cost: $9, $19, & £39/month for 1, 5 & 30 private projects, with a 0.5, 1.5 & 3 GB storage limit. SSL based push/pull? Website https login. Jira, http://www.atlassian.com/software/jira/, isn’t limited by the number of repositories you can have, but by ‘user’. It could work out quite expensive if we want client users to be able to track their issues, since they would need a full user account to be created for them. Also, while there is a Mercurial extension to support jira, there is no ‘Advanced integration’ for Mercurial from Atlassian Fisheye. Cost: $150, $300, $400, $500, $700/month for 10, 25, 50, 100, 100+ users. SSL based push/pull? Website https login. Kiln & FogBugz On Demand, http://fogcreek.com/Kiln/IntrotoOnDemand.html, integrates Kilns mercurial DVCS features with FogBugz, where the combined package is much cheaper than the component parts. Also, the Fogbugz integration is supposedly excellent. *8’) Cost: £30/developer/month ($5/d/m more than either on their own). SSL based push/pull? SourceRepo, http://sourcerepo.com/, also supports HG and is even cheaper than BitBucket & Codebase. Cost: $4, $7 & $13/month for 1, unlimited & unlimited repositories/trac/redmine instances and 500MB, 1GB & 3GB storage. SSL based push/pull. Website https login. Edit: 29th March 2010 & Bounty I split this question into sections, made the questions themselves more explicit, added other options from the research I have done since my first posting and made this community wiki, since I now understand what CW is for. *8') Also, I've added a bounty to encourage people to offer their opinions. At the end of the bounty period, I will award the bounty to whoever writes the best review (good or bad), irrespective of the number of up/down votes it gets. Given that it's probably more important to avoid bad providers than find the absolute best one, 'bad reviews' could be considered more important than good ones.

    Read the article

  • puppet master REST API returns 403 when running under Passenger, but works when the master runs from the command line

    - by Anadi Misra
    I am using the standard auth.conf provided in puppet install for the puppet master which is running through passenger under Nginx. However for most of the catalog, files and certitifcate request I get a 403 response. ### Authenticated paths - these apply only when the client ### has a valid certificate and is thus authenticated # allow nodes to retrieve their own catalog path ~ ^/catalog/([^/]+)$ method find allow $1 # allow nodes to retrieve their own node definition path ~ ^/node/([^/]+)$ method find allow $1 # allow all nodes to access the certificates services path ~ ^/certificate_revocation_list/ca method find allow * # allow all nodes to store their reports path /report method save allow * # unconditionally allow access to all file services # which means in practice that fileserver.conf will # still be used path /file allow * ### Unauthenticated ACL, for clients for which the current master doesn't ### have a valid certificate; we allow authenticated users, too, because ### there isn't a great harm in letting that request through. # allow access to the master CA path /certificate/ca auth any method find allow * path /certificate/ auth any method find allow * path /certificate_request auth any method find, save allow * path /facts auth any method find, search allow * # this one is not stricly necessary, but it has the merit # of showing the default policy, which is deny everything else path / auth any Puppet master however does not seems to be following this as I get this error on client [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com [sudo] password for amisr1: Starting Puppet client version 3.0.1 Warning: Unable to fetch my node definition, but the agent run will continue: Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110 Info: Retrieving plugin Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110 Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110 Using cached catalog Error: Could not retrieve catalog; skipping run Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110 and the server logs show XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? 
HTTP/1.1" 403 93 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby" thefile server conf file is as follows (and goin by what they say on puppet site, It is better to regulate access in auth.conf for reaching file server and then allow file server to server all) [files] path /apps/puppet/files allow * [private] path /apps/puppet/private/%H allow * [modules] allow * I am using server and client version 3 Nginx has been compiled using the following options nginx version: nginx/1.3.9 built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) TLS SNI support enabled configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/ and the standard nginx puppet master conf server { ssl on; listen 8140 ssl; server_name _; passenger_enabled on; passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn; passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify; passenger_min_instances 5; access_log logs/puppet_access.log; error_log logs/puppet_error.log; root /apps/nginx/html/rack/public; ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem; ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem; ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem; ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem; ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA; ssl_prefer_server_ciphers on; ssl_verify_client optional; ssl_verify_depth 1; ssl_session_cache shared:SSL:128m; ssl_session_timeout 5m; } Puppet is picking up the correct settings from the files mentioned because config print command points to /etc/puppet [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf async_storeconfigs = false authconfig = /etc/puppet/namespaceauth.conf autosign = /etc/puppet/autosign.conf catalog_cache_terminus = store_configs confdir = /etc/puppet config = /etc/puppet/puppet.conf config_file_name = puppet.conf config_version = "" configprint = all configtimeout = 120 dblocation = /var/lib/puppet/state/clientconfigs.sqlite3 deviceconfig = /etc/puppet/device.conf fileserverconfig = /etc/puppet/fileserver.conf genconfig = false hiera_config = /etc/puppet/hiera.yaml localconfig = /var/lib/puppet/state/localconfig name = config rest_authconfig = /etc/puppet/auth.conf storeconfigs = true storeconfigs_backend = puppetdb tagmap = /etc/puppet/tagmail.conf thin_storeconfigs = false I checked the firewall rules on this VM; 80, 443, 8140, 3000 are allowed. Do I still have to tweak any specifics to auth.conf for getting this to work?

    Read the article

  • Compiling OpenSSL for boost asio for Microsoft Visual Studio 2010

    - by user560106
    I compiled boost with bjam, and then I compiled OpenSSL. Both of them work separately. I set up the links in Visual Studio 10 to point to my OpenSSL library directory. But when I attempt to compile example boost ssl asio programs I get 44 unresolved external linker errors like this one: 1testing.obj : error LNK2019: unresolved external symbol _SSLv23_server_method referenced in function "public: void __thiscall boost::asio::ssl::detail::openssl_context_service::create(struct ssl_ctx_st * &,enum boost::asio::ssl::context_base::method)" (?create@openssl_context_service@detail@ssl@asio@boost@@QAEXAAPAUssl_ctx_st@@W4method@context_base@345@@Z) Can you please give me step-by-step instructions on properly linking OpenSSL to boost? Thank you so much

    Read the article

  • Configuring zend to use gmail smtp: Windows Apache dev-environment: "Could not open socket" error - repeatedly - going mad

    - by confused
    My dev environment is Win XP SP2 / Apache 2.something PHP 5.something_or_other My prod env is Linux Ubuntu / Apache 2.something_else PHP 5.something_or_other_else The code is all Zend Framework Version: 1.11.1 I can telnet to: smtp.gmail.com 465 from the PC. I have Mercury configured on my PC to use gmail as it's smtp host and it works just fine. (MercuryC SMTP Client). Mercury is set to use port 465 and SSL on smtp.gmail.com -- No problem. Zend mail works just fine on my production environment using the production mail server to send out mail. It's the same basic application.ini but with different values in the mail variables. On my local PC dev setup, my application.ini contains: (same values as I use in Mercury) mail.templatePath = APPLICATION_PATH "/emails" mail.sender.name = "myAccount" mail.sender.email = "[email protected]" mail.host = smtp.gmail.com mail.smtp.auth = "login" mail.smtp.username = "[email protected]" mail.smtp.password = "myPassWord" mail.smtp.ssl = "ssl" mail.smtp.port = 465 I have been doing trial and error for hours trying to get a single email out with no success. In every case, regardless of server or port settings it throws an error and reports: Could not open socket. Both Apache and Mercury Core are exceptions in my Windows Firewall config. Mercury seems to be having no problem. I have searched stackoverflow before posting this and have been googling for hours -- with no success. I am slowly losing my mind I would be very much obliged for any tip as to what might be wrong. Thanks for reading. =================== BTW When I use the SAME application.ini values on my local PC as on the production host, I get the same "Could not open socket" error. Those values are: mail.templatePath = APPLICATION_PATH "/emails" mail.sender.name = "otherUser" mail.sender.email = "[email protected]" mail.host = smtp.otherServer.com mail.smtp.auth = "login" mail.smtp.username = "[email protected]" mail.smtp.password = "otherPAssWord" mail.smtp.ssl = "ssl" mail.smtp.port = 465 I know these work in the production (Ubuntu) environment. I'm utterly baffled.
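    On Windows, "Could not open socket" for an ssl:// transport on port 465 is very often just the OpenSSL extension not being loaded in the php.ini that Apache uses (a telnet test, or Mercury working, proves nothing about PHP's stream wrappers). A quick hedged check, run from a page served by Apache rather than from the CLI:

        <?php
        // Both must come back true / include 'ssl' for Zend_Mail's SSL SMTP transport to work.
        var_dump(extension_loaded('openssl'));
        var_dump(in_array('ssl', stream_get_transports()));
        // If not: enable extension=php_openssl.dll in Apache's php.ini and restart Apache.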

    Read the article

  • Stuck with luasec LUA secure socket

    - by PeterMmm
    This example code fails: require("socket") require("ssl") -- TLS/SSL server parameters local params = { mode = "server", protocol = "sslv23", key = "server.key", certificate = "server.crt", cafile = "server.key", password = "123456", verify = {"peer", "fail_if_no_peer_cert"}, options = {"all", "no_sslv2"}, ciphers = "ALL:!ADH:@STRENGTH", } local socket = require("socket") local server = socket.bind("*", 8888) local client = server:accept() client:settimeout(10) -- TLS/SSL initialization local conn,emsg = ssl.wrap(client, params) print(emsg) conn:dohandshake() -- conn:send("one line\n") conn:close() When I request https://localhost:8888/ the output is: error loading CA locations ((null)) lua: a.lua:25: attempt to index local 'conn' (a nil value) stack traceback: a.lua:25: in main chunk [C]: ? Not very much info. Any idea how to track down the problem?
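    The likely culprit is that cafile points at the server's private key ("server.key"), so LuaSec cannot load the CA locations, ssl.wrap returns nil plus the error message, and indexing conn then fails. A hedged sketch (ca.crt is a hypothetical CA certificate path, and both return values are checked before use):

        local params = {
          mode        = "server",
          protocol    = "sslv23",
          key         = "server.key",
          certificate = "server.crt",
          cafile      = "ca.crt",        -- must be a CA certificate, not the private key
          password    = "123456",
          verify      = {"peer", "fail_if_no_peer_cert"},
          options     = {"all", "no_sslv2"},
          ciphers     = "ALL:!ADH:@STRENGTH",
        }

        local conn, emsg = ssl.wrap(client, params)
        if not conn then error("ssl.wrap failed: " .. tostring(emsg)) end

        local ok, err = conn:dohandshake()
        if not ok then error("handshake failed: " .. tostring(err)) end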

    Read the article

  • Patterns / Solutions to complicated Feature Management

    - by yclian
    Hi all, My company develops CDN / Web-Hosting solution. We have a middleware that's served as a business logic layer and exposes web service for the front-end. I would like to seek for a clean solution to feature management - there're uncertainties and ugly workarounds/solutions in the software that the dev would say "when it happens or is broken, we will fix it". For example, here're the following features that a web publisher can have: Sites limit Bandwidth limit SSL feature + SSL configuration per site If we downgrade a web publisher, when he's having 10 sites, down to 5 sites, we can choose not to suspend the rest of the 5 sites, or we shall prompt for suspension before the downgrade. For the case of bandwidth limit, the downgrade is easy, when the bandwidth check happens, if the publisher has it exceeded, then we will suspend his account. For the case of SSL feature. Every SSL configuration is tied to a site, what shall happen to these configuration object when the SSL feature is downgraded from enabled to disabled? So as you can see, there're many different situations and there are different ways of handling it. I can make a system that examines the impacts and prompts the user to make changes before the downgrade/upgrade. Or a system that ignores the impacts and just upgrade/downgrade. Bad. Or a system designed in a way that the client code need to be aware of the complex feature matrix (or I can expose a helper to the client code to check if a feature is not DEFUNCT) There can be many ways that I am still thinking but puzzled. I am wondering, how would you tackle this issue and is there any recommended patterns or books or software that you think I can refer to? Appreciate your help.

    Read the article

  • Compiling Wanderlust for Windows and using it for Gmail

    - by User1
    I'm trying to get Wanderlust working in Windows to connect to Gmail. Compiling the code is much more painful than expected. Here are the barriers so far: Can't download dependent packages: SEMI, APEL, and FLIM. I eventually found newer versions, but I'm not sure they will work. Anyone have the older versions? Needs make and install. I used MSYS and it seems to have compiled okay. SSL support. I was getting a "Cannot open load file: ssl" error. I found an ssl.el that comes with w3. So installed w3. Bash command in ssl.el: ssl-get-command is running something from /bin/sh (not a directory I have in Windows). I really don't want to refactor this code. Is there a better way? Others speak very highly of Wanderlust, so I want to give it a try. I feel like I'm almost there, but am pretty much worn out with all the crazy configuration I have to do. Does anyone have this working on Windows? I'm pretty sure it will work with Gmail, because of this post. But will it work in Windows too? If you have a few pointers, please help.

    Read the article

  • PHP imap_search not detecting all messages in gmail inbox

    - by Steve
    When I run a very simple imap_search on my GMail inbox, the search returns less messages than it should. Here is the script that anyone with a GMail account can run. $host = '{imap.gmail.com:993/imap/ssl}'; $user = 'foo'; $pass = 'bar'; $imapStream = imap_open($host,$user,$pass) or die(imap_last_error()); $messages = imap_search($imapStream,"ALL"); echo count($messages); imap_close($imapStream); This returns 39 messages. But, I've got 100 messages in my inbox, some in conversations, some forwarded from another account (SquirrelMail, FWIW). Can anyone duplicate these results, and/or tell me what's going on? Other server strings I've tried, all returning the same results: {imap.gmail.com:993/imap/ssl/novalidate-cert} {imap.gmail.com:993/imap/ssl/novalidate-cert}INBOX {imap.gmail.com:993/imap/ssl}INBOX GMail's IMAP feature support: http://mail.google.com/support/bin/answer.py?hl=en&answer=78761
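    One likely explanation: Gmail's INBOX over IMAP only exposes messages currently in the Inbox, one per message rather than per conversation, while the web interface counts conversations (including your own replies and anything archived, which live in [Gmail]/All Mail). A hedged sketch that searches All Mail instead, reusing the credentials from the question:

        <?php
        $host = '{imap.gmail.com:993/imap/ssl}[Gmail]/All Mail';
        $imapStream = imap_open($host, $user, $pass) or die(imap_last_error());
        $messages = imap_search($imapStream, 'ALL');
        echo $messages ? count($messages) : 0;
        imap_close($imapStream);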

    Read the article

  • Provider not notified from cookbook_file

    - by wittyhandle
    I'm working on an ssl provider using Vagrant (1.0.5) and chef-solo (10.12.0) I have my provider, called ssl within a cookbook called gtm_cq, I define it as such in my cookbook's default recipe: gtm_cq_ssl "author" do # attributes will come later end I then have my cookbook_file like below that should notify my ssl provider's import action once it pushes the cert up to the server: cookbook_file "#{node[:cq][:ssl][:author_cert_location]}/foo.cer" do source "foo.cer" owner "crx" group "root" mode "0644" notifies :import, resources(:gtm_cq_ssl => "author") end When I run this, the foo.cer gets pushed up as expected, but the import action of my ssl provider is never called. The most I see of any reference is these couple of lines in the log (removed log headers): .. cookbook_file[/opt/cq5/author/foo.cer] sending import action to gtm_cq_ssl[author] (delayed) .. Processing gtm_cq_ssl[author] action import (gtm_cq::author line 34) There's a large very obvious log statement as well as the use of another cookbook_file for a test file to push something up to the server. No log statement, no test file pushed. I'm certain too that the foo.cer file is removed from the server before each test. I found that if I edit my notifies line like so with :immediately notifies :import, resources(:gtm_cq_ssl => "author"), :immediately It seems to work. And I suppose this is ok in my particular case, but it would seem something is not right if that's the only way I can call my provider. Any help on this would be greatly appreciated. Thanks!

    Read the article
