Search Results

Search found 22893 results on 916 pages for 'client scripting'.

  • Email server can send internal, but messages never arrive at external recipients

    - by Chase Florell
    I'm running MailEnable on my server, and have been for many years. Recently we had an attack on our server, and I was able to close the hole. Since then, our mail server doesn't seem to be sending mail out.

    If I send an email from myself to another account hosted on the server, the email arrives as expected. If I send an email from my Gmail account to my business account, the email also arrives as expected. The problem comes when I send from my business account to an external domain. I tried the following:

    - Gmail.com
    - Hotmail.com
    - Shaw.ca

    When I send to any of the above:

    - the message leaves my client as expected,
    - the logs appear to accept and forward on the message,
    - the SMTP outbound queue is empty,
    - the message never arrives.

    I have checked our domain with mxtoolbox.com and senderbase.org, and neither of them is reporting any problems with our domain. I have ensured that port 25 is open (along with the other standard ports).

    Here is one of the log entries from the SMTP connector:

        11/05/13 12:10:00 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 220 mx1.example.com ESMTP MailEnable Service, Version: 6.81--6.81 ready at 11/05/13 12:10:00 0 0
        11/05/13 12:10:00 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 EHLO EHLO ASSP.nospam 250-mx1.example.com [127.0.0.1], this server offers 6 extensions 159 18
        11/05/13 12:10:00 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 EHLO EHLO ASSP.nospam 250-mx1.example.com [127.0.0.1], this server offers 6 extensions 159 18
        11/05/13 12:10:01 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 AUTH AUTH LOGIN 334 VXNlcm5hbWU6 18 12
        11/05/13 12:10:01 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 AUTH {blank} 334 UGFzc3dvcmQ6 18 26 [email protected]
        11/05/13 12:10:01 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 AUTH Y29sb25lbGZhY2U= 235 Authenticated 19 18 [email protected]
        11/05/13 12:10:01 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 MAIL MAIL FROM:<[email protected]> 250 Requested mail action okay, completed 43 31 [email protected]
        11/05/13 12:10:01 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 RCPT RCPT TO:<[email protected]> 250 Requested mail action okay, completed 43 35 [email protected]
        11/05/13 12:10:01 SMTP-IN 494C0AF55CD0400FB90FD5E6525BC885.MAI 1312 127.0.0.1 DATA DATA 354 Start mail input; end with <CRLF>.<CRLF> 46 6 [email protected]

    Here are the headers of the sent message:

        X-Assp-Version: 1.7.5.7(1.0.07) on ASSP.nospam
        X-Assp-ID: ASSP.nospam 78601-04523
        X-Assp-Intended-For: [email protected]
        X-Assp-Envelope-From: [email protected]
        Received: from [10.10.1.101] ([68.147.245.149] helo=[10.10.1.101]) with IPv4:587 by ASSP.nospam; 5 Nov 2013 12:10:00 -0700
        From: Chase Florell <[email protected]>
        Content-Type: text/plain
        Content-Transfer-Encoding: 7bit
        Subject: Test Message
        Message-Id: <[email protected]>
        Date: Tue, 5 Nov 2013 12:10:18 -0700
        To: Chase Florell <[email protected]>
        Mime-Version: 1.0 (Mac OS X Mail 7.0 \(1816\))
        X-Mailer: Apple Mail (2.1816)

    Where else can I check to see if there is something broken? What could cause a problem like this, whereby the message appears to send, but never arrives and never returns a bounce?
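
    A quick way to rule out upstream filtering of outbound port 25 - a common cause of "accepted locally, never arrives, no bounce", and a hedged guess here rather than anything the logs above confirm - is to open a raw SMTP session from the mail server itself to an external MX host. A minimal sketch, assuming the telnet client is installed and outbound DNS works; the MX host is one of Gmail's published ones:

        nslookup -type=mx gmail.com
        telnet gmail-smtp-in.l.google.com 25

    If no "220" banner ever comes back, outbound 25 is being blocked somewhere (perhaps by the post-attack firewall rules), and messages would sit or vanish exactly as described. It is also worth reading the outbound connector log rather than the SMTP-IN lines quoted above, which only show the local ASSP proxy handing the message to MailEnable.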

  • Installing OpenLDAP: ldap_bind: Invalid credentials (49)

    - by Arcturus
    Hello. I've been trying to set up the OpenLDAP installed by default on Fedora 12, very unsuccessfully. My ultimate goal is to use LDAP authentication for user login and Apache, using the OpenLDAP server running on the same machine. The server is running, but the error I always get when I try to use ldapsearch or ldapadd is:

        ldap_bind: Invalid credentials (49)

    I've been following these tutorials, but none of them helped me:

    - http://www.howtoforge.com/openldap_fedora7
    - http://www.redhat.com/docs/manuals/linux/RHL-9-Manual/ref-guide/s1-ldap-quickstart.html
    - http://www.howtoforge.com/linux_ldap_authentication
    - http://docs.fedoraproject.org/deployment-guide/f12/en-US/html/s1-ldap-pam.html
    - http://www.openldap.org/doc/admin24/quickstart.html

    First, some components were already installed, and I installed these with yum:

        yum install openldap-servers openldap-devel

    Then, I created a basic slapd.conf file in /etc/openldap:

        database bdb
        suffix "dc=sniejana-sandbox,dc=com"
        rootdn "cn=root,dc=sniejana-sandbox,dc=com"
        rootpw {SSHA}cxdz55ygPu4T3ykg7dgu+L0VRvsFSeom
        directory /var/lib/ldap/sniejana-sandbox.com

    I obtained the rootpw with this command:

        slappasswd -s changeme

    I also created the /var/lib/ldap/sniejana-sandbox.com directory and made sure the entire contents of /var/lib/ldap were owned by the ldap user.

    I found two ldap.conf files, one in /etc and one in /etc/openldap. I don't know which is the right one. If I understood correctly, this file is used to configure the client. I put this in both:

        HOST localhost
        BASE dc=sniejana-sandbox,dc=com

    I then ran the server with:

        service slapd start

    It said OK. Most of the tutorials above say to use the command ldapsearch -D "cn=Manager,dc=my-domain,dc=com" -W to ensure that everything's working. When I execute this command, a password prompt appears, and after entering the password, I get the error:

        ldapsearch -D "cn=root,dc=sniejana-sandbox,dc=com" -W
        Enter LDAP password:
        ldap_bind: Invalid credentials (49)

    The same thing happens when trying to use ldapadd. I tried with an encrypted and an unencrypted password in slapd.conf; it doesn't change anything. Adding a -x for simple authentication doesn't change anything either.

    netstat -ap confirms the server is listening:

        tcp 0 0 *:ldap *:* LISTEN 4148/slapd
        tcp 0 0 *:ldap *:* LISTEN 4148/slapd

    ps -ef | grep slapd confirms the process is running:

        ldap 4148 1 0 15:22 ? 00:00:00 /usr/sbin/slapd -h ldap:/// -u ldap

    Running slaptest produces: config file testing succeeded.

    I read somewhere that the command ldapsearch -x -b '' -s base '(objectclass=*)' namingContext can confirm the server is running. It appears to work:

        # extended LDIF
        #
        # LDAPv3
        # base <> with scope baseObject
        # filter: (objectclass=*)
        # requesting: namingContext
        #

        # dn:

        # search result
        search: 2
        result: 0 Success

        # numResponses: 2
        # numEntries: 1

    I'm running out of ideas. Am I missing something obvious?
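
    One thing worth checking on Fedora 12 - a hedged guess, not something confirmed in the question: OpenLDAP 2.4 prefers the cn=config directory /etc/openldap/slapd.d over slapd.conf when it exists, so a hand-written slapd.conf can be silently ignored, making any rootdn/rootpw set there irrelevant. A sketch of how to test for this and, if needed, regenerate slapd.d from the edited slapd.conf:

        # if this directory exists, slapd was probably started from it,
        # not from slapd.conf
        ls /etc/openldap/slapd.d

        # rebuild slapd.d from slapd.conf (keeping a backup first)
        service slapd stop
        mv /etc/openldap/slapd.d /etc/openldap/slapd.d.bak
        slaptest -f /etc/openldap/slapd.conf -F /etc/openldap/slapd.d
        chown -R ldap:ldap /etc/openldap/slapd.d
        service slapd start

    Then retry the bind with ldapsearch -x -D "cn=root,dc=sniejana-sandbox,dc=com" -W.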

  • How do I combine static and dynamic DHCP leases on a Cisco router?

    - by Brad
    Basically, what I need is super similar to the unanswered Cisco forum question below: https://supportforums.cisco.com/message/3139749#3139749

    I have a Cisco 850 Series router. I have configured a DHCP pool for the 10.0.0.0/24 network. I have excluded 10.0.0.1 - 10.0.0.99 from the DHCP pool. I want to add a static DHCP pool for stuff, and I want DHCP to statically assign them the addresses of my choice below 100. Actually, I don't care what addresses I statically assign. They can be anything in the pool for all I care, I just want it to work.

    Why are you doing this? Just statically assign the IPs on the devices! I don't want to do this because I have some laptop users. They could obviously only use that static IP here. This isn't a problem if they could be bothered to change any location setting or something. They can't. So it HAS to be DHCP. It also has to be static IPs because I need to forward ports to them. I know, I know, this is weird, but it's an apartment LAN/WLAN, so this isn't exactly a typical use case.

    Relevant sections of config below:

        ip dhcp excluded-address 10.0.0.1 10.0.0.99
        !
        ip dhcp pool Internal-net
           import all
           network 10.0.0.0 255.255.255.0
           default-router 10.0.0.1
           domain-name 1770.local
           lease 7
        !
        ip dhcp pool static-pool
           import all
           origin file flash://staticmap
           default-router 10.0.0.1
           domain-name 1770.local

    Contents of staticmap:

        *time* Aug 5 2010 09:00 AM
        *version* 2
        !IP address     Type    Hardware address    Lease expiration
        10.0.0.100/24   1       001f.5b3e.d50a      Infinite
        *end*

    You can see here I was trying addresses outside the excluded-address range to see if that would make any difference.

    My testing machine's MAC:

        mainframe:~ brad$ ifconfig en1
        en1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
            ether 00:1f:5b:3e:d5:0a

    What shows up in the DHCP binding table:

        basestar#show ip dhcp binding
        Bindings from all pools not associated with VRF:
        IP address      Client-ID/          Lease expiration      Type
                        Hardware address/
                        User name
        10.0.0.112      0100.1f5b.3ed5.0a   Aug 12 2010 10:06 AM  Automatic

    What's up with the funny looking MAC in the DHCP binding table?? Is what I'm trying to accomplish basically impossible? Am I going about this the wrong way? All I want is to be able to port forward some ports to specific devices. The way I would do this with a consumer router is to do what I'm trying to do here: assign static DHCP to those devices, then configure PAT for ports on those addresses.
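
    For reference, the "funny looking MAC" is the client-identifier form of the address: DHCP prepends the media type (01 for Ethernet) to the hardware address, giving 01 + 001f.5b3e.d50a = 0100.1f5b.3ed5.0a. On IOS, a manual reservation is usually expressed as a one-host pool keyed on that identifier rather than an origin file. A hedged sketch - the pool name and reserved address below are illustrative, not from the question:

        ip dhcp pool LAPTOP-BRAD
           host 10.0.0.50 255.255.255.0
           client-identifier 0100.1f5b.3ed5.0a
           default-router 10.0.0.1
           domain-name 1770.local

    One such pool is defined per device that needs a fixed address, which then gives PAT a stable target.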

  • Rails 3 shows 404 error instead of index.html (nginx + unicorn)

    - by Miko
    I have an index.html in public/ that should be loading by default, but instead I get a 404 error when I try to access http://example.com/

        The page you were looking for doesn't exist.
        You may have mistyped the address or the page may have moved.

    This has something to do with nginx and unicorn, which I am using to power Rails 3. When I take unicorn out of the nginx configuration file, the problem goes away and index.html loads just fine.

    Here is my nginx configuration file:

        upstream unicorn {
          server unix:/tmp/.sock fail_timeout=0;
        }

        server {
          server_name example.com;
          root /www/example.com/current/public;
          index index.html;
          keepalive_timeout 5;

          location / {
            try_files $uri @unicorn;
          }

          location @unicorn {
            proxy_pass http://unicorn;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_redirect off;
          }
        }

    My config/routes.rb is pretty much empty:

        Advertise::Application.routes.draw do |map|
          resources :users
        end

    The index.html file is located in public/index.html, and it loads fine if I request it directly: http://example.com/index.html

    To reiterate, when I remove all references to unicorn from the nginx conf, index.html loads without any problems. I have a hard time understanding why this occurs, because nginx should be trying to load that file on its own by default.

    Here is the error stack from production.log:

        Started GET "/" for 68.107.80.21 at 2010-08-08 12:06:29 -0700
        Processing by HomeController#index as HTML
        Completed in 1ms

        ActionView::MissingTemplate (Missing template home/index with {:handlers=>[:erb, :rjs, :builder, :rhtml, :rxml, :haml], :formats=>[:html], :locale=>[:en, :en]} in view paths "/www/example.com/releases/20100808170224/app/views", "/www/example.com/releases/20100808170224/vendor/plugins/paperclip/app/views", "/www/example.com/releases/20100808170224/vendor/plugins/haml/app/views"):
          /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/paths.rb:14:in `find'
          /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/lookup_context.rb:79:in `find'
          /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/base.rb:186:in `find_template'
          /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/render/rendering.rb:45:in `_determine_template'
          /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/render/rendering.rb:23:in `render'
          /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/haml-3.0.15/lib/haml/helpers/action_view_mods.rb:13:in `render_with_haml'
          etc...

    The nginx error log for this virtual host comes up empty:

        2010/08/08 12:40:22 [info] 3118#0: *1 client 68.107.80.21 closed keepalive connection

    My guess is unicorn is intercepting the request to index.html before nginx gets to process it.
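
    A hedged reading of the log: Rails is clearly receiving "/" (HomeController#index fails), which means try_files is falling through to @unicorn. With try_files in play, the index directive is not consulted, and for the request "/" the bare $uri does not match an existing file, so the request goes to the app. A minimal sketch of a try_files line that checks the directory index explicitly before handing off to unicorn:

        location / {
          try_files $uri $uri/index.html @unicorn;
        }

    With this, "/" resolves to public/index.html when it exists, and everything else still proxies to unicorn.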

  • File/folder permissions and groups on Linux with Apache

    - by phobia
    I'm trying to learn about permissions on a Linux web server with Apache. Some clues to the system: the server I have to play around with is Fedora based, and Apache runs as apache:apache. To allow e.g. PHP to write to a file, the file needs to be chmod 777; 755 is not sufficient.

    What I'm wondering is basically how to set up permissions like they should be on e.g. a "shared web host". My main problem is that if I set permissions so that one user cannot access another's home folder, then Apache can't read from the public_html folder either. To keep the users out I need to set chmod 700, but to let Apache read I need at least execute on world, so a 701 basically works, but it doesn't really keep other users out. So I'm really stuck on what to do.

    I have been considering adding the apache user to the four groups below to avoid having to add the world execute flag, but is that a bad thing? Should it be the other way around, with the users in the groups below also being in the apache group? A sketch of the group approach follows the list.

    I was aiming at having 4 groups:

    1. webapp: same as dev_int, but is the only one that can go inside the webapp/live folder to e.g. do an update from the repo.
    2. dev_int: can read, write and execute everything in the "web root", including the two below, but nothing outside of the web root.
    3. dev_ext: can read, write and execute in all client folders, but cannot access anything outside of the webapp root.
    4. clients: basic FTP accounts. Each has a home folder with a public_html, but cannot access any other home folders.

    An example of the folder structure (no users in the aforementioned groups can go outside of the web root):

        webroot
            some_project        :dev_int only
            webapp
                live            :webapp only
                staging         :dev_int and :dev_ext
            clients             :dev_int and :dev_ext
                client_1        :dev_int, :dev_ext and client1:clients
                    public_html
            dev
                developer_1     developer_1:dev_int OR :dev_ext
                    public_html
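
    A minimal sketch of the group-based approach, assuming the apache user is added to a per-client group so home directories can stay 750 instead of world-executable; the group, user and path names here are illustrative, not from the question:

        # create the group and put both the client and apache in it
        groupadd client_1
        usermod -aG client_1 client1
        usermod -aG client_1 apache

        # home stays closed to other users; group members (incl. apache)
        # may traverse and read the web content
        chown -R client1:client_1 /home/client_1
        chmod 750 /home/client_1
        chmod -R g+rX /home/client_1/public_html

        # directories PHP must write to can then be group-writable (775)
        # instead of 777, if they are group-owned by a group apache is in
        chmod 775 /home/client_1/public_html/uploads

    The trade-off of putting apache in every client group is that a PHP script run by one vhost can read every other vhost's files through apache's group memberships, which is roughly the isolation level a basic shared host has without suexec/suPHP.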

  • Varnish, hide port number

    - by George Reith
    My setup is as follows:

    - OS: CentOS 6.2 running on an OpenVZ virtual machine.
    - Web server: Nginx listening on port 8080
    - Reverse proxy: Varnish listening on port 80

    The problem is that Varnish redirects my requests to port 8080, and this appears in the address bar like so: http://mysite.com:8080/directory/, causing relative links on the site to include the port number (8080) in the request and thus bypassing Varnish. The site is powered by WordPress.

    How do I allow Varnish to use Nginx as the backend on port 8080 without appending the port number to the address?

    Edit: Varnish is set up like so. I have told the Varnish daemon to listen on port 80 by default:

        VARNISH_VCL_CONF=/etc/varnish/default.vcl
        #
        # Default address and port to bind to
        # Blank address means all IPv4 and IPv6 interfaces, otherwise specify
        # a host name, an IPv4 dotted quad, or an IPv6 address in brackets.
        VARNISH_LISTEN_ADDRESS=
        VARNISH_LISTEN_PORT=80
        #
        # Telnet admin interface listen address and port
        VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1
        VARNISH_ADMIN_LISTEN_PORT=6082
        #
        # Shared secret file for admin interface
        VARNISH_SECRET_FILE=/etc/varnish/secret
        #
        # The minimum number of worker threads to start
        VARNISH_MIN_THREADS=1
        #
        # The maximum number of worker threads to start
        VARNISH_MAX_THREADS=1000
        #
        # Idle timeout for worker threads
        VARNISH_THREAD_TIMEOUT=120
        #
        # Cache file location
        VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin
        #
        # Cache file size: in bytes, optionally using k / M / G / T suffix,
        # or in percentage of available disk space using the % suffix.
        VARNISH_STORAGE_SIZE=1G
        #
        # Backend storage specification
        VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}"
        #
        # Default TTL used when the backend does not specify one
        VARNISH_TTL=120

    The VCL file that Varnish calls (through an include in default.vcl) consists of:

        backend playwithbits {
            .host = "127.0.0.1";
            .port = "8080";
        }

        acl purge {
            "127.0.0.1";
        }

        sub vcl_recv {
            if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") {
                set req.backend = playwithbits;
                set req.http.Host = regsub(req.http.Host, ":[0-9]+", "");
                if (req.request == "PURGE") {
                    if (!client.ip ~ purge) {
                        error 405 "Not allowed.";
                    }
                    return(lookup);
                }
                if (req.url ~ "^/$") {
                    unset req.http.cookie;
                }
            }
        }

        sub vcl_hit {
            if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") {
                if (req.request == "PURGE") {
                    set obj.ttl = 0s;
                    error 200 "Purged.";
                }
            }
        }

        sub vcl_miss {
            if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") {
                if (req.request == "PURGE") {
                    error 404 "Not in cache.";
                }
                if (!(req.url ~ "wp-(login|admin)")) {
                    unset req.http.cookie;
                }
                if (req.url ~ "^/[^?]+\.(jpeg|jpg|png|gif|ico|js|css|txt|gz|zip|lzma|bz2|tgz|tbz|html|htm)(\?.*|)$") {
                    unset req.http.cookie;
                    set req.url = regsub(req.url, "\?.*$", "");
                }
                if (req.url ~ "^/$") {
                    unset req.http.cookie;
                }
            }
        }

        sub vcl_fetch {
            if (req.http.Host ~ "^(.*\.)?playwithbits\.com$") {
                if (req.url ~ "^/$") {
                    unset beresp.http.set-cookie;
                }
                if (!(req.url ~ "wp-(login|admin)")) {
                    unset beresp.http.set-cookie;
                }
            }
        }
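
    For what it's worth, Varnish itself does not rewrite links: the :8080 in the address bar almost always comes from the backend's own redirects or from WordPress generating absolute URLs with the port it sees. Two hedged things to try - keep nginx from advertising its listen port in redirects:

        # nginx, in the server block listening on 8080
        port_in_redirect off;

    and make sure WordPress's own URLs carry no port (the table prefix wp_ below is the default and an assumption):

        -- run against the WordPress database if :8080 got saved into its URLs
        UPDATE wp_options SET option_value = 'http://mysite.com'
         WHERE option_name IN ('siteurl', 'home');

    With both in place, the backend never emits :8080 and the regsub on req.http.Host in vcl_recv becomes mostly redundant.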

  • How to configure Transparent IP Address Sharing (TAS) on a Mediatrix 4102 with DGW 2.0 firmware?

    - by Pascal Bourque
    I am making the switch to VoIP. I chose voip.ms as my service provider and the Mediatrix 4102 as my ATA. One reason why I chose the Mediatrix over other popular consumer ATAs is that it's supposed to be easy to place it in front of the router, so it can give priority to its own upstream traffic over the home network's upstream traffic. This is supposed to work transparently, with the ATA and router sharing the same public IP address (the one obtained from the modem). They call this feature Transparent IP Address Sharing, or TAS. Their promotional brochure describes it like this:

        The Mediatrix 4102 also uses its innovative TAS (Transparent IP Address
        Sharing) technology and an embedded PPPoE client to allow the PC (or router)
        connected to the second Ethernet port to have the same public IP address,
        eliminating the need for private IP addresses or address translations.

    I am interested in this feature because my router, an Apple Time Capsule, doesn't support QoS and cannot give priority to the voice packets if the ATA is behind the router. However, after hours of searching the web, reading the documentation, and good ol' trial and error, I haven't been able to configure the Mediatrix to run in this mode.

    Then I found a version of the manual that looks like it was for a previous version of the firmware (SIP), where there is an entire section dedicated to configuring TAS (starting at page 209). But my Mediatrix comes with the DGW 2.0 firmware, whose documentation does not mention TAS at all. So I tried to follow the TAS setup instructions from the SIP documentation and apply them to the DGW firmware, using the Variable Mapping Between SIP v5.0 and DGW v2.0 document as a reference, but had no success. Some required SIP variables don't have an equivalent in DGW. So it looks like the DGW firmware does not support TAS at all, or if it does, they are not doing anything to help us set it up.

    So right now, the Mediatrix is behind the router, and VoIP works perfectly except when my upstream bandwidth is saturated. My questions are:

    1. Is downgrading to the SIP firmware the only way to have my Mediatrix 4102 run in TAS mode?
    2. If not, does anybody know how to set up TAS on the DGW firmware?
    3. Is TAS mode the only way to give priority to the voice packets if I want to keep my current router (Apple Time Capsule)?

    Thanks!

  • PHPMyAdmin works with https Only (not http)

    - by 01010011
    Hi. I've been having a problem getting phpMyAdmin to work consistently on my XP desktop and laptop computers for months now. When I typed localhost/phpmyadmin into Chrome's address bar on both machines, I kept getting:

        Error #1045 Access Denied for user at root@localhost (using password yes)

    Eventually, I realized that I had two (2) versions of MySQL installed (XAMPP and MySQL Server 5.1) on both machines. So I uninstalled MySQL Server 5.1 from the desktop, and phpMyAdmin worked. But when I uninstalled MySQL Server 5.1 from my laptop, it did not work. I realized I could still get into the MySQL command-line client using my password, and that my databases were still intact. So I uninstalled and reinstalled XAMPP on the laptop, and phpMyAdmin worked after that.

    Now I have a new problem. phpMyAdmin's home page has a message at the bottom:

        Your configuration file contains settings (root with no password) that
        correspond to the default MySQL privileged account. Your MySQL server is
        running with this default, is open to intrusion, and you really should fix
        this security hole by setting a password for user 'root'.

    So I located the following lines in the config.inc.php file:

        /* Authentication type and info */
        $cfg['Servers'][$i]['auth_type'] = 'config';
        $cfg['Servers'][$i]['user'] = 'root';
        $cfg['Servers'][$i]['password'] = '';
        $cfg['Servers'][$i]['AllowNoPassword'] = true;

    and I just changed the last 2 lines as follows:

        $cfg['Servers'][$i]['password'] = 'mypassword';
        $cfg['Servers'][$i]['AllowNoPassword'] = false;

    As soon as I did that and tried to access phpMyAdmin again, I got the Error #1045 message again, but when I tried https://localhost/phpmyadmin/ I got a red page saying this site's certificate is not trusted, would you like to proceed anyway. And now it only works using https.

    I would really like to settle all my phpMyAdmin problems once and for all, so here are my questions:

    1. Why does my laptop only access phpMyAdmin via https?
    2. How do I change my password in my configuration file?

    Also, if you have any other tips regarding phpMyAdmin, they are very welcome. Thanks in advance.
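
    On question 2, one detail is easy to miss: the password in config.inc.php must match a password actually set in MySQL; changing the config alone produces exactly the #1045 error. A hedged sketch ('mypassword' is obviously a placeholder) - set the password in MySQL itself, and consider letting phpMyAdmin prompt for it instead of storing it:

        /* config.inc.php: prompt for credentials instead of hard-coding them */
        $cfg['Servers'][$i]['auth_type'] = 'cookie';
        $cfg['Servers'][$i]['AllowNoPassword'] = false;

    and in the MySQL command-line client:

        SET PASSWORD FOR 'root'@'localhost' = PASSWORD('mypassword');

    With auth_type cookie, the root-with-no-password warning goes away once the account has a real password, and nothing secret lives in the config file.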

  • Sent items listed by sender's name (my name!) not recipient in Outlook for Mac 2011

    - by user568458
    Outlook 2011 on a Mac (OS X 10.7 Lion), on a (mostly PC) office network using a Microsoft Exchange server.

    In Outlook 2011, in the network "Sent Items" folder, all emails are listed showing the sender's name (my name), not the recipients' names. That means every single email is headed with my own name, like this:

        My Name    08/09/2012    Some subject line          [flag]
        My Name    08/09/2012    Re: Another subject line   [flag]
        My Name    07/09/2012    Re: different subject line [flag]
        My Name    07/09/2012    different subject line     [flag]

    ...and so on. There's no clue at all as to who these emails were to, until I open each and every one. I guess it's reassuring to be told that every email that I sent was sent by me... and if I was ever to forget my own name while browsing my sent emails folder, it'd be super useful... But it's not very helpful for navigating sent emails.

    Is this normal for Office for Mac 2011? How can I fix this, so that the list shows the recipients' names instead of endlessly reminding me of my own name?

    Things I've found while researching this:

    - This can also happen in Outlook for Windows. On Windows, it's easily fixed by resetting the field. That method doesn't work on Mac, because the fields can't be selected separately.
    - It seems this can also happen in Apple Mail, and a lot of people seem stumped by this.
    - I can't find anything on this specific to Outlook 2011, or to Outlook for Mac in general.

    A simple guide on how to fix this would be the best answer, but I'd also welcome any knowledgeable thoughts from Microsoft Exchange Server people on whether this sounds like an Outlook settings issue which I can fix on my machine, or some issue relating to how the Mac gets data from the server and network. The fact that Apple Mail users have encountered the same issue with no apparent fix makes me think the problem might be in the network rather than the mail client - but that's way beyond my limited knowledge of these things.

    I don't know whether the local ("On my computer") Sent Items folder has the same problem, as it's configured so that no emails except drafts are ever stored in these local folders. Drafts saved locally are listed by recipient as expected.

  • Why does my mail get marked as spam?

    - by schoen
    I have the server "afspraakmanager.be". It matches everything needed not to be a spam server (and it isn't, by the way): it has reverse DNS, SPF, DKIM, ... But Hotmail marks it as spam. I think the problem is the SPF/DKIM records. When I send an email to my Gmail account, the headers say:

        Received-SPF: neutral (google.com: 2a02:348:8e:6048::1 is neither permitted nor denied by best guess record for domain of [email protected]) client-ip=2a02:348:8e:6048::1;
        Authentication-Results: mx.google.com;
               spf=neutral (google.com: 2a02:348:8e:6048::1 is neither permitted nor denied by best guess record for domain of [email protected]) [email protected];
               dkim=neutral (bad format) [email protected]

    So I guess my SPF and DKIM records aren't set up right, but I also don't have a clue what is wrong with them. This is the zone file:

        ; zone file for afspraakmanager.be
        $ORIGIN afspraakmanager.be.
        $TTL 3600
        @ 86400 IN SOA ns1.eurodns.com. hostmaster.eurodns.com. (
                        2013102003 ; serial
                        86400      ; refresh
                        7200       ; retry
                        604800     ; expire
                        86400 )    ; minimum
        @ 86400 IN NS ns1.eurodns.com.
        @ 86400 IN NS ns2.eurodns.com.
        @ 86400 IN NS ns3.eurodns.com.
        @ 86400 IN NS ns4.eurodns.com.
        ; Mail Exchanger definition
        @ 600 IN MX 10 smtp
        ; IPv4 Address definition
        @ IN A 37.230.96.72
        afspraakmanager.be 600 IN A 37.230.96.72
        localhost 86400 IN A 127.0.0.1
        smtp 600 IN A 37.230.96.72
        www 600 IN A 37.230.96.72
        ; Text definition
        default._domainkey 600 IN TXT "v=DKIM1\\; k=rsa\\; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC6pvlZKnbSVXg1Bf3MF2l8xRrKPmqIw2i9Rn1yZ3HEny9qH1vyGXUjdv2O0aQbd5YShSGjtg5H/GedRMLpB0Qb+hBj1yGofOQTdcVtZZfj8qBY5Z7vEkhvtdaogQ0vLjgcwhg0BBuTewEkLxrl9IIzkPMZ1SCtM2Y0RtiUhg2cjQIDAQAB"
        ; Sender Policy Framework definition
        afspraakmanager.be 600 IN SPF "v=spf1 a mx ptr +all"

    The DKIM signature in the header:

        DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=afspraakmanager.be; s=mail;
                t=1382361029; bh=4pDpXBY8rCbX8+MfrklZzpQxaUsa3vSPUYjcDR3KAnU=;
                h=Date:From:To:Subject:From;
                b=SoBBaAlrueD8qID8txl2SBSqnZgN2lkPCdSPI/m7/YLezIcBedkgIX1NswYiZFl6Z
                 AmF8dES73WUaaJjItVHSrdCJK2mJ/Az+vrgNsyk+GqZZ1YPiIlH3gqRrsguhoofXUX
                 /gqLlqsLxqxkKKd9EbSzKRHuDGlJCLm5SlL8wnL0=
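
    A few hedged observations from the records above. Gmail saying "best guess record" suggests it found no SPF record at all: the record is published as type SPF, and most validators only read TXT. The mail also left over IPv6 (2a02:348:8e:6048::1), which a/mx only cover via the IPv4 A records, and "+all" passes every sender, which receivers tend to treat as worthless. Finally, the signature uses selector "mail" (s=mail), so verifiers look up mail._domainkey, while the zone publishes the key at default._domainkey. A sketch of corrected records - same key material, just moved and re-typed:

        ; SPF as TXT, naming both sending addresses, soft-failing the rest
        @ 600 IN TXT "v=spf1 a mx ip4:37.230.96.72 ip6:2a02:348:8e:6048::1 ~all"

        ; DKIM key under the selector the signer actually uses
        mail._domainkey 600 IN TXT "v=DKIM1; k=rsa; p=..."  ; reuse the p= value above

    Alternatively, the signer (the DKIM setup on the server) can be reconfigured to use the "default" selector instead.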

  • pptpd configuration

    - by Ian R.
    I would like a little help configuring pptpd so I can use my server as a VPN server, since I have 10 IPs on it and I travel a lot, so that would really help me and my partners. I managed to install everything needed, but my VPN client fails to connect for some reason that I cannot understand. I know there are 2 files in pptpd that you're supposed to edit, so I will post my 2 files here.

    /etc/ppp/pptpd-options:

        name pptpd
        refuse-pap
        refuse-chap
        refuse-mschap
        require-mschap-v2
        require-mppe-128
        proxyarp
        nodefaultroute
        lock
        nobsdcomp

    /etc/pptpd.conf:

        option /etc/ppp/pptpd-options
        logwtmp
        localip xx.158.177.231
        remoteip xx.158.177.103,xx.158.177.116,xx.158.177.121,xx.158.177.124,xx.158.177.125,xx.158.177.131,xx.158.177.134,xx.158.177.139,xx.158.177.142,xx.158.177.145

    interfaces file:

        eth0      Link encap:Ethernet  HWaddr 00:16:3e:51:31:ba
                  inet addr:xx.158.177.231  Bcast:xx.158.177.255  Mask:255.255.254.0
                  inet6 addr: xx80::216:3eff:fe51:31ba/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:56352 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:3xx15 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:4884030 (4.8 MB)  TX bytes:6780974 (6.7 MB)
                  Interrupt:16

        eth0:1    Link encap:Ethernet  HWaddr 00:16:3e:51:31:ba
                  inet addr:xx.158.177.103  Bcast:xx.158.177.255  Mask:255.255.254.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:16

        eth0:2    Link encap:Ethernet  HWaddr 00:16:3e:51:31:ba
                  inet addr:xx.158.177.116  Bcast:xx.158.177.255  Mask:255.255.254.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:16

        eth0:3    Link encap:Ethernet  HWaddr 00:16:3e:51:31:ba
                  inet addr:xx.158.177.121  Bcast:xx.158.177.255  Mask:255.255.254.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:16

        eth0:4    Link encap:Ethernet  HWaddr 00:16:3e:51:31:ba
                  inet addr:xx.158.177.124  Bcast:xx.158.177.255  Mask:255.255.254.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:16

        eth0:5    Link encap:Ethernet  HWaddr 00:16:3e:51:31:ba
                  inet addr:xx.158.177.125  Bcast:xx.158.177.255  Mask:255.255.254.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:16

        eth0:6    Link encap:Ethernet  HWaddr 00:16:3e:51:31:ba
                  inet addr:xx.158.177.131  Bcast:xx.158.177.255  Mask:255.255.254.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:16

        eth0:7    Link encap:Ethernet  HWaddr 00:16:3e:51:31:ba
                  inet addr:xx.158.177.134  Bcast:xx.158.177.255  Mask:255.255.254.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:16

        eth0:8    Link encap:Ethernet  HWaddr 00:16:3e:51:31:ba
                  inet addr:xx.158.177.139  Bcast:xx.158.177.255  Mask:255.255.254.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:16

        eth0:9    Link encap:Ethernet  HWaddr 00:16:3e:51:31:ba
                  inet addr:xx.158.177.142  Bcast:xx.158.177.255  Mask:255.255.254.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:16

        eth0:10   Link encap:Ethernet  HWaddr 00:16:3e:51:31:ba
                  inet addr:xx.158.177.145  Bcast:xx.158.177.255  Mask:255.255.254.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:16

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:3 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:286 (286.0 B)  TX bytes:286 (286.0 B)
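
    Two hedged things the config above does not show and which commonly cause PPTP connects to fail outright: a per-user entry in /etc/ppp/chap-secrets, and firewall acceptance of both the TCP/1723 control channel and the GRE protocol. A sketch, with the credentials obviously illustrative:

        # /etc/ppp/chap-secrets
        # client    server    secret      IP addresses
        ian         pptpd     "s3cret"    *

        # firewall: accept the control channel and GRE, and load the
        # PPTP conntrack helper (ip_conntrack_pptp on older kernels)
        modprobe nf_conntrack_pptp
        iptables -A INPUT -p tcp --dport 1723 -j ACCEPT
        iptables -A INPUT -p gre -j ACCEPT

    The "server" field in chap-secrets has to match the "name pptpd" line in pptpd-options, and /var/log/messages (or syslog) usually shows whether the failure is at GRE, CHAP or MPPE stage.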

  • Legacy VB6 application under Win7 SQL error

    - by Shial
    We have a rather unfortunate legacy application at work. Written originally in VB6, it predates anybody in our IT department by at least 5 years. We have a contracted developer for ongoing maintenance, and where he can, he rewrites sections over into .NET code (not sure about his techniques here; this is a side job to his regular work as an IBM engineer). The application works fine (such as it is) under Windows XP. We have only a couple of Windows 7 machines, mainly for testing, and this application seems to run into a wall on them: things like the background not loading, and SQL errors. This happens even when running as administrator.

    Running an SQL trace from the ODBC control panel shows several interesting things. The application makes a connection to the database successfully initially, where it runs a query to determine if it is running the correct version. This query works fine:

        558-1af0 ENTER SQLExecDirectW
                HSTMT               0x020D7548
                WCHAR *             0x04C8F0F0 [     115] "SELECT count(*) c FROM tblSoftwareVersion WHERE fldSoftwareVersion = '123456' AND fldSoftwareName = 'Application.VB'"
                SDWORD                     115

        BMS 558-1af0 EXIT SQLExecDirectW with return code 1 (SQL_SUCCESS_WITH_INFO)
                HSTMT               0x020D7548
                WCHAR *             0x04C8F0F0 [     115] "SELECT count(*) c FROM tblSoftwareVersion WHERE fldSoftwareVersion = '123456' AND fldSoftwareName = 'Application.VB'"
                SDWORD                     115

    It then seems to drop its connection and can't find the ODBC connection, despite the fact it's connecting to the same DB. From the trace, it looks like it configures the connection, then starts firing off SQLFreeStmt to unbind and close out; then, when the application tries to do its thing, there is no connection:

        558-1af0 ENTER SQLFreeStmt
                HSTMT               0x020D7548
                UWORD                        2 <SQL_UNBIND>

        BMS 558-1af0 EXIT SQLFreeStmt with return code 0 (SQL_SUCCESS)
                HSTMT               0x020D7548
                UWORD                        2 <SQL_UNBIND>

    Then this happens when I try to do something that pulls data:

        558-1af0 ENTER SQLDriverConnectW
                HDBC                0x020DDA00
                HWND                0x00000000
                WCHAR *             0x73EF8634 [      -3] "******\ 0"
                SWORD                       -3
                WCHAR *             0x73EF8634
                SWORD                       -3
                SWORD *             0x00000000
                UWORD                        0 <SQL_DRIVER_NOPROMPT>

        BMS 558-1af0 EXIT SQLDriverConnectW with return code -1 (SQL_ERROR)
                HDBC                0x020DDA00
                HWND                0x00000000
                WCHAR *             0x73EF8634 [      -3] "******\ 0"
                SWORD                       -3
                WCHAR *             0x73EF8634
                SWORD                       -3
                SWORD *             0x00000000
                UWORD                        0 <SQL_DRIVER_NOPROMPT>

        DIAG [IM002] [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified (0)

    Nearly all of my searching on this issue turns up programming questions where the connection string has a problem. The only thing that is different in this particular scenario, though, is Windows 7. I know the connection string is fine, since it works on the XP machines. The VB components are supposed to still be functional under Win7. My computer is running 32-bit Win7 and my VP is running 64-bit Win7, and both have the same problem, so that can be ruled out. I have already tried reinstalling the SQL Native Client and the VB runtime, as well as the application in question. Hopefully I can find a solution and not have to resort to using the XP VM.
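
    A hedged reading of the IM002 diagnostic: "Data source name not found" usually means the DSN is not visible to the process rather than anything SQL-level. Two things worth verifying on the Windows 7 boxes: that the DSN is a System DSN (a User DSN created under another account is invisible when running elevated or as a different user), and, on the 64-bit machine, that it was created in the 32-bit ODBC administrator, since a 32-bit VB6 process cannot see 64-bit DSNs:

        rem on 64-bit Windows, this is the 32-bit ODBC Data Source Administrator
        C:\Windows\SysWOW64\odbcad32.exe

        rem the one reachable from Control Panel / System32 manages 64-bit DSNs
        C:\Windows\System32\odbcad32.exe

    Recreating the DSN as a System DSN in the 32-bit administrator is a cheap test either way.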

  • Setting up a home server - what to use? (ZFS vs btrfs, BSD vs Linux, misc other requirements)

    - by monch1962
    I need to get all our home content off individual machines and onto a central server. What I'd like to have is the metaphorical "server under the stairs".

    Stuff we need:

    - Expandable storage. I want to be able to add extra discs as we go along, with minimal maintenance required. Currently we've got about 3Tb of files we need to host, and that's likely to grow by another Tb every 6-12 months based on recent history. I need to be able to add additional discs with minimal pain.
    - Needs to store all the media (i.e. photos, video, music) we have, and run services to serve the various devices we have in the house for playback (e.g. DAAP so we can play stuff through iTunes, ccxstream so we can play stuff over XBMC). DAAP and ccxstream are needed now, but we also need to support new standards as they emerge (so a closed-box solution isn't going to work).
    - RAID 5, or something broadly equivalent (e.g. RAID-Z).
    - BitTorrent client.
    - ssh, NFS, Samba access.
    - Snapshot capability (as in ZFS), so we can snapshot individual file systems regularly and roll back when my kids delete their school assignments the day before they're due...
    - Ability to recover quickly from power outages (it's not unusual for us to have power outages that last longer than our UPS' batteries).
    - FOSS software.
    - A modern distributed version control system running on the box, such as Mercurial.

    Stuff I'd like to have on the server, but can live without:

    - PVR capability, so I could record TV to the box.
    - Web server. We currently run a small Web server on a very old box, and I'd ideally like to turn the old box off and move the content to the new server, just to save some electricity.
    - Nagios + mrtg.

    I've been looking at using an EEE Box as the server, primarily because I can get them cheap and they don't consume much power. The choice of OS and file system is more difficult, from what I've found:

    - I've got most experience with various Linux distros, but am happy to use another Unix.
    - FreeBSD and OpenSolaris seem to be the best choices for hosting ZFS.
    - OpenSolaris' hardware support is nowhere near as good as e.g. Ubuntu's.
    - btrfs, while looking very good, doesn't seem ready for prime time yet.
    - ZFS doesn't let you (easily?) add new discs to a RAID5 or RAID-Z.
    - Reading around, it seems that ZFS is a bit short of tools for recovering lost data.

    At the moment, I'm leaning towards running FreeNAS+ZFS, but I'm concerned about the requirement to be able to add new discs on a fairly regular basis to an existing RAID-Z.

    Can anyone provide some recommendations, or share experiences? Thanks in advance.
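
    On the RAID-Z growth concern: a ZFS pool grows by adding whole vdevs, not by adding a single disc to an existing raidz vdev, so the usual pattern is to buy discs in small batches. A hedged sketch of that workflow plus the snapshot/rollback need described above; pool, dataset and device names (da0... as FreeBSD/FreeNAS would name them) are illustrative:

        # initial pool: one 4-disc raidz vdev
        zpool create tank raidz da0 da1 da2 da3

        # a year later: grow the pool by adding a second raidz vdev
        zpool add tank raidz da4 da5 da6 da7

        # per-filesystem snapshots, and the school-assignment rollback
        zfs snapshot tank/kids@sunday
        zfs rollback tank/kids@sunday

    The practical consequence is that capacity arrives in 3-4 disc increments rather than one disc at a time, which may or may not fit the "add a disc every few months" requirement.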

  • The ping response time doesn't reflect the real network response time

    - by yangchenyun
    I encountered a weird problem: the response time returned by ping is almost fixed at 98 ms. Whether I ping the gateway, a local host or an internet host, the response time is always around 98 ms, although the actual delays are obviously different. However, the reverse ping (from a local machine to this host) works properly.

    The following is my route table and the results:

        route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         192.168.1.1     0.0.0.0         UG    100    0        0 eth1
        60.194.136.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
        169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 eth1
        192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1

    Pinging the gateway:

        ping 192.168.1.1
        PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
        64 bytes from 192.168.1.1: icmp_req=1 ttl=64 time=98.7 ms
        64 bytes from 192.168.1.1: icmp_req=2 ttl=64 time=97.0 ms
        64 bytes from 192.168.1.1: icmp_req=3 ttl=64 time=96.0 ms
        64 bytes from 192.168.1.1: icmp_req=4 ttl=64 time=94.9 ms
        64 bytes from 192.168.1.1: icmp_req=5 ttl=64 time=94.0 ms
        ^C
        --- 192.168.1.1 ping statistics ---
        5 packets transmitted, 5 received, 0% packet loss, time 4004ms
        rtt min/avg/max/mdev = 94.030/96.149/98.744/1.673 ms

    Pinging a local machine:

        ping 192.168.1.88
        PING 192.168.1.88 (192.168.1.88) 56(84) bytes of data.
        64 bytes from 192.168.1.88: icmp_req=1 ttl=64 time=98.7 ms
        64 bytes from 192.168.1.88: icmp_req=2 ttl=64 time=96.9 ms
        64 bytes from 192.168.1.88: icmp_req=3 ttl=64 time=96.0 ms
        64 bytes from 192.168.1.88: icmp_req=4 ttl=64 time=95.0 ms
        ^C
        --- 192.168.1.88 ping statistics ---
        4 packets transmitted, 4 received, 0% packet loss, time 3003ms
        rtt min/avg/max/mdev = 95.003/96.696/98.786/1.428 ms

    Pinging an internet host:

        ping google.com
        PING google.com (74.125.128.139) 56(84) bytes of data.
        64 bytes from hg-in-f139.1e100.net (74.125.128.139): icmp_req=1 ttl=42 time=99.8 ms
        64 bytes from hg-in-f139.1e100.net (74.125.128.139): icmp_req=2 ttl=42 time=99.9 ms
        64 bytes from hg-in-f139.1e100.net (74.125.128.139): icmp_req=3 ttl=42 time=99.9 ms
        64 bytes from hg-in-f139.1e100.net (74.125.128.139): icmp_req=4 ttl=42 time=99.9 ms
        ^C64 bytes from hg-in-f139.1e100.net (74.125.128.139): icmp_req=5 ttl=42 time=99.9 ms
        --- google.com ping statistics ---
        5 packets transmitted, 5 received, 0% packet loss, time 32799ms
        rtt min/avg/max/mdev = 99.862/99.925/99.944/0.284 ms

    I am running iperf to test the bandwidth; the rate is quite low for a LAN connection:

        iperf -c 192.168.1.87 -t 50 -i 10 -f M
        ------------------------------------------------------------
        Client connecting to 192.168.1.87, TCP port 5001
        TCP window size: 0.06 MByte (default)
        ------------------------------------------------------------
        [  4] local 192.168.1.139 port 54697 connected with 192.168.1.87 port 5001
        [ ID] Interval       Transfer     Bandwidth
        [  4]  0.0-10.0 sec  6.12 MBytes  0.61 MBytes/sec
        [  4] 10.0-20.0 sec  6.38 MBytes  0.64 MBytes/sec
        [  4] 20.0-30.0 sec  6.38 MBytes  0.64 MBytes/sec
        [  4] 30.0-40.0 sec  6.25 MBytes  0.62 MBytes/sec
        [  4] 40.0-50.0 sec  6.38 MBytes  0.64 MBytes/sec
        [  4]  0.0-50.1 sec  31.6 MBytes  0.63 MBytes/sec
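
    A hedged observation: a near-constant ~98 ms to every target, gateway included, looks less like network delay and more like the local NIC sleeping between packets, and the asymmetry (reverse ping is fine) fits that too. If eth1 is a wireless interface, power management is a plausible suspect and is easy to test:

        # show whether power management is enabled on eth1
        iwconfig eth1

        # turn it off and re-test
        sudo iwconfig eth1 power off
        ping -c 5 192.168.1.1

    If the RTT drops to a millisecond or two, the iperf throughput should improve for the same reason.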

  • Web Service gets unavailable after several concurrent calls

    - by Roman
    We are testing the GoDaddy Virtual Data Center and ran into a very strange issue where our web site becomes unavailable. GoDaddy Support keeps saying the issue is in our web server settings, but looking at the results of our tests, I doubt it.

    TEST ENVIRONMENT

    - Virtual data center with Windows, hosted at GoDaddy.com. All servers run Windows Server 2008 R2 Datacenter with IIS 7.
    - Server One with IP address 10.1.0.4
    - Server Two with IP address 10.1.0.3
    - Both servers are in a private network, not visible from outside.
    - Port Forward with IP address 50.62.13.174, assigned to Server One.

    TEST DESCRIPTION

    JMeter is used as a client app to simulate 30 concurrent users sending 100 SOAP requests each. The interval between requests is 1 second. The HTTP link used for testing: http://50.62.13.174/v2/webservices.asmx

    TEST ONE

    The test is run from a computer in our office. Almost immediately after JMeter starts running the test, the link above becomes unavailable in a browser. After test completion, the link is not available in a browser for about 5 more minutes. Remote Desktop keeps working, so we can connect to Server One remotely. About 5 minutes after test completion, the link becomes available in a browser again.

    TEST TWO

    The test is run from Server Two (which is part of our virtual data center). The test works very well, with no visible delays in processing. The link is available in a browser the whole time.

    TEST THREE

    The test is run from Server One using localhost. The result is the same as in TEST TWO - no issues.

    TEST FOUR

    We repeated TEST ONE from other computers that we have located in different countries, all with the same result as TEST ONE.

    CONCLUSION

    As the test works well from Server Two but does not work from outside our virtual data center, we feel there are issues with the network or its capacity. The whole behaviour looks like our requests from outside get stuck somewhere before reaching our virtual data center.

    Has anybody had similar issues in the past? Is there a chance that something is wrong with our server settings?
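
    One hedged way to turn the conclusion into evidence: watch the connection table on Server One while TEST ONE runs. If SYNs from the office IP stop appearing the moment the site "goes away" (while localhost traffic still flows), the requests are being dropped upstream of the server - consistent with edge-level rate limiting or flood protection - and no IIS setting would change that:

        rem run on Server One during and just after the JMeter test
        netstat -an | find ":80"

    Comparing the counts during TEST ONE (external) and TEST TWO (internal) should make the difference obvious.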

  • Apache works on http and https, SVN only on http

    - by user27880
    I asked a question about this before and got most of it fixed. If I switch off the https redirect and go to http://mydomain.com/svn/test0, I get the authentication window popping up, I can enter my AD credentials, and bingo. Switching the https redirect back on, if I go to http://mydomain.com I am automatically redirected to https, which is what I want, and the 'CentOS test page' pops up. Perfect.

    The problem occurs when I want to go to one of my test repos via https. Here is my httpd.conf file, with confidential information suitably hosed:

    ===

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName svn.mycompany.com
            ErrorLog logs/subversion-error_log
            CustomLog logs/subversion-access_log common
            Redirect permanent / https://svn.mycompany.com
        </VirtualHost>

        <VirtualHost svn.mycompany.com:443>
            SSLEngine On
            SSLCertificateFile /etc/httpd/ssl/wildcard.mycompany.com.crt
            SSLCertificateKeyFile /etc/httpd/ssl/wildcard.mycompany.com.key
            SSLCertificateChainFile /etc/httpd/ssl/intermediate.crt
            ServerName svn.mycompany.com
            ServerAdmin [email protected]
            ErrorLog logs/subversion-error_log
            CustomLog logs/subversion-access_log common

            <Location /svn>
                DAV svn
                SVNParentPath /usr/local/subversion
                SVNListParentPath off
                AuthName "Subversion Repositories"
                # NT Logon Details
                Require valid-user
                AuthBasicProvider file ldap
                AuthType Basic
                AuthzLDAPAuthoritative off
                AuthUserFile /etc/httpd/conf/svnpasswd
                AuthName "Subversion Server II"
                AuthLDAPURL "ldap://our-pdc:389/OU=Company Name,DC=com,DC=co,DC=uk?sAMAccountName?sub?(objectClass=*)"
                AuthLDAPBindDN "DOMAIN\subversion"
                AuthLDAPBindPassword XXXXXXX
                AuthzSVNAccessFile /etc/httpd/conf/svnaccessfile
            </Location>
        </VirtualHost>

    ===

    Now, in ssl_error_log, I get:

        ==> /etc/httpd/logs/ssl_error_log <==
        [Fri Nov 01 16:07:55 2013] [error] [client XXX.XXX.XXX.XXX] File does not exist: /var/www/html/svn

    This comes from the DocumentRoot directive further up the httpd.conf file, which of course points to /var/www/html. I know that this location is wrong, but how can I get SVN to serve the repo? I tried an Alias directive, as so:

        Alias /svn /usr/local/subversion

    ...but this didn't work. I tried to alter the Location directive. That didn't work either. Can someone help? I sense that this is so close to being solved... Thanks.

    Edit: apachectl -S output:

        [root@svn conf]# apachectl -S
        VirtualHost configuration:
        127.0.0.1:443          svn.mycompany.com (/etc/httpd/conf/httpd.conf:1020)
        wildcard NameVirtualHosts and default servers:
        default:443            svn.mycompany.com (/etc/httpd/conf.d/ssl.conf:74)
        *:80                   is a NameVirtualHost
                default server svn.mycompany.com (/etc/httpd/conf/httpd.conf:1012)
                port 80 namevhost svn.mycompany.com (/etc/httpd/conf/httpd.conf:1012)
        Syntax OK
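
    The apachectl -S output above offers a hedged hint: because the vhost was declared as <VirtualHost svn.mycompany.com:443>, Apache bound it to 127.0.0.1:443, while the default vhost from ssl.conf owns *:443. So https requests from outside land in the default vhost, whose DocumentRoot is /var/www/html - matching the "File does not exist: /var/www/html/svn" error exactly, with the Location /svn block never consulted. A sketch of the usual fix (keeping everything else in the block unchanged):

        # match all addresses on 443 instead of whatever the hostname
        # resolves to locally
        <VirtualHost _default_:443>
            SSLEngine On
            ...
        </VirtualHost>

    If several SSL hosts must share the address, <VirtualHost *:443> plus a NameVirtualHost *:443 line is the alternative. Either way, the ssl.conf default vhost should stop swallowing the requests.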

  • Pinning based on origin of a reprepro repository.

    - by Shtééf
    I'm on Ubuntu 10.04, trying to set up a repository using reprepro. I'd also like to pin everything in that repository to be preferred over anything else, even if the packages are older versions. (It will only contain a select set of packages.) However, I cannot seem to get the pinning to work, and I believe it has something to do with the repository side of things, rather than the apt configuration on the client.

    I've taken the following steps to set up my repository:

    1. Installed a web server (my personal choice here is Cherokee),
    2. Created the directory /var/www/apt/,
    3. Created the file conf/distributions, like so:

        Origin: Shteef
        Label: Shteef
        Suite: lucid
        Version: 10.04
        Codename: lucid
        Architectures: i386 amd64 source
        Components: main
        Description: My personal repository

    4. Ran reprepro export from the /var/www/apt/ directory.

    Now on any other machine, I can add this (empty) repository over HTTP to my /etc/apt/sources.list and run apt-get update without any errors:

        Ign http://archive.lan lucid Release.gpg
        Ign http://archive.lan/apt/ lucid/main Translation-en_US
        Get:1 http://archive.lan lucid Release [2,244B]
        Ign http://archive.lan lucid/main Packages
        Ign http://archive.lan lucid/main Sources
        Ign http://archive.lan lucid/main Packages
        Ign http://archive.lan lucid/main Sources
        Hit http://archive.lan lucid/main Packages
        Hit http://archive.lan lucid/main Sources

    In my case, now I want to use an old version of Asterisk, namely Asterisk 1.4. I rebuilt the asterisk-1:1.4.21.2~dfsg-3ubuntu2.1 package from Ubuntu 9.04 (with some small changes to fix dependencies) and uploaded it to my repository. At this point I can see the new package in aptitude, but it naturally prefers the newer Asterisk 1.6 currently in the Ubuntu 10.04 repositories. To try and fix that, I have created /etc/apt/preferences.d/personal like so:

        Package: *
        Pin: release o=Shteef
        Pin-Priority: 1000

    But when I try to install the asterisk package, it will still prefer the 1.6 version over my own 1.4 version. This is what apt-cache policy asterisk shows:

        asterisk:
          Installed: (none)
          Candidate: 1:1.6.2.5-0ubuntu1
          Version table:
             1:1.6.2.5-0ubuntu1 0
                500 http://nl.archive.ubuntu.com/ubuntu/ lucid/universe Packages
             1:1.4.21.2~dfsg-3ubuntu2.1shteef1 0
                500 http://archive.lan/apt/ lucid/main Packages

    Clearly, it is not picking up my pin. In fact, when I run just apt-cache policy, I get the following:

        Package files:
         100 /var/lib/dpkg/status
             release a=now
         500 http://archive.lan/apt/ lucid/main Packages
             origin archive.lan
         500 http://security.ubuntu.com/ubuntu/ lucid-security/multiverse Packages
             release v=10.04,o=Ubuntu,a=lucid-security,n=lucid,l=Ubuntu,c=multiverse
             origin security.ubuntu.com
        [...]

    Unlike Ubuntu's repository, apt doesn't seem to pick up a release line at all for my own repository. I suspect this is why I can't pin on release o=Shteef in my preferences file. But I can't find any noticeable difference between my repository's Release files and Ubuntu's that would cause this.

    Is there a step I've missed or a mistake I've made in setting up my repository?
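
    A hedged workaround while the missing release line is being chased down: in apt preferences, "Pin: release o=" matches the Origin field of the Release file, but "Pin: origin" matches the hostname the packages come from - and the policy output above shows apt does know "origin archive.lan" for this repository. So pinning by hostname should take effect even though the release pin does not:

        Package: *
        Pin: origin archive.lan
        Pin-Priority: 1000

    After an apt-get update, apt-cache policy asterisk should show priority 1000 against the archive.lan entry if the pin is being picked up.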

  • Updating the $PATH for running an command through SSH with LDAP user account

    - by Guillaume Bodi
    Hi all,

    I am setting up a Mac OS X 10.6 server to host Git repositories, so we need to push commits to the server through SSH. The server has only an admin account and takes its user list from an LDAP server. Now, since access goes through a non-interactive shell, git operations are not able to complete, because the git executables are not in the default path. As the users are network users, they do not have a local home folder, so I cannot use a ~/.bashrc or similar solution.

    I browsed several articles here and there but could not get it working in a nice and clean setup. Here is the info on the methods I have gathered so far:

    1. I could update the default PATH environment to include the git executables folder. However, I could not manage to do it successfully. Updating /etc/paths didn't change anything, and since it's not an interactive shell, /etc/profile and /etc/bashrc are ignored.
    2. From the ssh manpage, I read that a BASH_ENV variable can be set to get an optional script executed. However, I cannot figure out how to set it system-wide on the server. If it needs to be set up on the client machine, this is not an acceptable solution. If someone has some info on how it is supposed to be done, please, by all means!
    3. I can fix this problem by creating a .bashrc with the PATH correction in the system root (since all network users start there, as they do not have a home folder). But it just feels wrong. Additionally, if we do create a home folder for a user, then the git command would fail again.
    4. I can install a third-party application to set up hooks on login and then run a script creating a home directory with the necessary path corrections. This smells like a backyard tinkering and duct tape solution.
    5. I can install a small script on the server and ForceCommand the sshd to this script on login. This script then looks for a command to execute ($SSH_ORIGINAL_COMMAND) and triggers a login shell to run this command, or just triggers a regular login shell for an interactive session. The full details of this method can be found here: http://marc.info/?l=git&m=121378876831164

    The last one is the best method I found so far. Any suggestions on how to deal with this properly?
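
    One low-tech fallback that avoids touching shell startup entirely - a hedged sketch, assuming the Git binaries live under /usr/local/git/bin as the stock OS X Git installer lays them out: symlink the two executables the remote side of a push/fetch actually needs into a directory that is already on the non-interactive default PATH:

        sudo ln -s /usr/local/git/bin/git-upload-pack /usr/bin/git-upload-pack
        sudo ln -s /usr/local/git/bin/git-receive-pack /usr/bin/git-receive-pack

    The client-side equivalent, for comparison, is to point one repository at the full paths:

        git config remote.origin.uploadpack /usr/local/git/bin/git-upload-pack
        git config remote.origin.receivepack /usr/local/git/bin/git-receive-pack

    The server-side symlinks have the advantage of working for every user and every client at once.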

  • Weird IIS with Windows Authentication + IE problem

    - by Paulius Maruška
    Hello. I have a website running on IIS and using Windows Authentication. All users that are configured to get access to the site are from an AD domain (not local users). In the properties of the website, I have set the AD domain as the realm.

    Now, when using Firefox, Safari or Chrome, everything is fine. When the user tries to open the site, he gets the login box, enters simply "username" and "password" (let's pretend those are an actual login and password :P), and he gets into the site.

    When using IE, however, things get nasty. When the user tries to open the site, he gets the login box. The user enters the "username" and "password" again, but those get rejected! And when the login box pops up the second time, it has the username filled in as "web-server-domain-name\username", which is wrong, because web-server-domain-name is not the domain where the users reside (that's "ad-domain"). I've spent days trying to figure out what's going on... Note that if I manually enter "ad-domain\username", I get accepted into the site without problems.

    So, my guess is that IE sends the wrong username if the domain is not specified. Anyway, IE is the only browser that triggers this behavior! Is it possible to do a server-side fix? Maybe it's possible to somehow auto-map the users to AD users? If it's not solvable server-side, is there a client-side fix for this? Thank you.

    PS: I'm more of a programmer than a sysadmin, so configuring servers isn't the strong side of mine... :P

    UPDATE: @Evan: Yes, "Digest authentication for Windows domain servers" is also enabled. @Eric: The IIS version is 6.0. The authentication methods enabled are Integrated and Digest; all other methods are disabled. As for the security log: I looked at it when doing a "username" and "password" login in Chrome/Firefox and when doing an "ad-domain\username" and "password" login from IE, and the generated log messages are the same (I see no difference, anyway). When entering "username" and "password", I don't see any errors in the security (or any other) log, so I can't tell what method it's trying to use.

    UPDATE 2: As suggested by Eric in the comments, I played around with Fiddler. While doing so, I noticed that when "username" and "password" are entered in FF and IE, the "Authorization" header value (encrypted) sent by IE is almost twice as long as the one sent by FF. I tried to disable Windows Integrated authentication and only leave Digest enabled - that fixed the problem (meaning, IE used the right realm just like the other browsers), but it caused a bazillion other problems with my site, because with Digest, user impersonation on the server doesn't work (which causes problems when connecting to the database, etc.). Any ideas?
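
    On the server-side-fix question, one hedged candidate: IIS 6 keeps a DefaultLogonDomain metabase property that supplies a domain when the client sends a bare username. It applies to Basic/Digest logons rather than to NTLM, which fits the observation that the problem disappears once Integrated authentication is out of the picture. The site id (1) below is illustrative, not from the question:

        rem run from C:\Inetpub\AdminScripts on the web server
        cscript adsutil.vbs SET w3svc/1/root/DefaultLogonDomain "ad-domain"
        iisreset

    If Integrated (NTLM) keeps winning the negotiation in IE, this property alone may not be enough, since NTLM takes its domain from the client side.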

    Read the article

  • VPN pptp connection Unable to pass through linux iptables

    - by user221844
    I have set up a Windows VPN server behind a Linux (Ubuntu) box that acts as firewall and proxy server. Now I want people from outside to be able to connect to the VPN server, but the connection is never established and the client gets error 619. From what I have read, this looks like a firewall issue. What should I do to get the connection established through the firewall? Here is the information about my setup:

    Firewall-External-IF-IP: 172.16.1.100
    Firewall-LAN-IF-IP: 192.168.1.1
    VPN-Server-IP: 192.168.1.10

    And below is my iptables file content:

    # Generated by iptables-save v1.4.12 on Thu May 29 12:40:18 2014
    *filter
    :INPUT ACCEPT [162000:140437619]
    :FORWARD ACCEPT [23282:27196133]
    :OUTPUT ACCEPT [185778:143961739]
    :LOGGING - [0:0]
    -A INPUT -p gre -j ACCEPT
    -A INPUT -s 192.168.1.10/32 -p tcp -m tcp --sport 1723 -j ACCEPT
    -A INPUT -s 192.168.1.10/32 -p udp -m udp --sport 1723 -j ACCEPT
    -A FORWARD -s 192.168.1.0/24 -o EXT_IF -j ACCEPT
    -A FORWARD -s 192.168.1.0/24 -i EXT_IF -m state --state RELATED,ESTABLISHED -j ACCEPT
    -A FORWARD -d 192.168.1.10/32 -i EXT_IF -o INT_IF -p tcp -m tcp --dport 1723 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
    -A FORWARD -s 192.168.1.10/32 -i INT_IF -o EXT_IF -p tcp -m tcp --sport 1723 -m state --state RELATED,ESTABLISHED -j ACCEPT
    -A FORWARD -d 192.168.1.10/32 -i EXT_IF -o INT_IF -p gre -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
    -A FORWARD -s 192.168.1.10/32 -i INT_IF -o EXT_IF -p gre -m state --state RELATED,ESTABLISHED -j ACCEPT
    -A OUTPUT -p gre -j ACCEPT
    -A OUTPUT -d 192.168.1.10/32 -p tcp -m tcp --dport 1723 -j ACCEPT
    -A OUTPUT -d 192.168.1.10/32 -p udp -m udp --dport 1723 -j ACCEPT
    COMMIT
    # Completed on Thu May 29 12:40:18 2014
    # Generated by iptables-save v1.4.12 on Thu May 29 12:40:18 2014
    *nat
    :PREROUTING ACCEPT [17865:1053739]
    :INPUT ACCEPT [5490:357281]
    :OUTPUT ACCEPT [3723:223677]
    :POSTROUTING ACCEPT [3726:223870]
    -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128
    -A PREROUTING -d 172.16.1.100/32 -i EXT_IF -p tcp -m tcp --dport 1723 -j DNAT --to-destination 192.168.1.10
    -A PREROUTING -d 172.16.1.100/32 -i EXT_IF -p gre -j DNAT --to-destination 192.168.1.10
    -A PREROUTING -i -h
    -A POSTROUTING -s 192.168.1.0/24 -o EXT_IF -j MASQUERADE
    COMMIT
    # Completed on Thu May 29 12:40:18 2014
    # Generated by iptables-save v1.4.12 on Thu May 29 12:40:18 2014
    *mangle
    :PREROUTING ACCEPT [22695965:17811993005]
    :INPUT ACCEPT [13818180:11522330171]
    :FORWARD ACCEPT [8527694:6271564562]
    :OUTPUT ACCEPT [14748508:11899678536]
    :POSTROUTING ACCEPT [23271280:18170828012]
    COMMIT
    # Completed on Thu May 29 12:40:18 2014

    Hope I find the solution here...!! :(
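    One thing I have not tried yet, which keeps coming up when I search: DNAT for PPTP needs the kernel's PPTP connection-tracking and NAT helpers, otherwise the GRE packets are never associated with the TCP/1723 control connection and the client aborts with error 619. A sketch (module names as on stock Ubuntu kernels; older kernels call them ip_conntrack_pptp / ip_nat_pptp):

        # Load the PPTP helper modules
        sudo modprobe nf_conntrack_pptp
        sudo modprobe nf_nat_pptp

        # Make them persistent across reboots
        echo nf_conntrack_pptp | sudo tee -a /etc/modules
        echo nf_nat_pptp | sudo tee -a /etc/modules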

    Read the article

  • Erratic DNS name resolution

    - by alex
    Hi all, We host a website for a client (blog.foobar.es). We do not manage foobar.es's DNS setup; we just told them to point blog.foobar.es at our web server's IP. We have noticed that sometimes we cannot browse to blog.foobar.es, while other sites on the same server work fine. Troubleshooting with host(1) yields something funny:

    $ host blog.foobar.es 8.8.8.8
    Using domain server:
    Name: 8.8.8.8
    Address: 8.8.8.8#53
    Aliases:

    Host blog.foobar.es not found: 3(NXDOMAIN)

    8.8.8.8 being one of Google's public DNS servers. However, sometimes the same server resolves the name correctly (!). Another funny thing: our ISP's DNS servers sometimes say:

    $ host blog.foobar.es 80.58.61.250
    Using domain server:
    Name: 80.58.61.250
    Address: 80.58.61.250#53
    Aliases:

    blog.foobar.es has address x.x.x.x
    Host blog.foobar.es not found: 3(NXDOMAIN)

    which I don't really understand: an answer and an NXDOMAIN at the same time. I've dug around using dig(1) and noticed they've set up a SOA record for foobar.es:

    $ dig foobar.es

    ; <<>> DiG 9.7.0-P1 <<>> foobar.es
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 59824
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

    ;; QUESTION SECTION:
    ;foobar.es.                     IN      A

    ;; AUTHORITY SECTION:
    foobar.es.              86400   IN      SOA     dns1.provider.es. root.dns1.provider.es. 2011030301 86400 7200 2592000 172800

    ;; Query time: 78 msec
    ;; SERVER: 80.58.61.250#53(80.58.61.250)
    ;; WHEN: Thu Mar 3 16:16:19 2011
    ;; MSG SIZE  rcvd: 78

    ...which I'm not sure how to interpret. Ideas? We can't really do much ourselves, as we do not control the DNS, but we'd like to point our client in the right direction...
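    The next thing I plan to do is query the zone's delegated nameservers directly and compare their answers; if one authoritative server is out of sync or missing the record, resolvers will intermittently return NXDOMAIN depending on which server they happen to hit. A sketch (dns1.provider.es taken from the SOA above):

        # Which servers is the zone delegated to?
        dig NS foobar.es +short

        # Ask each authoritative server directly for the record
        dig @dns1.provider.es blog.foobar.es A +short

        # Follow the full delegation chain from the roots
        dig blog.foobar.es +trace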

    Read the article

  • Intermittently, IIS7 requests get stuck in WindowsAuthenticationModule

    - by Richard Beier
    We're running an IIS7 server hosting several dozen websites. Several of these websites are all part of the same legacy app we've developed: they all run the same code and run in the same app pool. Roughly once a month over the past few months, all requests for this app pool start hanging indefinitely. When this happens, we receive an alert and recycle the app pool, after which the sites start working again. This only ever affects this one app pool, never any others on the same server. A couple of times, before recycling the pool, I've looked at the currently-executing requests in the worker process. They all show up as executing inside the WindowsAuthenticationModule, which is strange, because the vast majority of the application does not require authentication. There is a small admin section which uses Windows auth, but all the other requests should be anonymous. Does anyone have any idea what might be causing this?

    There are several unusual things about the way these sites are set up. As I mentioned, they all run the same code: multiple sites point at the same physical directory, and the only difference is the host header bindings. I'm not sure why there isn't just one site with all the host headers, but that's how it works. In several of these sites, the same physical directory is mapped at two levels: as the root of the site, and again as an application within the site. So if a user goes to http://oursite.com/index.aspx, that maps to c:\files\oursite\index.aspx, and if a user goes to http://oursite.com/foo/index.aspx, that also maps to c:\files\oursite\index.aspx. I think there is code which looks at the request URL and handles the two requests differently. This is strange because the same web.config ends up being interpreted both as a site config file and as an application config file within the site. I don't know whether this might be related to the authentication problem.

    If we can't find the cause, we're thinking of a few workarounds we could try:

    1. Move the admin section into a separate site with its own app pool and give the client a new admin URL. Then, in the web.config shared by all the other sites, remove the WindowsAuthenticationModule (see the sketch below), so there is no possibility of a hang inside it.
    2. Run all these sites in the classic pipeline instead of the integrated pipeline; they were working fine on our old IIS6 server...
    3. (If we get desperate) Set up a watchdog script which monitors the sites and auto-recycles the app pool when it detects that requests are getting stuck.

    What do you think? Thanks for your help, Richard
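    For the first workaround, the shared web.config would get something like this sketch (integrated pipeline assumed; the authentication sections are normally locked at the server level, so they may first need unlocking with appcmd unlock config):

        <!-- shared web.config for the anonymous sites: disable Windows -->
        <!-- auth and remove the module so no request can block in it.  -->
        <configuration>
          <system.webServer>
            <modules>
              <remove name="WindowsAuthentication" />
            </modules>
            <security>
              <authentication>
                <anonymousAuthentication enabled="true" />
                <windowsAuthentication enabled="false" />
              </authentication>
            </security>
          </system.webServer>
        </configuration>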

    Read the article

  • Our VPS is being used as a Warez mule

    - by Mikuso
    The company I work for runs a series of ecommerce stores on a VPS: a WAMP stack with 50 GB of storage. We use an archaic piece of ecommerce software which operates almost entirely client-side. When an order is taken, it is written to disk, and a scheduled task downloads the orders once every 10 minutes. A few days ago, we ran out of disk space, which caused orders to fail to be written. I quickly hopped on to delete some old mail-server logs and freed up a couple of GB, but I wondered how we could fill 50 GB with nothing much more than logs. Turns out we didn't. Hidden deep within the c:\System Volume Information directory is a stack of pirated videos, which (judging by the timestamps) have appeared over the past three weeks. Porn, American sports, Australian cooking shows: a very odd collection. It doesn't look like an individual's personal tastes; more like the VPS is being used as a mule. We have a five-attempts-and-you're-blocked policy on our FTP server (plus, there is no FTP account with access to that directory), and the Windows user account has had its password changed recently. The main avenues are sealed, and the logs can verify that. I thought I'd watch and see if it happened again, and yes, another cooking show appeared this morning. I am the only one at my company who knows about this, and one of only two people with access to the VPS (the other being my boss, and no, it's not him). So how is this happening? Is there a vulnerability in some of the software on the VPS? Are the VPS owners peddling warez across our rented space? (Can they do this?) I don't want to delete the warez in case it is seen as a hostile action against this outside force, and they choose to retaliate. What should I do? How do I troubleshoot this? Has this happened to anyone else before?
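    While I keep watching, here is what I plan to run first, to see who can write to that directory and to catch the next drop in the act. A sketch; both tools are built into Vista/Server 2008 and later (icacls also ships with Server 2003 SP2), and enabling the audit entry on the folder itself is done under Properties > Security > Advanced > Auditing:

        rem Show who has access to the drop directory; by default only
        rem SYSTEM should be able to touch System Volume Information.
        icacls "C:\System Volume Information"

        rem Enable file-system object-access auditing, so once the
        rem folder has an audit entry, each write lands in the Security
        rem event log along with the account that made it.
        auditpol /set /subcategory:"File System" /success:enable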

    Read the article

< Previous Page | 859 860 861 862 863 864 865 866 867 868 869 870  | Next Page >