Search Results

Search found 1399 results on 56 pages for 'subdomain fu'.


  • SQL Server 2008, Books Online, and old documentation...

    - by Chris J
    [I have no idea if Stack Overflow really is the right place for this, but I don't know how many devs on here run into MSI issues with SQL Server; suggest SuperUser or ServerFault if folk think it's better on either of those.] About a year ago, when we were looking at moving our codebase forward and migrating to SQL Server 2008, I pulled down a copy of Books Online from MSDN. Reviewed it, did background research, fed results upstream, grabbed Express and tinkered with that. Then we got the nod to move forward (hurrah!) this past couple of weeks. So, armed with Developer Edition and running through the install, I've since found out that I've zapped the Books Online MSI, no-one's got a copy of it, and Microsoft only have a later version (Oct 2009) available, so damned if I can update my SQL Server fully and properly... {mutter grumble}.

    Does anyone know if old versions of Books Online are available for download anywhere? Poking around the Microsoft download centre hasn't turned it up, and neither has my google-fu. For reference, I'm looking for SQLServer2008_BOL_August2008_ENU.msi... This may just be a case of good ol' manually deleting the files and (trying to) clean up the registry :-(

  • Hyperlinked, externalized source code documentation

    - by Dave Jarvis
    Why do we still embed natural language descriptions of source code (i.e., the reason why a line of code was written) within the source code, rather than as a separate document? Given the expansive real estate afforded to modern development environments (high-resolution monitors, dual monitors, etc.), an IDE could provide semi-lock-step panels wherein source code is visually separated from -- but intrinsically linked to -- its corresponding comments. For example, developers could write source code comments in a hyperlinked markup language (linking to additional software requirements), which would simultaneously prevent documentation from cluttering the source code. What shortcomings would inhibit such a software development mechanism?

    A mock-up to help clarify the question: when the cursor is at a particular line in the source code (shown with a blue background, above), the documentation that corresponds to the line at the cursor is highlighted (i.e., distinguished from the other details). As noted in the question, the documentation would stay in lock-step with the source code as the cursor jumps through the source code. A hot-key could switch between "documentation mode" and "development mode".

    Potential advantages include:

    - More source code and more documentation on the screen(s) at once
    - Ability to edit documentation independently of source code (regardless of language?)
    - Write documentation and source code in parallel without merge conflicts
    - Real-time hyperlinked documentation with superior text formatting
    - Quasi-real-time machine translation into different natural languages
    - Every line of code can be clearly linked to a task, business requirement, etc.
    - Documentation could automatically timestamp when each line of code was written (metrics)
    - Dynamic inclusion of architecture diagrams, images to explain relations, etc.
    - Single-source documentation (e.g., tag code snippets for user manual inclusion)

    Notes: the documentation window can be collapsed, and the workflow for viewing or comparing source files would not be affected. How the implementation happens is a detail; the documentation could be kept at the end of the source file, split into two files by convention (filename.c, filename.c.doc), or fully database-driven.

    By hyperlinked documentation, I mean linking to external sources (such as StackOverflow or Wikipedia) and internal documents (i.e., a wiki on a subdomain that could cross-reference business requirements documentation) and other source files (similar to JavaDocs). Related thread: What's with the aversion to documentation in the industry?

  • How do you exclude yourself from Google Analytics on your website using cookies?

    - by Cold Hawaiian
    I'm trying to set up an exclusion filter with a browser cookie, so that my own visits to my site don't show up in my Google Analytics. I tried 3 different methods and none of them have worked so far. I would like help understanding what I am doing wrong and how I can fix this.

    Method 1: First, I tried following Google's instructions, http://www.google.com/support/analytics/bin/answer.py?hl=en&answer=55481, for excluding traffic by Cookie Content: create a new page on your domain, containing the following code:

        <body onLoad="javascript:pageTracker._setVar('test_value');">

    Method 2: Next, when that didn't work, I googled around and found this Google thread, http://www.google.com/support/forum/p/Google%20Analytics/thread?tid=4741f1499823fcd5&hl=en, where the most popular answer says to use slightly different code:

        SHS Analytics wrote:
        <body onLoad="javascript:_gaq.push(['_setVar','test_value']);">
        Thank you! This has now set a __utmv cookie containing "test_value", whereas the original
        pageTracker._setVar('test_value') (which Google is still recommending) did not manage to
        do that for me (in Mac Safari 5 and Firefox 3.6.8).

    So I tried this code, but it didn't work for me.

    Method 3: Finally, I searched StackOverflow and came across this thread, http://stackoverflow.com/questions/3495270/exclude-my-traffic-from-google-analytics-using-cookie-with-subdomain, which suggests that the following code might work:

        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setVar', 'exclude_me']);
          _gaq.push(['_setAccount', 'UA-xxxxxxxx-x']);
          _gaq.push(['_trackPageview']);
          // etc...
        </script>

    This script appeared in the head element in the example, instead of in the onload event of the body like in the previous 2 examples. So I tried this too, but still had no luck with trying to exclude myself from Google Analytics.

    To re-iterate the question: I tried all 3 methods above with no success. Am I doing something wrong? How can I exclude myself from my Google Analytics using an exclusion cookie for my browser?

    Update: I've been testing this for several days now, and I've confirmed that the 2nd method of excluding yourself from tracking does indeed work. The problem was that the filter settings weren't properly applied to my profile, which has been corrected. See the accepted answer below.

  • Avoiding Duplicate Content Penalties on a Corporate/Franchise website

    - by heath
    My question is really an extension of a previous question that was ported from stackoverflow and closed so I cannot edit it. The basic gist is a regional franchise company has decided to force all independent stores into one website look; they currently all have their own domains and completely different websites. After reading the helpful answers and looking over some links provided, I think my solution is to put a 301 on each franchise store site (acme-store1.com, acme-store2.com, etc) back to the main corporate site (acme.com). All of the company history, product info, etc (about 90% of the entire site) applies to all stores. However, each store should have some exclusive content such as staff, location pictures, exclusive events and promotions, etc. I originally thought that I would simply do something like acme.com/store1/staff, acme.com/store2/staff, etc for the store exclusive content and then acme.com/our-company, for example, would cover all stores. However, I now see two issues that I don't know how to solve. They want to see site stats based on what store site they came from. If a user comes from acme-store1.com, is redirected to acme.com and hits several pages, don't I need to somehow keep that original site in the new url to track each page in that user's session and show they originally came from acme-store1.com? Each store is still independently owned and is essentially still in competition with the other stores, albeit, in less competition than they are with other brands. This is important because each store would like THEIR contact info, links to their social media pages, their mailing list sign-up and customer requests on EVERY page. So if a user originally goes to acme-store1.com and is redirected to acme.com, it still should look to the user that it's all about store 1, even though 90% of the content will be exactly the same as it is in the store 2, store 3 and corporate site. For example, acme.com/our-company would have the same company history, same header/footer/navigation, BUT depending on the original site the user came from, it would display contact and links to THAT store. If someone came directly to the corporate site, it would display their contact and links (they have their own as well). I was considering that all redirects would be to store1.acme.com, store2.acme.com, etc (or acme.com/store1) and then I can dynamically add the contact info and appropriate links based on the subdomain or subfolder. But, then I have to worry about duplicate content penalties because, again, about 90% of the text in these "subdomains" are all the same. For reference, this is a PHP5 site. I've already written a compact framework utilizing templates and mod-rewrite that I've used for other sites. Is this an easy fix that I'm just not grasping? Any suggestions?
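
    A minimal PHP sketch of the per-store branding idea described above, where the store is inferred from the subdomain and only the contact/links block changes while the shared content stays identical; the store names, array contents and the canonical tag are illustrative assumptions, not the real site's code:

        <?php
        // Hypothetical map of store subdomains to their own contact details.
        $stores = array(
            'store1' => array('phone' => '555-0101', 'facebook' => 'http://facebook.com/acmestore1'),
            'store2' => array('phone' => '555-0102', 'facebook' => 'http://facebook.com/acmestore2'),
        );

        // e.g. "store1.acme.com" -> "store1"; anything else falls back to corporate branding.
        list($sub) = explode('.', $_SERVER['HTTP_HOST']);
        $contact = isset($stores[$sub])
            ? $stores[$sub]
            : array('phone' => '555-0100', 'facebook' => 'http://facebook.com/acme');
        ?>
        <!-- A canonical link keeps search engines pointed at one copy of the shared pages,
             which is the usual guard against duplicate-content penalties. -->
        <link rel="canonical" href="http://acme.com<?php echo htmlspecialchars($_SERVER['REQUEST_URI']); ?>" />
        <p>Contact this store: <?php echo $contact['phone']; ?> |
           <a href="<?php echo $contact['facebook']; ?>">Facebook</a></p>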

  • Sharing one static IP for both FTP and WWW service

    - by user11496
    Trying to figure out how to update the zone record and configure the webserver so that one application on the webserver is accessible by the public. I'm completely not good at NS/DNS/NAT/firewall/routing/port forwarding/networking etc. "faraday" is the intranet name. Everyone within the local network can access all applications hosted on "faraday". The hostname for the webserver is "www", the FTP server is "ftpserver". Both servers run RHEL4. The goal is to allow anyone outside the company network (public) to access only one of the many applications on "faraday". Hope somebody can help me with some of the questions below, if not all.

    - From the zoneedit record, the static IP is used by FTP now. Can I use the same existing static IP, 219.95.10.100, for the web service?
    - Currently anyone who enters "http://www.abc.com.my" will be directed to "http://www.abc.com". I don't want this to change.
    - Currently no one else, except employees on the local network, can access "faraday" web pages. How do I configure it so that when anyone types "http://thisapp.abc.com.my" in their web browser, the URL will lead them to "http://faraday/thisapp" (the application folder is /var/www/html/thisapp on the RHEL4 web server)?
    - If possible, how do I set it so the URL will continue to show "http://thisapp.abc.com.my" instead of "http://faraday/thisapp"?
    - How do I limit/restrict users (those who are not from the local network) so they only have access to "http://thisapp.abc.com.my", but not "http://faraday" or "http://faraday/anotherapp", etc.? What configuration changes are needed in /etc/httpd.conf on the web server?

    The company domain name is "abc.com.my". Following are the zone records on www.zoneedit.com:

        Subdomain  Type   IP
        sdsl       A      219.95.10.100
        ftp        CNAME  sdsl.abc.com.my
        @          NS     ns3.zoneedit.com
        @          NS     ns7.zoneedit.com

        WebForward record:
        New Domain      Destination         Cloaked
        www.abc.com.my  http://www.abc.com  N

    On my local DNS server, there are 2 zone files: abc.com.my and pnmy.abc.com.

        > cat abc.com.my.zone
        ftp        CNAME  ftp.pnmy.abc.com.
        sdsl       A      219.95.10.100

        > cat pnmy.abc.com.zone
        ftp        CNAME  ftpserver
        ftpserver  A      172.16.5.1
        faraday    CNAME  www
        www        A      172.16.5.2
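
    A rough sketch of the kind of name-based virtual host setup these questions usually lead to, assuming the public static IP is forwarded to the internal web server and a DNS record for thisapp.abc.com.my points at 219.95.10.100; the host names and paths come from the question, everything else is an assumption:

        # /etc/httpd/conf.d/thisapp.conf (hypothetical, Apache on RHEL4)
        NameVirtualHost *:80

        <VirtualHost *:80>
            # Requests whose Host: header is thisapp.abc.com.my are served straight
            # out of the application folder, so the browser keeps showing
            # http://thisapp.abc.com.my rather than http://faraday/thisapp.
            ServerName thisapp.abc.com.my
            DocumentRoot /var/www/html/thisapp
        </VirtualHost>

        <VirtualHost *:80>
            # The intranet names stay restricted to the local network, so outsiders
            # cannot browse the other apps by using a different host name.
            ServerName faraday
            ServerAlias www
            DocumentRoot /var/www/html
            <Directory /var/www/html>
                Order deny,allow
                Deny from all
                Allow from 172.16.0.0/16
            </Directory>
        </VirtualHost>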

  • core.* files eating up server space (~50MB)

    - by skytreader
    I'm renting server space from someone and, upon logging in to my control panel after quite some time, noticed an abnormal spike (~50MB) in the disk usage. Upon investigating, I found a lot of core.* files scattered around my public_html directory. Each one is more than 5MB in size but no more than 6MB. The * part is all numbers (in programming regex, that should be core\.\d+). I downloaded one and checked the contents. There was a lot of balderdash characters (NUL mostly, but also a scattering of ETB, ETX, STX), but there's this block of readable text which says:

        This text is part of the internal format of your mail folder, and is not
        a real message. It is created automatically by the mail system software.
        If deleted, important folder data will be lost, and it will be re-created
        with the data reset to initial values.

    Pretty self-explanatory. A few blocks above the text are some more readable messages that look like logs but are sandwiched in between non-printable characters. I've extracted some below:

        Scan not valid for mh mailboxes
        Bogus character 0x%x in news state
        Can't rewrite news state %.80s
        Error closing backup news state %.80s
        No state for newsgroup %.80s found

    Now, a few concerns:

    - Am I under attack? The messages seem to be about my webmail, but I don't use my personal webmail that much -- only for a vanity email address and an inbox for an outdated comments system. However, lately I seem to notice a spike in the spam for my vanity mail. (Note: the comments system is covered by a captcha but every now and then some get through. My vanity email has a spam filter but it isn't as good as I'd like.)
    - Next, if this is a feature, can I turn it off? Is it advisable to? I've only got 150MB, so you see why I'm fretting over a 50MB spike.

    Some final details: my only server-side scripts are in PHP. The directory which accumulated the most of these core files is the one containing the Wordpress-managed subdomain of my site. I manage my server through cPanel. Lastly, I decided to delete these files, and after some checking nothing seems amiss in my websites or my mail. They are indeed the ones responsible for the ~50MB spike, as my disk space usage is back to expected.
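
    A short shell sketch of the usual diagnosis/cleanup for stray core dumps; the paths are generic examples, and on shared hosting the system-wide limit change would have to go through the provider:

        # See which program is dumping core: the core file records the binary name.
        file ~/public_html/core.*        # e.g. "... core file ... from 'imapd'"

        # Reclaim the space.
        find ~/public_html -type f -name 'core.[0-9]*' -delete

        # Stop future dumps for your own shell and cron jobs.
        ulimit -c 0                      # current session
        echo 'ulimit -c 0' >> ~/.bashrc  # future sessions

        # With root access, core dumps could be disabled system-wide instead:
        # echo '* soft core 0' >> /etc/security/limits.conf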

  • Windows 2008 R2 CA and auto-enrollment: how to get rid of >100,000 issued certificates?

    - by HopelessN00b
    The basic problem I'm having is that I have 100,000 useless machine certificates cluttering up my CA, and I'd like to delete them without deleting all certs, or time-jumping the server ahead and invalidating some of the useful certs on there.

    This came about as a result of accepting a couple of defaults with our Enterprise Root CA (2008 R2) and using a GPO to auto-enroll client machines for certificates to allow 802.1x authentication to our corporate wireless network. Turns out that the default Computer (Machine) Certificate Template will happily allow machines to re-enroll instead of directing them to use the certificate they already have. This is creating a number of problems for the guy (me) who was hoping to use the Certificate Authority as more than a log of every time a workstation's been rebooted. (The scroll bar on the side is lying; if you drag it to the bottom, the screen pauses and loads the next few dozen certs.)

    Does anyone know how to DELETE 100,000 or so time-valid, existing certificates from a Windows Server 2008 R2 CA? When I go to delete a certificate now, I get an error that it cannot be deleted because it's still valid. So, ideally, I need some way to temporarily bypass that error, as Mark Henderson's provided a way to delete the certificates with a script once that hurdle is cleared. (Revoking them is not an option, as that just moves them to Revoked Certificates, which we need to be able to view, and they can't be deleted from the revoked "folder" either.)

    Update: I tried the site @MarkHenderson linked, which is promising and offers much better certificate manageability, but it still doesn't quite get there. The rub in my case seems to be that the certificates are still "time-valid" (not yet expired), so the CA doesn't want to let them be deleted from existence, and this applies to revoked certs as well, so revoking them all and then deleting them won't work either. I've also found this TechNet blog with my Google-Fu, but unfortunately they seemed to only have to delete a very large number of certificate requests, not actual certificates. Finally, time-jumping the CA forward so the certificates I want to get rid of expire, and therefore can be deleted with the tools at the site Mark linked, is not a great option, as it would expire a number of valid certificates we use that have to be manually issued. So it's a better option than rebuilding the CA, but not a great one.

  • Samba share not accessible from Win 7 - tried advice on superuser

    - by Roy Grubb
    I have an old Red Hat Linux box that I use, amongst other things, to run Samba. My Vista and remaining Win XP PCs can access the password-protected Samba shares. I just set up a new Windows 7 64-bit Pro PC. Attempts to access the Samba shares by clicking on the Linux box's icon in 'Network' from this machine gave a "Logon failure: unknown user name or bad password." message when I gave the correct credentials.

    So I followed the suggestions in "Windows 7, connecting to Samba shares" (also checked here but found LmCompatibilityLevel was already 1). This got me a little further: if I click on the Linux box's icon in 'Network' from this machine I now see icons for the shared directories. But when I click on one of these, I get "\\LX\share is not accessible. You might not have permission..." etc. I tried making the Win 7 password the same as my Samba password (the user name was already the same). Same result.

    The Linux box does part of what I need for ecommerce - the in-house part; it's not accessible to the Internet. As my Linux Fu is weak, I have to avoid changes to the Linux box, so I'm hoping someone can tell me what to do to Win 7 to make it behave like XP and Vista when accessing this share. Help please!? Thanks

    Thanks for replying @Randolph. I had set 'Network security: LAN Manager authentication level' to "Send LM & NTLM - use NTLMv2 session security if negotiated" based on the advice in "Windows 7, connecting to Samba shares" and had restarted the machine, but that didn't work for me. I'll try playing with other Network security values. I have now tried the following:

    - Network security: Allow Local System to use computer identity for NTLM: changed from Not Defined to "Enabled". Restarted machine. Still says "\\LX\share is not accessible. You might not have permission..." etc.
    - Network security: Restrict NTLM: Add remote server exceptions for NTLM authentication (added LX). Restarted machine. Still says "\\LX\share is not accessible. You might not have permission..." etc.

    I can't see any other Network security settings that might affect this. Any other ideas please? Thanks, Roy
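
    For reference, the 'LAN Manager authentication level' policy discussed above maps to a single registry value, so it can be checked or set without gpedit; a hedged example (value data 1 corresponds to "Send LM & NTLM responses" and only restates the setting already being tried, it is not a guaranteed fix for old Samba servers):

        reg query HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v LmCompatibilityLevel
        reg add   HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v LmCompatibilityLevel /t REG_DWORD /d 1 /f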

  • Windows DHCP Server - get notification when a non-AD joined device gets an IP address

    - by TheCleaner
    SCENARIO: To simplify this down to its easiest example: I have a Windows 2008 R2 Standard DC with the DHCP server role. It hands out IPs via various IPv4 scopes, no problem there.

    WHAT I'D LIKE: I would like a way to create a notification/event log entry/similar whenever a device gets a DHCP address lease and that device IS NOT a domain-joined computer in Active Directory. It doesn't matter to me whether it is custom PowerShell, etc. Bottom line: I'd like a way to know when non-domain devices are on the network without using 802.1X at the moment. I know this won't account for static IP devices. I do have monitoring software that will scan the network and find devices, but it isn't quite this granular in detail.

    RESEARCH DONE/OPTIONS CONSIDERED: I don't see any such possibilities with the built-in logging. Yes, I'm aware of 802.1X and have the ability to implement it long-term at this location, but we are some time away from a project like that, and while that would solve network authentication issues, this is still helpful to me outside of 802.1X goals. I've looked around for some script bits, etc. that might prove useful, but the things I'm finding lead me to believe that my google-fu is failing me at the moment. I believe the below logic is sound (assuming there isn't some existing solution):

    1. Device receives DHCP address.
    2. Event log entry is recorded (event ID 10 in the DHCP audit log should work, since a new lease is what I'd be most interested in, not renewals: http://technet.microsoft.com/en-us/library/dd759178.aspx). At this point a script of some kind would probably have to take over for the remaining steps below.
    3. Somehow query this DHCP log for these event ID 10's (I would love push, but I'm guessing pull is the only recourse here).
    4. Parse the query for the name of the device being assigned the new lease.
    5. Query AD for the device's name.
    6. IF not found in AD, send a notification email.

    If anyone has any ideas on how to properly do this, I'd really appreciate it. I'm not looking for a "gimme the codez" but would love to know if there are alternatives to the above list or if I'm not thinking clearly and another method exists for gathering this information. If you have code snippets/PS commands you'd like to share to help accomplish this, all the better.
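
    A rough PowerShell sketch of the pull-and-check loop outlined in steps 3-6 above, meant as a starting point rather than a finished tool; the log path is the default DHCP audit-log location, and the SMTP server and addresses are placeholders:

        # Parse today's DHCP audit log for ID 10 (new lease) entries and flag
        # host names that don't exist as computer accounts in AD.
        Import-Module ActiveDirectory

        $day = (Get-Date).DayOfWeek.ToString().Substring(0,3)   # e.g. "Tue"
        $log = "C:\Windows\System32\dhcp\DhcpSrvLog-$day.log"

        # Audit log lines are CSV: ID,Date,Time,Description,IP Address,Host Name,MAC,...
        $leases = Get-Content $log |
            Where-Object { $_ -match '^10,' } |
            ForEach-Object { ($_ -split ',')[5].Split('.')[0] } |   # strip any DNS suffix
            Sort-Object -Unique

        foreach ($name in $leases) {
            if (-not (Get-ADComputer -Filter "Name -eq '$name'")) {
                Send-MailMessage -To 'netadmin@example.com' -From 'dhcp@example.com' `
                    -SmtpServer 'smtp.example.com' `
                    -Subject "Non-domain device got a DHCP lease: $name" `
                    -Body "Host $name pulled a new lease (event ID 10) according to $log."
            }
        }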

  • How to install GIT on an offline RHEL?

    - by Stijn Vanpoucke
    I'm using the following commands from the manual to install Git:

        $ tar -zxf git-1.7.2.2.tar.gz
        $ cd git-1.7.2.2
        $ make prefix=/usr/local all
        $ sudo make prefix=/usr/local install

    but I'm receiving the following errors:

        cache.h: At top level:
        cache.h:746: error: expected declaration specifiers or '...' before 'time_t'
        cache.h:889: warning: 'struct timeval' declared inside parameter list
        cache.h:895: warning: 'struct timeval' declared inside parameter list
        cache.h:970: error: expected specifier-qualifier-list before 'off_t'
        cache.h:979: error: expected specifier-qualifier-list before 'off_t'
        cache.h:997: error: expected specifier-qualifier-list before 'off_t'
        cache.h:1057: error: expected declaration specifiers or '...' before 'off_t'
        cache.h:1063: error: expected declaration specifiers or '...' before 'uint32_t'
        cache.h:1064: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'nth_packed_object_offset'
        cache.h:1065: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'find_pack_entry_one'
        cache.h:1067: error: expected declaration specifiers or '...' before 'off_t'
        cache.h:1069: error: expected declaration specifiers or '...' before 'off_t'
        cache.h:1070: error: expected declaration specifiers or '...' before 'off_t'
        cache.h:1094: error: expected specifier-qualifier-list before 'off_t'
        cache.h:1168: error: expected ')' before '*' token
        cache.h:1177: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'read_in_full'
        cache.h:1178: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'write_in_full'
        cache.h:1179: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'write_str_in_full'
        cache.h:1252: error: expected declaration specifiers or '...' before 'FILE'
        In file included from credential-store.c:2:
        credential.h:28: error: expected declaration specifiers or '...' before 'FILE'
        credential.h:29: error: expected declaration specifiers or '...' before 'FILE'
        In file included from credential-store.c:4:
        parse-options.h:115: error: expected specifier-qualifier-list before 'intptr_t'
        credential-store.c: In function 'parse_credential_file':
        credential-store.c:13: error: 'FILE' undeclared (first use in this function)
        credential-store.c:13: error: 'fh' undeclared (first use in this function)
        credential-store.c:17: warning: implicit declaration of function 'fopen'
        credential-store.c:19: error: 'errno' undeclared (first use in this function)
        credential-store.c:19: error: 'ENOENT' undeclared (first use in this function)
        credential-store.c:24: error: too many arguments to function 'strbuf_getline'
        credential-store.c:24: error: 'EOF' undeclared (first use in this function)
        credential-store.c:39: warning: implicit declaration of function 'fclose'
        credential-store.c: In function 'print_entry':
        credential-store.c:44: warning: implicit declaration of function 'printf'
        credential-store.c:44: warning: incompatible implicit declaration of built-in function 'printf'
        credential-store.c: In function 'main':
        credential-store.c:132: warning: implicit declaration of function 'umask'
        credential-store.c:144: error: 'stdin' undeclared (first use in this function)
        credential-store.c:144: error: too many arguments to function 'credential_read'
        credential-store.c:147: warning: implicit declaration of function 'strcmp'

    Is this because I didn't install the dependencies?

        apt-get install libcurl4-gnutls-dev libexpat1-dev gettext libz-dev libssl-dev

    How do I install them offline?
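
    Since the box is RHEL (the apt-get line above is the Debian/Ubuntu spelling), the usual offline route is to fetch the -devel RPMs on a machine that does have repository access and carry them over; a hedged sketch, with package names being the common RHEL equivalents of the list above rather than a verified set for this exact release:

        # On a connected RHEL/CentOS machine of the same major release and architecture
        # (yumdownloader ships in the yum-utils package):
        yumdownloader --resolve zlib-devel openssl-devel curl-devel expat-devel gettext perl-devel

        # Copy the downloaded *.rpm files to the offline server, then:
        rpm -Uvh *.rpm

        # Re-run the git build afterwards:
        make prefix=/usr/local all && sudo make prefix=/usr/local install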

  • How to unmangle PDF format into a usable text or spreadsheet document?

    - by Chuck
    Upon requesting some daily/hourly sales data from a coworker who is responsible for such requests, I was given a series of PDF files. The point of sale program that is used, for some reason, answers requests for this type of information in the form of PDF files.

    The issue: the PDF files look to be in a format that should easily be copied and pasted into a spreadsheet. There are three columns that look to be neatly organized across two pages. When copy/pasting the first page, all three columns from the PDF's first page are dumped into a single column consisting of the Date followed by the Hours for the transactions on that day. The end of this Date/Time information is followed by all of the Total Sales values that should be attached to a Date and Time of the transaction. (NOTE: There are no duplicated Dates in the Date column, i.e., multiple transactions for a day only have one yyyy/mm/dd listed for the first row but not the following rows.) While it was a huge pain, it was possible to, in about four or five steps, get the single column of data broken out into three columns that matched the PDF.

    The second page of the PDF file, when attempting to copy/paste into a spreadsheet, creates a single column with the first third of the cells being the Dates from the PDF, the second third of the cells being the Hours of the transactions, and the final third of the cells being filled with the Total Sales. After the copy/paste there is no way to figure out which Hours belong to which Dates or Total Sales due to the lack of the duplicated Dates in the Date column as mentioned above.

    My PDF-fu is next to non-existent. I've just now started to work with PDF editors and some www.convertmyPDFforfree.com websites, so far with absolutely nothing remotely coming anywhere near usable output. (Both methods have so far done nothing but produce blank documents.) Before I go back and pester my co-worker into figuring out a way to create a report in some other format than PDF, is there any method by which to take the data that looks to be formatted correctly in a PDF and copy/paste it into a spreadsheet that will look the same?

    I appreciate any help that can be made available. The sales data isn't so sensitive that I couldn't part with a bit to let somebody actually see what it is that needs to be dealt with, just let me know. The PDFs are less than 100KB each, so sending them shouldn't be a burden to any interested party.
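
    One command-line approach worth trying before going back to the point-of-sale vendor: poppler's pdftotext with layout preservation, which often keeps columns aligned well enough to import as fixed-width or delimited text. A sketch, with placeholder file names:

        # -layout keeps the visual column positions instead of dumping text in reading order
        pdftotext -layout daily_sales.pdf daily_sales.txt

        # Quick way to turn the aligned columns into something a spreadsheet splits cleanly:
        # collapse runs of 2+ spaces into tabs, then import the result as tab-delimited.
        sed 's/  \+/\t/g' daily_sales.txt > daily_sales.tsv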

  • IIS6: Web Site presenting the wrong SSL certificate

    - by pcampbell
    Consider an IIS6 installation with multiple Web Sites. Each is intended to be a different subdomain with its own cert (not a wildcard cert), and each has its host header specified properly.

    - foo.example.com - port 443. Require SSL w/128 bit. Working properly! It presents its SSL cert properly to the browser. Configured for a specific IP address.
    - bar.example.com - port 443. Require SSL w/128 bit. Configured for all unassigned addresses. When inspecting the IIS property page, it fully shows the cert for bar.example.com on the View Certificate button. This is a NEW web site that is having cert problems: it's presenting the cert for foo.example.com. Ouch!

    Question: can you have more than one subdomain, each running on a separate website with its own SSL cert, on the same port (443)? How would you configure 2 web sites on the same range of 'all unassigned' for the same port (443)?

    Update: ignoring the cert error, when browsing to https://bar, the content served is from the https://foo site. When NOT using SSL, browsing to http://bar serves the correct content from bar. Just one address is assigned to this DMZ server.

  • Nginx: Rewrite rule for subfolder

    - by gryzzly
    Hello, I have a subdomain where I want to keep projects I am working on, in order to show these projects to clients. Here is the configuration file from /etc/nginx/sites-available/projects:

        server {
            listen 80;
            server_name projects.example.com;

            access_log /var/log/nginx/projects.example.com.access.log;
            error_log /var/log/nginx/projects.example.com.error.log;

            location / {
                root /var/www/projects;
                index index.html index.htm index.php;
            }

            location /example2.com {
                root /var/www/projects/example2.com;
                auth_basic "Stealth mode";
                auth_basic_user_file /var/www/projects/example2.com/htpasswd;
            }

            location /example3.com/ {
                index index.php;
                if (-f $request_filename) {
                    break;
                }
                if (!-f $request_filename) {
                    rewrite ^/example3\.com/(.*)$ /example3\.com/index.php?id=$1 last;
                    break;
                }
            }

            location ~ \.php {
                root /var/www/mprojects;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include /etc/nginx/fastcgi_params;
            }
        }

    I want to be able to place different PHP engines (WordPress, GetSimple, etc.) in subfolders. These engines have different query parameters (id, q, url, etc.), so in order to make pretty URLs work I have to make a rewrite. However, the above doesn't work. This is the response I get:

        Warning: Unknown: Filename cannot be empty in Unknown on line 0
        Fatal error: Unknown: Failed opening required '' (include_path='.:/usr/local/lib/php') in Unknown on line 0

    If I take out the "location /example3.com/" rule, then everything works but with no pretty URLs. Please help. The configuration is based on this post: http://stackoverflow.com/questions/2119736/cakephp-in-a-subdirectory-using-nginx-rewrite-rules

    I am using Ubuntu 9.10 and nginx/0.7.62 with php-fpm.
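
    A hedged sketch of one way this is often written for nginx 0.7.x; it is an illustration, not a verified fix for this exact setup. Note also that the PHP location's root (/var/www/mprojects) differs from the rest of the config (/var/www/projects), which is worth checking, since FastCGI needs SCRIPT_FILENAME to resolve to a real file:

        location /example3.com/ {
            index index.php;
            # serve real files; otherwise hand the path to the engine's front controller
            if (!-e $request_filename) {
                rewrite ^/example3\.com/(.*)$ /example3.com/index.php?id=$1 last;
            }
        }

        location ~ \.php {
            root /var/www/projects;            # same tree as the static locations
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include /etc/nginx/fastcgi_params;
        }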

  • Trying to run an ASP.NET MVC application using Mono on Apache with FastCGI.

    - by Arda Xi
    I have a hosting account with DreamHost, and I would like to use the same account to run ASP.NET applications. I have an application deployed in a subdomain, and a .htaccess with a handler like this:

        # Define the FastCGI Mono launcher as an Apache handler and let
        # it manage this web-application (its files and subdirectories)
        SetHandler monoWrapper
        Action monoWrapper /home/arienh4/<domain>/cgi-bin/mono.fcgi virtual

    My mono.fcgi is set up as such:

        #!/bin/sh
        #umask 0077
        exec >>/home/arienh4/tmp/mono-fcgi.log
        exec 2>>/home/arienh4/tmp/mono-fcgi.err
        echo $(date +"[%F %T]") Starting fastcgi-mono-server2
        cd /
        chmod 0700 /home/arienh4/tmp/mono-fcgi.sock
        echo $$>/home/arienh4/tmp/mono-fcgi.pid
        # stdin is the socket handle
        export PATH="/home/arienh4/mono/bin:$PATH"
        export LD_LIBRARY_PATH="/home/arienh4/mono/lib:$LD_LIBRARY_PATH"
        export TMP="/home/arienh4/tmp"
        export MONO_SHARED_DIR="/home/arienh4/tmp"
        exec /home/arienh4/mono/bin/mono /home/arienh4/mono/lib/mono/2.0/fastcgi-mono-server2.exe \
            /logfile=/home/arienh4/logs/fastcgi-mono-web.log /loglevels=All \
            /applications=/:/home/arienh4/<domain>

    I took this from the Mono site for CGI; I'm not sure if I'm doing it correctly though. This code is resulting in this error:

        Request exceeded the limit of 10 internal redirects due to probable configuration error.
        Use 'LimitInternalRecursion' to increase the limit if necessary.
        Use 'LogLevel debug' to get a backtrace.

    I have no idea what's causing this. As far as I can see, Mono isn't even hit (no log files are created).

  • How to manage sub-domains on WinHost with IIS7 URL Rewrite 2.0?

    - by jrummell
    I'm trying out WinHost and I'm running into some issues with sub-domains. On WinHost, you can have multiple sub-domains per hosting account, but each sub-domain points to the root website. E.g. you can have www.example.com, sub1.example.com, and sub2.example.com but all of them display the content at http://www.example.com/. Other Hosts allow you to point sub-domains to a sub folder in your website. This would allow you to point sub1.example.com to /sub1, sub2.example.com to /sub2 and www.example.com to /. WinHost recommends using an asp/aspx page to redirect http://sub1.example.com to http://sub1.example.com/sub1, which points to /sub1. While that would work, I'd like to not have the subdomain in the url twice. So I tried using IIS7 URL Rewrite to point http://sub1.example.com to /sub1. Ben Powell describes this in detail on his blog. This is great, except Request.ApplicationPath is now /sub1/path/to/current/page.aspx, which breaks ASP.Net Themes (and probably other stuff too). What can I do to fix the ApplicationPath? Is there a better way to accomplish this?
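
    A hedged sketch of the kind of URL Rewrite rule being described (host-based rewrite into a subfolder); the server variables are the standard ones, but the exact rule set isn't verified against WinHost, and as noted above this approach leaves ApplicationPath pointing at the subfolder:

        <!-- web.config fragment: send sub1.example.com requests into /sub1 without changing the visible URL -->
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="sub1 host to folder" stopProcessing="true">
                <match url=".*" />
                <conditions>
                  <add input="{HTTP_HOST}" pattern="^sub1\.example\.com$" />
                  <add input="{PATH_INFO}" pattern="^/sub1/" negate="true" />
                </conditions>
                <action type="Rewrite" url="/sub1/{R:0}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>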

  • Redirect particular hostname from https to http in httpd/apache2

    - by webnothing
    I have a webserver that has an SSL certificate applied to a subdomain, https://shop.mydomain.com. I also have the hostname http://mydomain.com that has no SSL certificate. When invoking https://mydomain.com, browsers issue a warning that a certificate could not be verified because the webserver is identifying itself as https://shop.mydomain.com. I would like all traffic that hits https://mydomain.com to be redirected to http://mydomain.com, and leave https://shop.mydomain.com as is. My httpd.conf file generally looks like this:

        <VirtualHost 122.11.11.21:80>
            ServerName shop.mydomain.com
            .. regular old port 80 ..
        </VirtualHost>

        <VirtualHost 122.11.11.21:443>
            ServerName shop.mydomain.com
            .. SSL applies here ..
        </VirtualHost>

        <VirtualHost 122.11.11.21:80>
            ServerName mydomain.com
            .. regular old port 80 ..
        </VirtualHost>

    It does not look as if I have SSL set up for https://mydomain.com, yet one can invoke SSL mode and the browser identifies the connection as https://shop.mydomain.com. I need to redirect from https://mydomain.com because, for some reason, Google has indexed my website with this URL even though it shows a warning. I have tried various methods to get this to redirect and nothing has worked. Any help would be greatly appreciated.
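
    A hedged sketch of the usual shape of such a redirect: with one IP and no certificate for the bare domain, any https request for mydomain.com lands in the SSL virtual host layer, so the redirect has to live inside a 443 vhost of its own (the browser will still warn once before the redirect, since the only cert available names shop.mydomain.com). The IP is taken from the question; the cert paths and the rest are illustrative and not verified against this server:

        # Catch https://mydomain.com in a dedicated 443 vhost and bounce it to plain http.
        <VirtualHost 122.11.11.21:443>
            ServerName mydomain.com
            SSLEngine on
            # Reuses the shop cert; a name-mismatch warning is unavoidable
            # without a certificate issued for mydomain.com itself.
            SSLCertificateFile    /path/to/shop.mydomain.com.crt
            SSLCertificateKeyFile /path/to/shop.mydomain.com.key
            Redirect permanent / http://mydomain.com/
        </VirtualHost>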

  • How to set up an Apache SSL proxy to OpenERP 7 running in a VM?

    - by Johnbritto
    I have installed OpenERP v7 in an Ubuntu 12.04 virtual machine from Launchpad (i.e. server, web, addons). I configured an SSL reverse proxy on the virtual machine, and my configuration for <VirtualHost *:443> is:

        ServerName openerp.mydomain.net
        ServerAdmin openerp@localhost
        SSLEngine on
        SSLCertificateFile /etc/ssl/openerp/server.crt
        SSLCertificateKeyFile /etc/ssl/openerp/server.key
        ProxyRequests Off
        ProxyPreserveHost On
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyVia On
        ProxyPass / http://172.16.150.14:8069/
        ProxyPassReverse / http://172.16.150.14:8069/
        RequestHeader set "X-Forwarded-Proto" "https"
        # Fix IE problem (httpapache proxy dav error 408/409)
        SetEnv proxy-nokeepalive 1
        </VirtualHost>

    On the host, I have configured an Apache reverse proxy for my subdomain in vhost_ssl.conf as:

        SSLEngine On
        SSLProxyEngine On
        ProxyRequests Off
        ProxyPreserveHost On
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyPass / https://172.16.150.14/
        ProxyPassReverse / https://172.16.150.14/
        SetEnv proxy-nokeepalive 1
        <Location />
            Order allow,deny
            Allow from all
        </Location>

    I have set 172.16.150.14 on the netrpc and xmlrpc interfaces in openerp-server.conf. Now, when I access https://openerp.mydomain.net from Firefox or Chrome, I get http://openerp.mydomain.net%2C%20openerp.mydomain.net/?db=testingdb, which gives a 404. But when I access the URL from IE 9, https://openerp.mydomain.net works OK. Secondly, if I change the parameter list_db = false, then the links work as expected.

    Kindly let me know what is creating the bottleneck with the URL redirecting to http://openerp.mydomain.net, openerp.mydomain.net/?db=testdb on Firefox and Chrome. I am stuck here troubleshooting to get the URL to work.

  • Need help in setting up lighttpd on Ubuntu 9.10

    - by hap497
    Hi, I am trying to run lighttpd on Ubuntu 9.10. I get the conf file from the doc directory of lighttpd source. $ sudo ./lighttpd -f lighttpd.conf $ ps -ef | grep lighttpd root 2094 1 0 19:40 ? 00:00:00 ./lighttpd -f lighttpd.conf This is my lighttpd.conf: $ more lighttpd.conf # lighttpd configuration file # # use it as a base for lighttpd 1.0.0 and above # # $Id: lighttpd.conf,v 1.7 2004/11/03 22:26:05 weigon Exp $ ############ Options you really have to take care of #################### ## modules to load # at least mod_access and mod_accesslog should be loaded # all other module should only be loaded if really neccesary # - saves some time # - saves memory server.modules = ( # "mod_rewrite", # "mod_redirect", # "mod_alias", "mod_access", # "mod_trigger_b4_dl", # "mod_auth", # "mod_status", # "mod_setenv", # "mod_fastcgi", # "mod_proxy", # "mod_simple_vhost", # "mod_evhost", # "mod_userdir", # "mod_cgi", # "mod_compress", # "mod_ssi", # "mod_usertrack", # "mod_expire", # "mod_secdownload", # "mod_rrdtool", "mod_accesslog" ) ## A static document-root. For virtual hosting take a look at the ## mod_simple_vhost module. server.document-root = "/srv/www/htdocs/" ## where to send error-messages to server.errorlog = "/var/log/lighttpd/error.log" # files to check for if .../ is requested index-file.names = ( "index.php", "index.html", "index.htm", "default.htm" ) ## set the event-handler (read the performance section in the manual) # server.event-handler = "freebsd-kqueue" # needed on OS X # mimetype mapping mimetype.assign = ( ".pdf" => "application/pdf", ".sig" => "application/pgp-signature", ".spl" => "application/futuresplash", ".class" => "application/octet-stream", ".ps" => "application/postscript", ".torrent" => "application/x-bittorrent", ".dvi" => "application/x-dvi", ".gz" => "application/x-gzip", ".pac" => "application/x-ns-proxy-autoconfig", ".swf" => "application/x-shockwave-flash", ".tar.gz" => "application/x-tgz", ".tgz" => "application/x-tgz", ".tar" => "application/x-tar", ".zip" => "application/zip", ".mp3" => "audio/mpeg", ".m3u" => "audio/x-mpegurl", ".wma" => "audio/x-ms-wma", ".wax" => "audio/x-ms-wax", ".ogg" => "application/ogg", ".wav" => "audio/x-wav", ".gif" => "image/gif", ".jar" => "application/x-java-archive", ".jpg" => "image/jpeg", ".jpeg" => "image/jpeg", ".png" => "image/png", ".xbm" => "image/x-xbitmap", ".xpm" => "image/x-xpixmap", ".xwd" => "image/x-xwindowdump", ".css" => "text/css", ".html" => "text/html", ".htm" => "text/html", ".js" => "text/javascript", ".asc" => "text/plain", ".c" => "text/plain", ".cpp" => "text/plain", ".log" => "text/plain", ".conf" => "text/plain", ".text" => "text/plain", ".txt" => "text/plain", ".dtd" => "text/xml", ".xml" => "text/xml", ".mpeg" => "video/mpeg", ".mpg" => "video/mpeg", ".mov" => "video/quicktime", ".qt" => "video/quicktime", ".avi" => "video/x-msvideo", ".asf" => "video/x-ms-asf", ".asx" => "video/x-ms-asf", ".wmv" => "video/x-ms-wmv", ".bz2" => "application/x-bzip", ".tbz" => "application/x-bzip-compressed-tar", ".tar.bz2" => "application/x-bzip-compressed-tar", # default mime type "" => "application/octet-stream", ) # Use the "Content-Type" extended attribute to obtain mime type if possible #mimetype.use-xattr = "enable" ## send a different Server: header ## be nice and keep it at lighttpd # server.tag = "lighttpd" #### accesslog module accesslog.filename = "/var/log/lighttpd/access.log" ## deny access the file-extensions # # ~ is for backupfiles from vi, emacs, joe, ... 
# .inc is often used for code includes which should in general not be part # of the document-root url.access-deny = ( "~", ".inc" ) $HTTP["url"] =~ "\.pdf$" { server.range-requests = "disable" } ## # which extensions should not be handle via static-file transfer # # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ######### Options that are good to be but not neccesary to be changed ####### ## bind to port (default: 80) #server.port = 81 ## bind to localhost (default: all interfaces) #server.bind = "127.0.0.1" ## error-handler for status 404 #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## to help the rc.scripts #server.pid-file = "/var/run/lighttpd.pid" ###### virtual hosts ## ## If you want name-based virtual hosting add the next three settings and load ## mod_simple_vhost ## ## document-root = ## virtual-server-root + virtual-server-default-host + virtual-server-docroot ## or ## virtual-server-root + http-host + virtual-server-docroot ## #simple-vhost.server-root = "/srv/www/vhosts/" #simple-vhost.default-host = "www.example.org" #simple-vhost.document-root = "/htdocs/" ## ## Format: <errorfile-prefix><status-code>.html ## -> ..../status-404.html for 'File not found' #server.errorfile-prefix = "/usr/share/lighttpd/errors/status-" #server.errorfile-prefix = "/srv/www/errors/status-" ## virtual directory listings #dir-listing.activate = "enable" ## select encoding for directory listings #dir-listing.encoding = "utf-8" ## enable debugging #debug.log-request-header = "enable" #debug.log-response-header = "enable" #debug.log-request-handling = "enable" #debug.log-file-not-found = "enable" ### only root can use these options # # chroot() to directory (default: no chroot() ) #server.chroot = "/" ## change uid to <uid> (default: don't care) #server.username = "wwwrun" ## change uid to <uid> (default: don't care) #server.groupname = "wwwrun" #### compress module #compress.cache-dir = "/var/cache/lighttpd/compress/" #compress.filetype = ("text/plain", "text/html") #### proxy module ## read proxy.txt for more info #proxy.server = ( ".php" => # ( "localhost" => # ( # "host" => "192.168.0.101", # "port" => 80 # ) # ) # ) #### fastcgi module ## read fastcgi.txt for more info ## for PHP don't forget to set cgi.fix_pathinfo = 1 in the php.ini #fastcgi.server = ( ".php" => # ( "localhost" => # ( # "socket" => "/var/run/lighttpd/php-fastcgi.s ocket", # "bin-path" => "/usr/local/bin/php-cgi" # ) # ) # ) #### CGI module #cgi.assign = ( ".pl" => "/usr/bin/perl", # ".cgi" => "/usr/bin/perl" ) # #### SSL engine #ssl.engine = "enable" #ssl.pemfile = "/etc/ssl/private/lighttpd.pem" #### status module #status.status-url = "/server-status" #status.config-url = "/server-config" #### auth module ## read authentication.txt for more info #auth.backend = "plain" #auth.backend.plain.userfile = "lighttpd.user" #auth.backend.plain.groupfile = "lighttpd.group" #auth.backend.ldap.hostname = "localhost" #auth.backend.ldap.base-dn = "dc=my-domain,dc=com" #auth.backend.ldap.filter = "(uid=$)" #auth.require = ( "/server-status" => # ( # "method" => "digest", # "realm" => "download archiv", # "require" => "user=jan" # ), # "/server-config" => # ( # "method" => "digest", # "realm" => "download archiv", # "require" => "valid-user" # ) # ) #### url handling modules (rewrite, redirect, access) #url.rewrite = ( "^/$" => "/server-status" ) #url.redirect = ( "^/wishlist/(.+)" => "http://www.123.org/$1" ) 
#### both rewrite/redirect support back reference to regex conditional using %n #$HTTP["host"] =~ "^www\.(.*)" { # url.redirect = ( "^/(.*)" => "http://%1/$1" ) #} # # define a pattern for the host url finding # %% => % sign # %0 => domain name + tld # %1 => tld # %2 => domain name without tld # %3 => subdomain 1 name # %4 => subdomain 2 name # #evhost.path-pattern = "/srv/www/vhosts/%3/htdocs/" #### expire module #expire.url = ( "/buggy/" => "access 2 hours", "/asdhas/" => "ac cess plus 1 seconds 2 minutes") #### ssi #ssi.extension = ( ".shtml" ) #### rrdtool #rrdtool.binary = "/usr/bin/rrdtool" #rrdtool.db-name = "/var/lib/lighttpd/lighttpd.rrd" #### setenv #setenv.add-request-header = ( "TRAV_ENV" => "mysql://user@host/db" ) #setenv.add-response-header = ( "X-Secret-Message" => "42" ) ## for mod_trigger_b4_dl # trigger-before-download.gdbm-filename = "/var/lib/lighttpd/trigger.db" # trigger-before-download.memcache-hosts = ( "127.0.0.1:11211" ) # trigger-before-download.trigger-url = "^/trigger/" # trigger-before-download.download-url = "^/download/" # trigger-before-download.deny-url = "http://127.0.0.1/index.html" # trigger-before-download.trigger-timeout = 10 #### variable usage: ## variable name without "." is auto prefixed by "var." and becomes "var.bar" #bar = 1 #var.mystring = "foo" ## integer add #bar += 1 ## string concat, with integer cast as string, result: "www.foo1.com" #server.name = "www." + mystring + var.bar + ".com" ## array merge #index-file.names = (foo + ".php") + index-file.names #index-file.names += (foo + ".php") #### include #include /etc/lighttpd/lighttpd-inc.conf ## same as above if you run: "lighttpd -f /etc/lighttpd/lighttpd.conf" #include "lighttpd-inc.conf" #### include_shell #include_shell "echo var.a=1" ## the above is same as: #var.a=1 When I go to browser and hit 'http://127.0.0.1', I get link not found. Any idea?
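
    A quick diagnostic sketch for the "link not found" symptom above: the stock config serves from /srv/www/htdocs/, which does not exist on a default Ubuntu 9.10 install, so the usual first checks are the ones below (paths are taken from the config; the index file is just an example):

        # Does the document root exist and contain an index file?
        ls -l /srv/www/htdocs/
        sudo mkdir -p /srv/www/htdocs
        echo '<h1>it works</h1>' | sudo tee /srv/www/htdocs/index.html

        # What does lighttpd itself say?
        sudo tail /var/log/lighttpd/error.log

        # Is it actually listening on port 80?
        sudo netstat -tlnp | grep :80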

  • How to set up a virtual host in Ubuntu?

    - by Rade
    I have an app that's accessible via 1.2.3.4/myapp. The app is installed in /var/www/myapp. I've set up a subdomain (apps.mydomain.com) that points to 1.2.3.4. I want the server to serve /var/www/myapp if I type apps.mydomain.com/myapp - how do I do that? I have experience creating virtual hosts (lots of them) locally, but I'm lost because it's now in production and it's a little different. Here's my virtual host config:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName apps.mydomain.com/myapp
            DocumentRoot /var/www/myapp/public

            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride All
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    Any idea why I still see the files instead of being pointed to the document root? Just in case someone might ask, the app is based on the Laravel 4 framework. It's really bad right now because anyone can access the files from the browser.
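
    A hedged sketch of one way to get apps.mydomain.com/myapp served from the Laravel public/ folder: ServerName cannot carry a path component, so the /myapp part has to come from an Alias instead. The directives are standard Apache 2.2-style, but the block below is an illustration, not verified against this particular server:

        <VirtualHost *:80>
            ServerName apps.mydomain.com
            # /myapp maps specifically to the app's public directory;
            # the rest of the vhost can keep serving whatever it does today.
            Alias /myapp /var/www/myapp/public
            <Directory /var/www/myapp/public>
                Options FollowSymLinks
                AllowOverride All        # lets Laravel's .htaccess rewrite to index.php
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>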

  • How can I set up a 404 error page when people access http://ftp.mydomain.com?

    - by Tim B.
    I am a freelance videographer/developer, and part of my job involves transferring large files over FTP to production houses/television stations. While the majority of people in my industry understand the difference between FTP and HTTP, I've experienced several interactions in the past couple months of people who still open Internet Explorer and try to access http://ftp.mydomain.com, receive an error page served by HostGator, and tell me that they cannot access my FTP server. Instead of spending time delivering instructions via e-mail, I'd much prefer to serve up a custom error page in this instance that instructs them how to download and use an FTP client. I tried setting up a sub-domain in Cpanel hoping I could simply drop in an .htaccess file with the error page, but I got this error: ftp.mydomain.com domainadmin-domainexistsglobal I also tried creating a custom error page in PHP which reads the site URL and serves up the custom content only when http://ftp.mydomain.com is accessed. Unfortunately, the error page works for every subdomain except that one. I'm not entirely sure this is even technically possible, which is why I bring it to the good people of StackOverflow to help. Thanks!

  • Time not propagating to machines on Windows domain

    - by rbeier
    We have a two-domain Active Directory forest: ourcompany.com at the root, and prod.ourcompany.com for production servers. Time is propagating properly through the root domain, but servers in the child domain are unable to sync via NTP. So the time on these servers is starting to drift, since they're relying only on the hardware clock.

    When I type "net time" on one of the production servers, I get the following error:

        Could not locate a time-server.
        More help is available by typing NET HELPMSG 3912.

    When I type "w32tm /resync", I get the following:

        Sending resync command to local computer
        The computer did not resync because no time data was available.

    "w32tm /query /source" shows the following:

        Free-running System Clock

    We have three domain controllers in the prod.ourcompany.com subdomain (overkill, but the result of a migration - we haven't gotten rid of one of the old ones yet). To complicate matters, the domain controllers are all virtualized, running on two different physical hosts. But the time on the domain controllers themselves is accurate - the servers that aren't DCs are the ones having problems. Two of the DCs are running Server 2003, including the PDC emulator. The third DC is running Server 2008. (I could move the PDC emulator role to the 2008 machine if that would help.) The non-DC servers are all running Server 2008. All other Active Directory functionality works fine in the production domain - we're only seeing problems with NTP.

    I can manually sync each machine to the time source (the PDC emulator) by doing the following:

        net time \\dc1.prod.ourcompany.com /set /y

    But this is just a one-off, and it doesn't cause automated time syncing to start working. I guess I could create a scheduled task which runs the above command periodically, but I'm hoping there's a better way. Does anyone have any ideas as to why this isn't working, and what we can do to fix it? Thanks for your help, Richard
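
    For reference, a hedged sketch of the usual way to point a domain member back at the domain hierarchy for time; the w32tm switches are standard, though whether this resolves the underlying discovery problem in the child domain isn't verified here:

        REM Run on an affected member server, from an elevated prompt:
        w32tm /config /syncfromflags:domhier /update
        net stop w32time && net start w32time
        w32tm /resync /rediscover

        REM Then confirm the source is now a DC rather than the free-running clock:
        w32tm /query /source
        w32tm /monitor /domain:prod.ourcompany.com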

  • Changing MX records in named zone file

    - by Paul England
    I forgot how all this works. I have a GoDaddy account, using my own DNS and whatnot. I'm having trouble getting my email to work. They said I need to update my MX records. Basically, I have the following (184.168.30.42 is the domain's IP address, obviously):

        gamengai.com.  14400  IN  NS  n1
        gamengai.com.  14400  IN  NS  n2
        ns1            14400  IN  A   184.168.30.42
        ns2            14400  IN  A   184.168.30.42
        gamengai.com.  14400  IN  A   184.168.30.42
        localhost      14400  IN  A   127.0.0.1
        ftp            14400  IN  A   184.168.30.42
        www            14400  IN  A   184.168.30.42
        mail           14400  IN  A   184.168.30.42
        subdomain      14400  IN  A   184.168.30.42
        gamengai.com   14400  IN  MX  10 mail

    Mail doesn't work though... they say to make the following change:

        0   smtp.secureserver.net
        10  mailstore1.secureserver.net

    So should the last line point to mailstore1.secureserver.net instead of mail in the last field? What about the other line? I had this working at one time, but it's totally gotten away from me. It's a virtual dedicated server and their support for this stuff is pretty bad... almost as bad as my admin skills since I went the programmer route.
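
    If GoDaddy's mail service is the target, the change they describe would look roughly like this in the zone file, shown only as an illustration of where the two MX lines go (the existing "mail" MX dropped, and the hostnames double-checked against what the GoDaddy panel actually lists for the account):

        gamengai.com.  14400  IN  MX  0   smtp.secureserver.net.
        gamengai.com.  14400  IN  MX  10  mailstore1.secureserver.net.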

  • haproxy: Is there a way to group acls for greater efficiency?

    - by user41356
    I have some logic in a frontend that routes to different backends based on both the host and the url. Logically it looks like this:

        if hdr(host) ends with 'a.domain.com':
            if url starts with '/dir1/':
                use backend domain.com/dir1/
            elif url starts with '/dir2/':
                use backend domain.com/dir2/
            # ... else if ladder repeats on different dirs
        elif hdr(host) ends with 'b.domain.com':
            # another else if ladder exactly the same as above
        # ... else if ladder repeats like this on different domains

    Is there a way to group acls to avoid having to repeatedly check the domain acl? Obviously there needs to be a use backend statement for each possibility, but I don't want to have to check the domain over and over because it's very inefficient. In other words, I want to avoid this:

        use backend domain.com/url1/ if acl-domain.com and acl-url1
        use backend domain.com/url2/ if acl-domain.com and acl-url2
        use backend domain.com/url3/ if acl-domain.com and acl-url3
        # tons more possibilities below

    because it has to keep checking acl-domain.com. This is particularly an issue because I have specific rules for subdomains such as a.domain.com and b.domain.com, but I want to fall back on the most common case of *.domain.com. That means every single rule that uses a specific subdomain must be checked prior to *.domain.com, which makes it even more inefficient for the common case.
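
    For context, a hedged sketch of how this pattern is normally spelled in an haproxy frontend: each named acl is declared once and the use_backend lines just combine them, with conditions still evaluated per rule. The acl names and backends below are illustrative only:

        frontend www
            bind :80

            acl host_a  hdr_end(host) -i a.domain.com
            acl host_b  hdr_end(host) -i b.domain.com
            acl p_dir1  path_beg /dir1/
            acl p_dir2  path_beg /dir2/

            use_backend be_a_dir1 if host_a p_dir1
            use_backend be_a_dir2 if host_a p_dir2
            use_backend be_b_dir1 if host_b p_dir1
            # fall-through for the common *.domain.com case
            default_backend be_wildcard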
