Search Results

Search found 21060 results on 843 pages for 'content disposition'.

  • puppet master --compile logs errors to stdout

    - by danny
    I see a bug about this that was accepted and then closed a year ago: http://projects.puppetlabs.com/issues/3670 but I'm using puppet 2.7.14 and am getting the same issue. I'm trying to use "puppet solo" (i.e. just running puppet apply on each server to be configured), as I only have 2 or 3 servers in this project and adding another server as a puppetmaster would be completely overkill. Unless I'm mistaken, the best way to apply a node manually to a server is to do:

        puppet master --compile=mynode > catalog.json
        puppet apply --catalog catalog.json

    But the puppet master command outputs a couple of warnings and notices to stdout, mixed in with the desired JSON content. And it uses colored output, so I can't just pipe it through egrep -v '^warning:'.

    EDIT: I guess it's not too big of a deal to use grep - since puppet 2.7 pretty-prints the actual content and the warnings don't ever start with spaces, piping the output through egrep '^( |{|})' works.

    So my questions are basically: Is there a better way than this to apply a puppet node without using a puppetmaster? I can't really find any good references online to using puppet without a puppetmaster, even though that seems like a perfectly reasonable thing to do for a small project. Is there a setting or flag that I'm missing that will get puppet master to stop being an asshole and send its errors to stderr instead of stdout? Or do I really have to turn off color logging, then grep to exclude warning: and notice: lines?
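
    A minimal sketch of the grep-based workaround described above, assuming puppet 2.7's --color option and that warnings and notices never begin with a space, "{" or "}":

        # hedged sketch of the workaround from the question; --color=false disables ANSI codes
        puppet master --compile=mynode --color=false | egrep '^( |{|})' > catalog.json
        puppet apply --catalog catalog.json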

    Read the article

  • Problem configuring Apache/Wordpress on subdomain

    - by friism
    I have two servers (one LAMP, one Windows) and one website with an associated blog. I'm running the main site on the Windows server and the blog on the LAMP server, using Wordpress. The main site is accessed at http://folketsting.dk (it's in Danish -- sorry); the blog is accessed at http://blog.folketsting.dk (this link is bad, read on). The main site works fine. The blog works, except for the frontpage. Example of a working post: http://blog.folketsting.dk/2009/10/09/ftlive/. The frontpage of the blog (http://blog.folketsting.dk) shows HTML from http://folketsting.dk, however (except for the CSS and JavaScript). In fact, any URL other than the frontpage "works" and gets served by Wordpress, e.g. http://blog.folketsting.dk/foo.

    I cannot -- for the life of me -- understand how the LAMP server running http://blog.folketsting.dk manages to serve up content generated by the Windows server running http://folketsting.dk. Looking at the response headers at http://blog.folketsting.dk, it's evident that the content originates from Apache, not IIS. I'm pretty sure it's not a DNS issue, since the problem is evident even when accessing the raw IP, e.g. http://130.226.142.141/ vs. http://130.226.142.141/foo. I'm thinking it's a bad config in Apache... any clues?
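
    One hedged way to narrow this down (hostnames and IP taken from the question) is to compare what the LAMP box answers for different Host headers against its raw IP, which shows whether Apache's first/default virtual host is catching the blog's hostname:

        # hedged diagnostic; values are the ones quoted in the question
        curl -sI -H 'Host: blog.folketsting.dk' http://130.226.142.141/
        curl -sI -H 'Host: folketsting.dk' http://130.226.142.141/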

    Read the article

  • How to test email spam scores with amavis?

    - by CaptSaltyJack
    I'd like a way to test a spam message to see the spam scores that SpamAssassin gives it. The SA db files (bayes_toks, etc.) reside in /var/lib/amavis/.spamassassin. I've been testing emails by doing this:

        sudo su amavis -c 'spamassassin -t msgfile'

    Though this yields some strange results, such as:

        Content analysis details:   (3.7 points, 5.0 required)
         pts rule name              description
        ---- ---------------------- --------------------------------------------------
         3.5 BAYES_99               BODY: Bayes spam probability is 99 to 100% [score: 1.0000]
        -0.0 NO_RELAYS              Informational: message was not relayed via SMTP
         0.0 LONG_TERM_PRICE        BODY: LONG_TERM_PRICE
         0.2 BAYES_999              BODY: Bayes spam probability is 99.9 to 100% [score: 1.0000]
        -0.0 NO_RECEIVED            Informational: message has no Received headers

    0.2 is an awfully low score for BAYES_999! But this is the first time I've used amavis; previously I've always just used spamassassin directly as a content filter in postfix, but apparently running amavis/spamassassin is more efficient. So, with amavis in the picture, how can I run a test on a message to see its spam score breakdown? Another email I ran a test on got this result:

        2.0 BAYES_80               BODY: Bayes spam probability is 80 to 95% [score: 0.8487]

    Doesn't make sense that BAYES_80 can yield a higher score than BAYES_999. Help!
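
    A hedged way to confirm which Bayes database a test run is actually loading (so the scores can be compared with what amavis itself uses) is to run spamassassin with Bayes debugging under the amavis account; -H and -D are standard sudo/spamassassin switches, nothing amavis-specific:

        # hedged check: -H makes ~/.spamassassin resolve to /var/lib/amavis/.spamassassin
        sudo -H -u amavis spamassassin -t -D bayes < msgfile 2>&1 | grep -i bayes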

    Read the article

  • Lighttpd - byte range request doesn't work. can't stream mp4

    - by w-01
    I am attempting to use the latest flowplayer (if it could work it would be pretty awesome btw): http://flowplayer.org. One of the cool things about it is it uses the new HTML5 video element and supports random seeking/playback. In order to do this, you need a byte-range-request-capable server on the backend. Luckily I'm using Lighttpd 1.5.0 on the backend. Unfortunately the current behavior is that when I do a random seek, the video simply restarts itself from the beginning. The docs say: "For HTML5 video you don't have to do any client side configuration. If your server supports byte range requests then seeking should work on the fly. Most servers including Apache, Nginx and Lighttpd support this."

    On my page, using Chrome web developer tools, I can see when the video is requested, the server response headers indicate it is able to accept byte ranges:

        Accept-Ranges: bytes

    When I do a random seek in the player, I can see that byte ranges are requested appropriately in the request header:

        Range: bytes=5668-10785

    I can also verify the moov atom is at the front of the video file. My question here is if there is something else on the lighttpd side I'm missing in order to enable byte-range requests? The reason I ask is because the current behavior suggests that lighttpd simply doesn't understand the byte range request and is just serving the video from the beginning.

    Update: it's clearer to put this here. As per RJS' suggestion I ran a curl command. In the response it looks like lighttpd is working as expected:

        Content-Range: bytes 1602355-18844965/18844966
        Content-Length: 17242611
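
    To complement the update above (which shows the Content-Range but not the status line), a hedged check is to request a range directly and confirm the status is 206 Partial Content rather than 200 with the full file; the URL below is a placeholder for the actual video:

        # hedged: expect "HTTP/1.1 206 Partial Content" plus a matching Content-Range header
        curl -sI -H 'Range: bytes=5668-10785' http://example.com/path/to/video.mp4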

    Read the article

  • Windows 7 DHCP Default Gateway not Overridden by manual Default Gateway

    - by dgwilson
    We have recently installed Windows 7 for student computers. All student computers must be routed through our content filter, which is located at 192.168.0.63. This was done in WinXP by adding a Default Gateway in the network adapter settings under TCP/IP Properties > Advanced > Default Gateway. All teacher computers are routed through the DHCP-assigned Default Gateway of 192.168.0.1. In WinXP the DHCP default gateway was correctly overridden by this manual setting. In Win7 it appears that the DHCP default gateway is retained and the manual one is added to the list, so that there are two, with the DHCP one having the primary metric. I have tried several ways to remove the DHCP default gateway, such as running the "route delete 0.0.0.0 192.168.0.1" command. Doing this from an administrator command prompt works, but it just resets upon reboot. I've tried adding this command to the registry's Run section, but it seems to run as a non-administrator and therefore will not complete successfully. Is there any way to prevent this and force the manual default gateway to override the DHCP one? Or to remove the DHCP-assigned one automatically on boot/login? HELP! We CANNOT allow student computers to connect to the internet without going through the content filter.
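
    A hedged idea to try rather than a confirmed fix (addresses taken from the question): make the filter route persistent with a low metric from an elevated prompt, so it is re-created at boot and can win over the DHCP-assigned gateway:

        rem hedged, untested here: persistent default route via the content filter with metric 1
        route -p add 0.0.0.0 mask 0.0.0.0 192.168.0.63 metric 1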

    Read the article

  • Google Chrome - Issues with download dialogs using BinaryWrite

    - by Mila
    Hello, I have an empty ASP .NET page with the code-behind to support downloading of PDF files. The page is called from a link-like web control that has NavigateUrl set to this page. In short, I am using the following for streaming:

        Response.Buffer = false;
        Response.ClearHeaders();
        Response.ContentType = "application/x-pdf";
        Response.AddHeader("Content-Disposition", "attachment; filename=\"MyPDFFile.pdf\"");
        byte[] binary = (dataReaderRemote[DataPDFFieldName]) as byte[]; // dataReaderRemote[DataPDFFieldName] has previously retrieved data
        if (binary != null)
        {
            MemoryStream memoryStream = new MemoryStream(binary);
            int sizeToWrite = CHUNKSIZE; // CHUNKSIZE = 1024
            for (int i = 0; i < binary.GetUpperBound(0) - 1; i = i + CHUNKSIZE)
            {
                if (!Response.IsClientConnected) return;
                if (i + CHUNKSIZE >= binary.Length)
                    sizeToWrite = binary.Length - i;
                byte[] chunk = new byte[sizeToWrite];
                memoryStream.Read(chunk, 0, sizeToWrite);
                Response.BinaryWrite(chunk);
                Response.Flush();
            }
        }
        Response.Close();

    IE as well as Firefox bring up the download prompt window asking whether you wish to open or save the file, while the user remains on the same page containing the link. However, Google Chrome opens a new blank tab and downloads the file automatically. Is there any way to prevent Chrome from opening the extra blank (and therefore useless) tab? I am using Google Chrome version 5.0.375.55 (Official Build 47796) on Windows XP. Thanks in advance! Mila

    Read the article

  • Issues with sustained traffic with PFSense

    - by Farseeker
    Last week we had to replace our PFSense firewall because it had a catastrophic hardware failure. All but one of the NICs were taken out of the old server and put into the new one. The one NIC that was not moved was the LAN NIC, as this is on-board. The other NICs are all WAN connections and they must all be present (i.e. I can't disable one just for the sake of testing). After re-installing PFSense and restoring our backup of the configuration, everything came back online just fine; however, on the new hardware any download that takes longer than about 10 seconds just times out in the middle.

    Example 1: Downloading from Microsoft.com goes at about 900k/sec and times out after about 10 seconds (thus, just under 10Mb of content).
    Example 2: Downloading from cnet.com goes at about 300k/sec and times out after about 10 seconds (thus, about 3Mb of content).

    By "times out", I mean that the download just stops, and you have to pause/resume to get the next part done, rinse and repeat until the download is complete. However it's not consistent: sometimes it's 10 seconds, sometimes it's 4 seconds, and sometimes you can't even load a heavy HTML page because the page never finishes. I assume this is most likely because PFSense does not like the onboard NIC, as this is the primary difference between the two servers. It's recognised as NFE0, there's no room in the server for any more NICs, and I don't have any dual-port NICs handy to experiment with a different LAN connection. I've never had to troubleshoot this sort of issue before. Can anyone give me some pointers about where to start? Linux is not my forte so please be kind!
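
    A hedged place to start from the pfSense shell (the interface name nfe0 comes from the question; these are diagnostics plus a common workaround for flaky onboard NICs, not a confirmed fix): check the interface for errors and drops, then try disabling hardware offloads, which some nfe(4) adapters handle badly:

        # diagnostics
        netstat -i
        ifconfig nfe0
        # hedged workaround: turn off checksum/TSO offloads on the onboard NIC
        ifconfig nfe0 -rxcsum -txcsum -tso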

    Read the article

  • Hiding a Website from Search Engine Bots and Viewers by Disabling Default VirtualHost

    - by Basel Shishani
    When staging a website on a remote VPS, we would like it to be accessible to team members only, and we would also like to keep the search engine bots off until the site is finalized. Access control by host, whether in iptables or Apache, is not desirable, as accessing hosts can vary. After some reading in Apache config and other SF postings, I settled on the following design that relies on restricting access to only through specific domain names. The default virtual host would be disabled in Apache config as follows - relying on Apache's behavior of using the first virtual host as the site default:

        <VirtualHost *:80>
            # Anything matching this should be silently ignored.
        </VirtualHost>

        <VirtualHost *:80>
            ServerName secretsiteone.com
            DocumentRoot /var/www/secretsiteone.com
        </VirtualHost>

        <VirtualHost *:80>
            ServerName secretsitetwo.com
            ...
        </VirtualHost>

    Then each team member can add the domain names in their local /etc/hosts:

        xx.xx.xx.xx secrethostone.com

    My question is: is the above technique good enough to achieve the above-said goals, especially restricting SE bots, or is it possible that bots would work around that? Note: I understand that mod_rewrite rules can be used to achieve a similar effect, as discussed here: How to disable default VirtualHost in apache2?, so the same question would apply to that technique too. Also please note: the content is not highly secretive - the idea is not to devise something that is hack proof, so we are not concerned about traffic interception or the like. The idea is to keep competitors and casual surfers from viewing the content before it's released, and to prevent SE bots from indexing it.

    Read the article

  • Wordpress Directory Permission to allow uploads, plugin folders, etc

    - by user1015958
    I have a WordPress pre-made site which was developed on my local machine, and I uploaded it to a VPS running Debian 6, using nginx, MySQL and PHP. I was following this guide:

    1) Create an unprivileged user, this could be say 'karl' or whatever, and make them belong to the www-data group, so that if I were to log in as karl and create a web root in say /home/karl/www/, all the files will be owned by karl:www-data
    2) Set up nginx as the user www-data in nginx.conf
    3) Set up PHP-FPM to run as www-data
    4) Place your files in /home/karl/www/[domain name maybe]/public_html/, upload as 'karl' so you don't have to chown everything again.

    When I type ls -l inside public_html/ it shows that all the files inside are owned by karl:karl, but the public_html directory is owned by karl:www-data. I chmod 0755 the folder wp-content but I still get the error (see the sketch below for the permission scheme I'm considering instead):

        ERROR: Path ../wp-content/connection_images does not seem to be writeable.

    I know I shouldn't set it to 777 due to security reasons, so how should I set proper permissions? And what should I set to allow my users to upload, write posts and edit articles? Sorry for my English by the way.
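
    A hedged sketch of the usual group-writable alternative to 777 (paths follow the layout in the guide above; "yourdomain" is a placeholder): give www-data group ownership and group write access on wp-content only:

        # hedged: group-writable wp-content instead of chmod 777
        sudo chown -R karl:www-data /home/karl/www/yourdomain/public_html/wp-content
        sudo find /home/karl/www/yourdomain/public_html/wp-content -type d -exec chmod 775 {} \;
        sudo find /home/karl/www/yourdomain/public_html/wp-content -type f -exec chmod 664 {} \;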

    Read the article

  • moving plone or recovering data

    - by Atilla Filiz
    I had a simple Plone site running on my Ubuntu 8.10 box, set up by following the instructions on https://help.ubuntu.com/community/forum/server/Plone. Now the computer got an Ubuntu 9.04 install from scratch, and I want to get the site up and running. As far as I understand, several Python libraries from 9.04 are not compatible with Plone. I tried installing plone3-site from the Ubuntu repos, and I got this:

        $ bin/instance start
        Traceback (most recent call last):
          File "bin/instance", line 103, in ?
            import plone.recipe.zope2instance.ctl
          File "/media/Robocup2009/robocup2009/eggs/plone.recipe.zope2instance-2.7-py2.4.egg/plone/recipe/zope2instance/__init__.py", line 16, in ?
            import zc.buildout
          File "/media/Robocup2009/robocup2009/eggs/zc.recipe.egg-1.1.0-py2.4.egg/zc/__init__.py", line 1, in ?
            __import__('pkg_resources').declare_namespace(__name__)
        ImportError: No module named pkg_resources

    I have python-pkg-resources installed. I actually don't care about Plone, I just found it easy to use. My site is just a photo and video archive for several users, with low traffic but several gigabytes of data. What would you suggest? Is it easier to get Plone working on my box and start my instance (how?), or to recover the content from the instance folder (how?) and set up another content management system (which?)
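
    Whichever route is taken, a hedged first step is to locate and back up the ZODB that holds the site content; in a buildout-based instance like the one in the traceback it normally lives under var/filestorage, though the exact path below is inferred, not confirmed:

        # hedged: find and back up the Data.fs before experimenting further
        find /media/Robocup2009/robocup2009 -name 'Data.fs*'
        cp -a /media/Robocup2009/robocup2009/var/filestorage/Data.fs /path/to/backup/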

    Read the article

  • Is something infecting my Google searches?

    - by hippietrail
    I started doing some experimentation toward making a browser userscript for Google searches, and when opening the JavaScript console I noticed something that strikes me as very fishy:

        The page at https://www.google.com.au/search?oq=XYZ&sourceid=chrome&ie=UTF-8&q=XYZ displayed insecure content from http://50.116.62.47/js/chromeServerV45.js.
        The page at about:blank displayed insecure content from http://96.126.107.154/amz/google.php?callback=a&q=XYZ&country=US.

    (XYZ is a placeholder for whatever the search term really was.) Is it likely that I've picked up something like a drive-by browser infection? I've tried all kinds of searches for these URLs and other keywords, but I've had no luck finding anything conclusive about whether they're malicious or what they are:

        50.116.62.47
        chromeServerV45.js
        96.126.107.154
        amz/google.php

    The only extensions I have installed are either widely used or written by myself. But something else is strange and I'm not sure if it's just a coincidence. I updated my Windows Chrome browser today to version 23.0.1271.64 m, and now my Extensions tab as well as my Settings tab are blank, so I can't try disabling my extensions. Here's some discussion I've been able to find so far, but not really understand and make sense of: for 96.126.107.154: "anomalous-javascript-pt2"

    Read the article

  • Burn bootable iso image to USB stick using dd: Won't boot (despite USB first in boot sequence)

    - by Nicolas Raoul
    I have installed Ubuntu on a Lenovo Thinkpad R500 2732, and I must update the BIOS. On the Lenovo website, I am offered this: "BIOS Update Bootable CD for Windows 7 (32-bit, 64-bit), Vista (32-bit, 64-bit), XP - ThinkPad R500". I guess a bootable CD that would do a BIOS update is indeed what I need. (I'm still wondering why it says "Windows" though... if it is bootable, should it not be OS-agnostic?) Not wanting to waste a CD, I copied the image to my USB stick:

        sudo dd if=/home/nico/7yuj40uc.iso of=/dev/sdb1 bs=1M

    and rebooted, after making sure USB is first in the boot sequence. PROBLEM: It does not boot. Did I forget one step? Details about the ISO image: ls -lh shows 7yuj40uc.iso is 25M, and file reports:

        /home/nico/7yuj40uc.iso: ISO 9660 CD-ROM filesystem data '7YUJ40US ' (bootable)

    (Scroll to the right: it says "bootable".) UNetbootin does not work because it is not a Linux image. Some people on the Internet advise to copy the content of the ISO and do other steps. This ISO has zero ISO content, so that would not work. If I mount the ISO, I can see it contains zero files.
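
    One hedged observation about the dd invocation above: it writes the image to the first partition (/dev/sdb1) rather than to the whole device, and writing to the bare device is what usually lets the BIOS see a bootable image; whether this particular BIOS-update ISO boots from USB at all is a separate question:

        # hedged, untested for this specific image: write to the whole stick, not a partition
        sudo dd if=/home/nico/7yuj40uc.iso of=/dev/sdb bs=1M conv=fsync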

    Read the article

  • Using AddEncoding x-gzip .gz without actual files

    - by STATUS_ACCESS_DENIED
    With Apache (2.2 and later), how can I achieve the following? I want to transparently compress, using GZip encoding (not plain Deflate), the output when a certain file is queried with its name plus the extension .gz, where the .gz version doesn't physically exist on disk. So let's say I have a file named /path/foo.bar and no file foo.bar.gz in the folder to which the URI /path maps; how can I get Apache to serve the contents of /path/foo.bar but with AddEncoding x-gzip ... applied to the (non-existing) file? The rewrite part appears to be easy, but the problem is how to apply the encoding to a non-existent item. The other way around also seems to be simple, as long as the client supports the encoding. Is the only solution really a script that does this on the fly? I'm aware of mod_deflate and mod_gzip and it is not what I'm looking for - at least not alone. In particular I need an actual GZIP file and not just a deflated stream. Now I was thinking of using mod_ext_filter, but couldn't bridge the gap between rewriting the name of the (non-existent) file.gz to file on one side and the LocationMatch on the other. Here's what I have:

        RewriteRule ^(.*?\.ext)\.gz$ $1 [L]

        ExtFilterDefine gzip mode=output cmd="/bin/gzip"
        <LocationMatch "/my-files/special-path/.*?\.ext\.gz">
            AddType application/octet-stream .ext.gz
            SetOutputFilter gzip
            Header set Content-Encoding gzip
        </LocationMatch>

    Note that the Content-Encoding header isn't really needed by the clients in this case. They expect to see actual GZIP files, but I want to do this on the fly without caching (this is a test scenario).

    Read the article

  • How Do I Secure WordPress Blogs Against Elemento_pcx Exploit?

    - by Volomike
    I have a client who has several WordPress 2.9.2 blogs that he hosts. They are getting a deface kind of hack with the Elemento_pcx exploit somehow. It drops these files in the root folder of the blog:

        -rw-r--r-- 1 userx userx 1459 Apr 16 04:25 default.htm
        -rw-r--r-- 1 userx userx 1459 Apr 16 04:25 default.php
        -rw-r--r-- 1 userx userx 1459 Apr 16 04:25 index.asp
        -rw-r--r-- 1 userx userx 1459 Apr 16 04:25 index.aspx
        -rw-r--r-- 1 userx userx 1459 Apr 16 04:25 index.htm
        -rw-r--r-- 1 userx userx 1459 Apr 16 04:25 index.html
        -rwxr-xr-x 1 userx userx 1459 Apr 16 04:25 index.php*

    It overwrites index.php. A keyword inside each file is "Elemento_pcx". It shows a white fist on a black background and the phrase "HACKED" in bold letters above it. We cannot determine how it gets in to do what it does. The wp-admin password isn't hard, but it's also not very easy either. I'll change it up a little to show you what the password sort of looks like: wviking10. Do you think it's using an engine to crack the password? If so, how come our server logs aren't flooded with wp-admin requests as it runs down a random password list? The wp-content folder has no changes inside it, but is run as chmod 777 because wp-cache required it. Also, the wp-content/cache folder is run as chmod 777 too.
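
    A hedged triage step (the blog root path and the dates are placeholders based on the listing above): look for anything else written around the same time as the dropped files, and for PHP files lurking under the world-writable wp-content:

        # hedged triage, adjust path and dates to the actual install
        find /path/to/blog -newermt '2010-04-16 00:00' ! -newermt '2010-04-17 00:00' -ls
        find /path/to/blog/wp-content -name '*.php' -mtime -30 -ls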

    Read the article

  • Excel file growing huge (>150 MB)

    - by Josh
    There is one particular Excel file that is used by a number of employees at my company. It is edited from both Excel 2003 and 2007, with the "Sharing" feature turned on to allow multiple writers at once. The file has a decent amount of data on several sheets with some basic formatting, and used to be about 6MB, which seems reasonable for its content. But after a few weeks of editing, the file grew to 10, then 20 MB, and eventually skyrocketed to more than 150 MB, even though it still has about the same amount of data as before. It now takes 5-10 minutes to open it, and that much time again to save it. The first time this happened, I copied the content of each sheet into a new, blank workbook, and saved the new workbook; this brought it back down to about 6MB. Now, it has blown up again.

    The workbook uses the "Data Validation" feature to limit the values in certain columns to the contents of a few named ranges. Copying all the data into a new workbook means re-setting up all the data validation, which is a pain and not something that we want to do every month. As a troubleshooting step, I tried saving the file in "XML Spreadsheet 2003" format, hoping to get some insight into what was being stored. Sure enough, the file was almost a gig, and almost all of the 10 million lines look like this:

        <NamedCell ss:Name="Z_21D5114F_E50C_46AC_AA4F_C3FF540C717F_.wvu.FilterData"/>
        <NamedCell ss:Name="Z_1EE2BA5E_3011_4F9A_8ACD_E58835250FC4_.wvu.FilterData"/>
        <NamedCell ss:Name="Z_1E3BDCEA_6A72_4ECC_BF4F_7B03CC66181E_.wvu.FilterData"/>

    I've seen a few VBScripts online to manage and enumerate named cells that are hidden in Excel's built-in interface, though I wonder how they'd handle my 10 million named cells. What I really need, though, is an understanding of why this keeps happening. What actions in Excel could be causing this?
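
    As a hedged illustration of the clean-up those scripts perform (untested against this workbook, so run it on a copy first): a short VBA macro can walk the workbook's Names collection and delete the leaked .wvu.FilterData entries:

        ' hedged sketch: delete hidden custom-view filter names that bloat the workbook
        Sub DeleteFilterDataNames()
            Dim n As Name
            For Each n In ActiveWorkbook.Names
                If n.Name Like "*.wvu.FilterData" Then n.Delete
            Next n
        End Sub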

    Read the article

  • CSS gradient not rendering in Windows Phone 8 WebBrowser Control

    - by SRSHawk
    I am facing an issue where the CSS3 background is not rendered in the WebBrowser control on Windows Phone 8, but the same HTML, when opened in the browser on Windows Phone 8, renders the gradient. The HTML I am using is:

        <html>
        <head>
            <meta name="viewport" content="width=320, user-scalable=no, minimum-scale=1, maximum-scale=1"/>
        </head>
        <body style="margin:0px;overflow:hidden;">
            <div id="im_c" style="height:48px;width:100%25; background: -ms-linear-gradient( bottom, #432100 30%, #00AAAA 70%);">
                <div style="margin:0 auto;width:320px;">
                    Test
                </div>
            </div>
            <style> body {margin:0px} </style>
        </body>

    In Windows Phone 8, I use the HTML as below:

        WebBrowser WebView = new WebBrowser();
        WebView.Height = 100;
        WebView.Width = 400;
        WebView.NavigateToString(@"<html><head><meta name=""viewport"" content=""width=320, user-scalable=no, minimum-scale=1, maximum-scale=1""/></head><body style=""margin:0px;overflow:hidden;""> <div id=""im_c"" style=""height:48px;width:100%25; background: -ms-linear-gradient( bottom, #432100 30%, #00AAAA 70%);""> <div style=""margin:0 auto;width:320px;"">Test</div></div> <style> body {margin:0px} </style> </body></html>");

    In this case, the CSS gradient is not visible. Am I missing something?

    Read the article

  • Obey server_name in Nginx

    - by pascal
    I want nginx/0.7.6 (on Debian, i.e. with config files in /etc/nginx/sites-enabled/) to serve a site on exactly one subdomain (indicated by the Host header) and nothing on all others. But it staunchly ignores my server_name settings?! In sites-enabled/sub.domain:

        server {
            listen 80;
            server_name sub.domain;
            location / { … }
        }

    Adding a sites-enabled/00-default with

        server {
            listen 80;
            return 444;
        }

    does nothing (I guess it just matches requests with no Host?).

        server {
            listen 80;
            server_name *.domain;
            return 444;
        }

    does prevent Host: domain requests from giving results for Host: sub.domain, but still treats Host: arbitrary as Host: sub.domain. The, to my eyes, obvious solution isn't accepted:

        server {
            listen 80;
            server_name *;
            return 444;
        }

    Neither is

        server {
            listen 80 default_server;
            return 444;
        }

    Since order seems to be important: renaming 00-default to zz-default, which, if sorted, places it last, doesn't change anything. But Debian's main config just includes *, so I guess they could be included in some arbitrary file-system-defined order? The following returns no content when Host: is not sub.domain, as expected, but still returns the content when Host is completely missing. I thought the first block should handle exactly that case!? Is it because it's the first block?

        server {
            listen 80;
            return 444;
        }
        server {
            listen 80;
            server_name ~^.*$;
            return 444;
        }
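
    A hedged sketch of the catch-all that 0.7.x normally accepts: on nginx 0.7 the listen parameter is spelled "default" rather than "default_server", and "_" is the conventional dummy server_name; both details are assumptions to verify against the installed version:

        # hedged: catch-all server block for nginx 0.7.x syntax
        server {
            listen 80 default;
            server_name _;
            return 444;
        }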

    Read the article

  • mod_deflate doesn't work [closed]

    - by kikio
    I want to gzip my static files, so I put this in .htaccess:

        <IfModule mod_deflate.c>
            AddOutputFilterByType DEFLATE text/text text/html text/plain text/xml text/css application/x-javascript application/javascript
        </IfModule>

    and looked for mod_deflate in the phpinfo() output Loaded Modules section, and I found it. But when I track server responses with Firebug, no gzipped file can be found:

        HTTP/1.1 200 OK
        Date: Sat, 08 Sep 2012 21:41:21 GMT
        Last-Modified: Sat, 08 Sep 2012 21:26:04 GMT
        Accept-Ranges: bytes
        Cache-Control: max-age=604800
        Expires: Sat, 15 Sep 2012 21:41:21 GMT
        Vary: Accept-Encoding
        Keep-Alive: timeout=3, max=50
        Connection: Keep-Alive
        Content-Type: text/css
        Content-Length: 18206

    What's the problem? I'm sure I have mod_deflate enabled (according to PHP's apache_get_modules()).

    UPDATE: the request headers:

        GET /d/jquery-ui.css HTTP/1.1
        Host: 127.0.0.1
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip, deflate
        DNT: 1
        Connection: keep-alive
        Pragma: no-cache
        Cache-Control: no-cache
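
    A hedged way to take Firebug out of the equation (URL taken from the UPDATE above): request the file from the command line with an explicit Accept-Encoding header and check whether a Content-Encoding: gzip header comes back:

        # hedged diagnostic against the same URL as in the question
        curl -sI -H 'Accept-Encoding: gzip,deflate' http://127.0.0.1/d/jquery-ui.css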

    Read the article

  • Configure nginx for multiple node.js apps with own domains

    - by udo
    I have a node webapp up and running with my nginx on Debian Squeeze. Now I want to add another one with its own domain, but when I do so, only the first app is served, and even if I go to the second domain I simply get redirected to the first webapp. Hope you see what I did wrong here:

    example1.conf:

        upstream example1.com {
            server 127.0.0.1:3000;
        }

        server {
            listen 80;
            server_name www.example1.com;
            rewrite ^/(.*) http://example1.com/$1 permanent;
        }

        # the nginx server instance
        server {
            listen 80;
            server_name example1.com;
            access_log /var/log/nginx/example1.com/access.log;

            # pass the request to the node.js server with the correct headers
            # and much more can be added, see nginx config options
            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_set_header X-NginX-Proxy true;
                proxy_pass http://example1.com;
                proxy_redirect off;
            }
        }

    example2.conf:

        upstream example2.com {
            server 127.0.0.1:1111;
        }

        server {
            listen 80;
            server_name www.example2.com;
            rewrite ^/(.*) http://example2.com/$1 permanent;
        }

        # the nginx server instance
        server {
            listen 80;
            server_name example2.com;
            access_log /var/log/nginx/example2.com/access.log;

            # pass the request to the node.js server with the correct headers
            # and much more can be added, see nginx config options
            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_set_header X-NginX-Proxy true;
                proxy_pass http://example2.com;
                proxy_redirect off;
            }
        }

    curl simply does this:

        zazzl:Desktop udo$ curl -I http://example2.com/
        HTTP/1.1 301 Moved Permanently
        Server: nginx/1.2.2
        Date: Sat, 04 Aug 2012 13:46:30 GMT
        Content-Type: text/html
        Content-Length: 184
        Connection: keep-alive
        Location: http://example1.com/

    Thanks :)

    Read the article

  • SharePoint Search: processing filenames containing underscores

    - by Todd Owen
    We use SharePoint Server 2007 to allow employees to search network file shares, but it seems that underscores in filenames are not treated as word separators when indexing the files. As a result, a search for chocolate will match "chocolate milkshake.doc" but not match "chocolate_cake.doc". (Of course, this is a simplified example; in practice the content of the second file might include the word "chocolate" and match on that instead of the filename. But the problem itself is real enough, because a common scenario in a corporate environment is that a user knows the partial name of the file they are looking for and expects to see matching filenames at the top of the search results. And using underscores in filenames is a widely used convention within our company.) Underscores are not treated as word separators in the file content either, although this is less of a concern for us. The root cause of this problem is possibly related to the behaviour of the word breakers that SharePoint uses (i.e. the language-specific DLLs that implement the IWordBreaker interface), although I haven't confirmed this yet. Does anyone know of a workaround for this issue? I have tested with Search Server 2008 Express too (which is based on the same technology), and it is also affected. I do not know whether the problem is fixed in SharePoint 2010 or not.

    Read the article

  • apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName

    - by user35402
    I keep getting this warning when I (re)start Apache:

        Restarting web server apache2
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        ... waiting
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        [ OK ]

    This is the content of my /etc/hosts file:

        #127.0.0.1 hpdtp-ubuntu910
        #testproject.localhost localhost.localdomain localhost
        #127.0.1.1 hpdtp-ubuntu910
        127.0.0.1 localhost
        127.0.0.1 testproject.localhost
        127.0.1.1 hpdtp-ubuntu910

        # The following lines are desirable for IPv6 capable hosts
        ::1 localhost ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters
        ff02::3 ip6-allhosts

    This is the content of my /etc/apache2/sites-enabled/000-default file:

        <VirtualHost *:80>
            ServerName testproject.localhost
            DocumentRoot "/home/morpheous/work/websites/testproject/web"
            DirectoryIndex index.php
            <Directory "/home/morpheous/work/websites/testproject/web">
                AllowOverride All
                Allow from All
            </Directory>

            Alias /sf /lib/vendor/symfony/symfony-1.3.2/data/web/sf
            <Directory "/lib/vendor/symfony/symfony-1.3.2/data/web/sf">
                AllowOverride All
                Allow from All
            </Directory>
        </VirtualHost>

    When I go to http://testproject.localhost, I get a blank page. Can anyone spot what I am doing wrong?
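
    As a hedged aside on the FQDN warning itself (separate from the blank page), the usual Debian/Ubuntu remedy is to give Apache a global ServerName; on an Apache 2.2 layout like this one it can live in a conf.d snippet, whose file name below is just a convention:

        # hedged: silence the FQDN warning with a global ServerName
        echo "ServerName localhost" | sudo tee /etc/apache2/conf.d/servername
        sudo /etc/init.d/apache2 reload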

    Read the article

  • location of index.html CentOS 6

    - by user2118559
    Based on this tutorial, http://www.servermom.com/how-to-add-new-site-into-your-apache-based-centos-server/454/, I installed an Apache-based CentOS server. I use putty.exe as my editor. In vi /etc/httpd/conf/httpd.conf, at the very bottom, I modified it to:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /var/www/fikitipis.com/public_html
            ServerName www.fikitipis.com
            ServerAlias fikitipis.com
            ErrorLog /var/www/fikitipis.com/error.log
            CustomLog /var/www/fikitipis.com/requests.log common
        </VirtualHost>

    So I expect the index to be at /var/www/fikitipis.com/public_html. When I type the IP address of the server in a browser, I see the "Apache 2 Test Page powered by CentOS" page, which says "You may now add content to the directory /var/www/html/" and so on. Then:

        [root@vps ~]# ls /var/www/
        cgi-bin domain.com error fikitipis.com html icons

    Checking the contents of the directories, ls /var/www/domain.com/public_html, ls /var/www/fikitipis.com/public_html and /var/www/html/ are all empty. Where is index.html? I did touch /var/www/fikitipis.com/public_html/index1.html, then vi /var/www/fikitipis.com/public_html/index1.html, typed a, wrote some text in the file, then Escape and Shift+ZZ. Then in the browser http://111.111.11.111/index1.html shows what I had written. So far everything seems to work.
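
    A hedged note on the question itself: a fresh DocumentRoot has no index.html until one is created, and the CentOS test page is only shown while the directory is empty; creating one the same way as index1.html should be enough (path taken from the question):

        # hedged: create the missing index page in the vhost's DocumentRoot
        echo '<h1>It works</h1>' > /var/www/fikitipis.com/public_html/index.html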

    Read the article

  • Invalid Parameter on node puppet

    - by chandank
    I am getting an error of

        err: Could not retrieve catalog from remote server: Error 400 on SERVER: Invalid parameter port at /etc/puppet/manifests/nodes/node.pp:652 on node test-puppet

    My puppet class (line 652 of node.pp):

        node 'test-puppet' {
            class { 'syslog_ng':
                host    => "newhost",
                ip      => "192.168.1.10",
                port    => "1999",
                logfile => "/var/log/test.log",
            }
        }

    On the module side:

        class syslog_ng::config ($host, $ip, $port, $logfile) {
            file { '/etc/syslog-ng/syslog-ng.conf':
                ensure  => present,
                owner   => 'root',
                group   => 'root',
                content => template('syslog-ng/syslog-ng.conf.erb'),
                notify  => Service['syslog-ng'],
                require => Class['syslog_ng::install'],
            }

            file { "/etc/syslog-ng/conf/${host}.conf":
                ensure  => present,
                owner   => 'root',
                group   => 'root',
                notify  => Service['syslog-ng'],
                content => template("syslog-ng/${host}.conf.erb"),
                require => Class['syslog_ng::install'],
            }
        }

    I think I am doing it as per the puppet documentation.
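
    A hedged reading of the error rather than a verified fix (it depends on how the rest of the module is wired): the parameters shown belong to syslog_ng::config, not to class syslog_ng, so the node would need to declare the class that actually accepts them, or syslog_ng would need to take them and pass them through, e.g.:

        # hedged guess: declare the parameterised class directly
        node 'test-puppet' {
            class { 'syslog_ng::config':
                host    => 'newhost',
                ip      => '192.168.1.10',
                port    => '1999',
                logfile => '/var/log/test.log',
            }
        }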

    Read the article

  • What GPT partition type to use for protecting DRBD metadata?

    - by Carsten Scholtes
    I'm planning to install a DRBD device on a (replicated) disk with two GPT partitions. DRBD requires some space for (preferably "internal") metadata at the end of the underlying device. I'm hesitant to leave this space unpartitioned (or unformatted in a normal partition). I'd like to reserve an extra partition at the end of the underlying disk device for the metadata. (If I understand correctly, DRBD would not care about the partition or its type and could then use that space exclusively.) My question is: which would be a suitable GPT partition type for such a metadata partition?

    It should not be a type interpreted while booting (such as EF00 EFI System).
    It should not be a type prone to be modified accidentally by the booted OS (such as 8200 Linux swap, 8e00 Linux LVM, fd00 Linux raid). (The booted OS will be Ubuntu Linux 12.04.3.)
    It should not be a type indicating a normal filesystem (such as 0c01 or 8301), prone to be formatted correspondingly.
    It should not be a type requiring any special content in the partition (since the content is to be handled exclusively by DRBD).
    It should express the purpose of being reserved for something special (namely DRBD).

    (The types I listed are as provided by gdisk. I'm thinking about using some type unlikely to be used by the OS (maybe bf0a Solaris Reserved 4) or an invented(?) type such as fd01 (close to fd00 Linux raid…). Would something like this be suitable, too dangerous, or even possible?)
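
    For reference, a hedged sgdisk invocation for carving out such a partition at the end of the disk; the partition number, size, type code and disk name are all placeholders, and the type code is just one of the candidates discussed above, not a recommendation:

        # hedged sketch: small reserved partition at the end of the disk, tagged bf0a
        sgdisk --new=3:-128M:0 --typecode=3:bf0a --change-name=3:"drbd meta" /dev/sdX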

    Read the article
