Search Results

Search found 30301 results on 1213 pages for 'content db'.


  • Apache serving empty gzip with assets produced by Rails Asset Pipeline

    - by PizzaPill
     I followed the steps described in the blog post "The Asset Pipeline, from development to production" and tweaked them to my environment. The two important files are:

     /etc/apache/site-available/example.com

       <VirtualHost *:80>
         ServerName example.com
         ServerAlias www.example.com
         DocumentRoot "/var/www/sites/example.com/current/public"
         ErrorLog "/var/log/apache2/example.com-error_log"
         CustomLog "/var/log/apache2/example.com-access_log" common

         <Directory "/var/www/sites/example.com/current/public">
           Options All
           AllowOverride All
           Order allow,deny
           Allow from all
         </Directory>

         <Directory "/var/www/sites/example.com/current/public/assets">
           AllowOverride All
         </Directory>

         <LocationMatch "^/assets/.*$">
           Header unset Last-Modified
           Header unset ETag
           FileETag none
           ExpiresActive On
           ExpiresDefault "access plus 1 year"
         </LocationMatch>

         RewriteEngine On
         # Remove the www
         RewriteCond %{HTTP_HOST} ^www.example.com$ [NC]
         RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]
       </VirtualHost>

     /var/www/sites/example.com/shared/assets/.htaccess

       RewriteEngine on
       RewriteCond %{HTTP:Accept-Encoding} \b(x-)?gzip\b
       RewriteCond %{REQUEST_FILENAME}.gz -s
       RewriteRule ^(.+) $1.gz [L]

       <FilesMatch \.css\.gz$>
         ForceType text/css
         Header set Content-Encoding gzip
       </FilesMatch>

       <FilesMatch \.js\.gz$>
         ForceType text/javascript
         Header set Content-Encoding gzip
       </FilesMatch>

     But Apache seems to send empty gzip files: the test site loses all styles and Firebug doesn't find any content for the CSS files, although if I call the asset path directly I get some gibberish that looks like binary data. If I move the .htaccess file, everything is back to normal. How could I find out where/what went wrong, or do you have any suggestions what error I made?

       > apache2 -v
       Server version: Apache/2.2.14 (Ubuntu)
       Server built:   Mar 5 2012 16:42:17

       > uname -a
       Linux node0 2.6.18-028stab094.3 #1 SMP Thu Sep 22 12:47:37 MSD 2011 x86_64 GNU/Linux
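
     One way to see what Apache is actually returning for those assets is to ask for them the way a browser would and compare against the precompressed file on disk. A sketch, assuming curl and zcat are available and that an asset named application.css exists (adjust names and host to the real site):

       # Advertise gzip support and inspect only the response headers.
       curl -sI -H "Accept-Encoding: gzip" http://example.com/assets/application.css

       # Fetch the body, let curl decompress it, and confirm it is real CSS.
       curl -s --compressed http://example.com/assets/application.css | head

       # Compare with the precompressed file on disk; if the served body is
       # double-gzipped (e.g. mod_deflate re-compressing the already-gzipped
       # .gz file), the two outputs will not match.
       zcat /var/www/sites/example.com/current/public/assets/application.css.gz | head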

    Read the article

  • How to create a hash or yml from the top-level attribute values of a node?

    - by Sarah Haskins
     I have a chef recipe where I want to take all of the attributes under node['cfn']['environment'] and write them to a yml file. I could do something like this (it works fine):

       content = {
         "environment_class" => node['cfn']['environment']['environment_class'],
         "node_id"           => node['cfn']['environment']['node_id'],
         "reporting_prefix"  => node['cfn']['environment']['reporting_prefix'],
         "cfn_signal_url"    => node['cfn']['environment']['signal_url']
       }
       yml_string = YAML::dump(content)

       file "/etc/configuration/environment/platform.yml" do
         mode 0644
         action :create
         content "#{yml_string}"
       end

     But I don't like that I have to explicitly list out the names of the attributes. If I later add a new attribute, it would be nice if it were automatically included in the written-out yml file. So I tried something like this:

       yml_string = node['cfn']['environment'].to_yaml

     But because the node is actually a Mash, I get a platform.yml file like this (it contains a lot of unexpected nesting that I don't want):

       --- !ruby/object:Chef::Node::Attribute
       normal:
         tags: []
         cfn:
           environment: &25793640
             reporting_prefix: Platform2
             signal_url: https://cloudformation-waitcondition-us-east-1.s3.amazonaws.com/...
             environment_class: Dev
             node_id: i-908adf9
       ...

     But what I want is this:

       ---
       reporting_prefix: Platform2
       signal_url: https://cloudformation-waitcondition-us-east-1.s3.amazonaws.com/...
       environment_class: Dev
       node_id: i-908adf9

     How can I achieve the desired yml output without explicitly listing the attributes by name?
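
     One way that usually works is to convert the attribute Mash to a plain Ruby Hash before dumping, so YAML never sees the Chef::Node::Attribute internals. A minimal sketch, assuming the attribute collection responds to to_hash (it does in the Chef versions I'm aware of):

       require 'yaml'

       # to_hash flattens the Mash (and nested mashes) into plain Hashes,
       # so the dump contains only the keys under node['cfn']['environment'].
       env = node['cfn']['environment'].to_hash
       yml_string = YAML.dump(env)

       file "/etc/configuration/environment/platform.yml" do
         mode 0644
         action :create
         content yml_string
       end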

    Read the article

  • IIS needs to be restarted every morning

    - by Kevin
     On one of my application and DB servers, an SSIS package runs at night. Every morning I need to reset IIS to get the application running fast and smoothly again. One day I tried skipping the SSIS package, and the next day I hadn't done the IIS reset. What could the problem be? Is there any alternative to an IIS reset? How can I schedule it and make sure IIS is restarted through a batch file or similar? The application is developed in .NET, the DB is the latest version of SQL Server, and the application is hosted on a cloud server. Your prompt reply will be helpful for me.
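
     A minimal sketch of scheduling a nightly IIS restart from a batch file, assuming a Windows Server version where iisreset and schtasks are available (the time, path, and account below are placeholders):

       rem restart-iis.bat -- restart IIS and log the result
       iisreset /restart >> C:\scripts\iisreset.log 2>&1

       rem Register the batch file to run every day at 05:00 under SYSTEM
       schtasks /create /tn "Nightly IIS restart" /tr "C:\scripts\restart-iis.bat" /sc daily /st 05:00 /ru SYSTEM

     That said, if skipping the SSIS package also removes the need for the reset, the package itself (long-running connections or memory pressure while it runs) is probably the thing to investigate rather than IIS.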

    Read the article

  • Can a MySQL slave be a master at the same time?

    - by mmattax
     I am in the process of migrating 2 DB servers (Master & Slave) to two new DB servers (Master and Slave):

       DB1 - Master (production)
       DB2 - Slave (production)
       DB3 - New Master
       DB4 - New Slave

     Currently I have the replication set up as:

       DB1 -> DB2
       DB3 -> DB4

     To get the production data replicated to the new servers, I'd like to get it "daisy chained" so that it looks like this:

       DB1 -> DB2 -> DB3 -> DB4

     Is this possible? When I run show master status; on DB2 (the production slave) the binlog position never seems to change:

       +------------------+----------+--------------+------------------+
       | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
       +------------------+----------+--------------+------------------+
       | mysql-bin.000020 |       98 |              |                  |
       +------------------+----------+--------------+------------------+

     I'm a bit confused as to why the binlog position is not changing on DB2; ideally it will be the master to DB3.
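
     By default a slave does not write the statements it replicates into its own binary log, which is why DB2's position never moves. A sketch of the my.cnf settings DB2 would need before it can feed DB3 (the server-id value is a placeholder; a mysqld restart is required):

       # /etc/mysql/my.cnf on DB2 (the intermediate master)
       [mysqld]
       server-id         = 2          # must be unique across DB1-DB4
       log-bin           = mysql-bin  # enable the binary log
       log-slave-updates = 1          # also log changes that arrive via replication

     After the restart, DB2's binlog position should advance as events replicate from DB1, and DB3 can be pointed at DB2 with CHANGE MASTER TO.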

    Read the article

  • Why does Wireshark not recognize this HTTP response?

    - by Alois Mahdal
     I have a trivial CGI script that outputs simple text content. It's written in Perl using the CGI module and it specifies only the most basic headers:

       print $q->header(
           -type           => 'text/plain',
           -Content_length => $length,
       );
       print $stuff;

     There's no apparent issue with functionality, but I'm confused about the fact that Wireshark does not recognize the HTTP response as HTTP--it's marked as TCP. Here are the request and response:

       GET /cgi-bin/memfile/memfile.pl?mbytes=1 HTTP/1.1
       Host: 10.6.130.38
       User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:11.0) Gecko/20100101 Firefox/11.0
       Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
       Accept-Language: cs,en-us;q=0.7,en;q=0.3
       Accept-Encoding: gzip, deflate
       Connection: keep-alive

       HTTP/1.1 200 OK
       Date: Thu, 05 Apr 2012 18:52:23 GMT
       Server: Apache/2.2.15 (Win32) mod_ssl/2.2.15 OpenSSL/0.9.8m
       Content-length: 1048616
       Keep-Alive: timeout=5, max=100
       Connection: Keep-Alive
       Content-Type: text/plain; charset=ISO-8859-1

       XXXXXXXX...

     And here is the packet overview (the full packet is here on pastebin):

       No. Time     Source      srcp Destination dstp  Protocol Info                               tcp.stream abstime
       5   0.112749 10.6.130.38 80   10.6.130.53 48072 TCP      [TCP segment of a reassembled PDU] 0          20:52:23.228063

       Frame 5: 1514 bytes on wire (12112 bits), 1514 bytes captured (12112 bits)
       Ethernet II, Src: Dell_97:29:ac (00:1e:4f:97:29:ac), Dst: Dell_3b:fe:70 (00:24:e8:3b:fe:70)
       Internet Protocol Version 4, Src: 10.6.130.38 (10.6.130.38), Dst: 10.6.130.53 (10.6.130.53)
       Transmission Control Protocol, Src Port: http (80), Dst Port: 48072 (48072), Seq: 1, Ack: 330, Len: 1460

     Now when I see this in Wireshark: there's the usual TCP handshake, then the GET request shown as HTTP with preview, then the next packet contains the response, but it is not marked as an HTTP response--just a generic "[TCP segment of a reassembled PDU]", and it is not caught by the "http.response" filter. Can somebody explain why Wireshark does not recognize it? Is there something wrong with the response?
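
     With a Content-length of 1048616 the response body spans many TCP segments, and with TCP reassembly enabled Wireshark attaches the HTTP dissection to the frame that completes the reassembled body, not to frame 5. A quick way to check whether the response is simply tagged on the last segment rather than missing entirely (a sketch, assuming the capture is saved as capture.pcap and a reasonably recent tshark is installed):

       # With reassembly on, the HTTP response should show up on the frame that
       # completes the reassembled body.
       tshark -r capture.pcap -o tcp.desegment_tcp_streams:TRUE -Y "http.response" \
              -T fields -e frame.number -e http.response.code

       # With reassembly off, each segment is dissected on its own and the
       # status line should be recognized in the first response segment.
       tshark -r capture.pcap -o tcp.desegment_tcp_streams:FALSE -Y "http.response" \
              -T fields -e frame.number -e http.response.code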

    Read the article

  • Mongodump on GridFS is killing the host's IO

    - by Raphael
     I'm trying to make a mongodump from our production MongoDB while production is running. We have three production instances: one regular MongoDB, one with a few GB of data on GridFS, and one with a larger amount of data on GridFS. All MongoDB instances are running version 2.4.9 on an Ubuntu 10.04 virtual server. I use a mongodump command to export the databases to another server. Unfortunately our machines are virtually hosted in a "low performance" datacenter (VMware based), so when I try to export the large GridFS DB, the disk IO hits 100% (and 50% of the CPU starts waiting for IO too). This has a very negative impact on the production applications because DB access time increases excessively, making the applications unusable. I'm looking for a way to regulate mongodump so the export goes slower but is easier on the hardware resources, allowing better performance for the applications. Has anyone had a similar scenario?
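
     One way to throttle the dump at the OS level rather than inside mongodump itself is to lower its I/O and CPU priority, and rate-limit the copy to the other server. A sketch, assuming a Linux I/O scheduler where ionice has an effect and with the database name, paths and hosts as placeholders:

       # Run mongodump in the idle I/O class and at low CPU priority so
       # production reads win whenever they compete for the disk.
       ionice -c 3 nice -n 19 \
         mongodump --host localhost --port 27017 \
                   --db gridfs_db --out /backup/$(date +%F)

       # Optionally rate-limit the transfer to the other server as well
       # (--bwlimit is in KB/s).
       rsync --bwlimit=20000 -a /backup/ backupuser@otherserver:/backup/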

    Read the article

  • Secure data from a server to a workstation using jumper hosts

    - by apalsson
     Hello. I have a WWW server; my problem is that the content is sensitive and should not be accessible to people without proper credentials. How can I improve the ease of use but still maintain security in the following scenario? The server is accessed through a "jumper host", i.e. the client connects to the jumper using a VPN connection and uses Remote Desktop to access the jumper. From the jumper he uses Remote Desktop again to access the server. Finally, on the server the user can access content using a WWW browser. All the way from the VPN client to the WWW browser, authentication requires a SmartCard token. This seems quite secure to me. Content only gets mirrored on the Remote Desktop between server and jumper, so there are no cached files to worry about. The connection between jumper and client is protected using VPN (SSL), so no eavesdropping. But it is quite cumbersome for the clients, with many steps and connections to open. :( So, how can I improve the user experience accessing my server without compromising security? Thanks.

    Read the article

  • Scaling a video processing application on EC2?

    - by Stpn
     I am approaching the need to scale a video-processing application that runs on EC2. So far the setup is one machine:

       - Backbone.js frontend
       - Rails 3.2
       - PostgreSQL
       - Resque + S3 for storage

     The flow of the app is as follows:

       1) Request from frontend. Upload a video.
       2) Storing video
       3) Querying external APIs.
       4) Processing / encoding videos.
       5) Post to frontend.

     I can separate the backend and frontend without any problems, but when it comes to distributing the backend between several servers I am a bit puzzled. I can probably come up with a temporary solution (like just duplicating apps, making several instances), but since I don't really have expertise in backend system administration, there can be some fundamental mistakes. Also I would rather have something that is scalable. I wonder if anyone can give some feedback on the following plan:

       A) Frontend machine. Just frontend, talks to backend via a REST API of sorts.
       B) Backend server (BS), main database. Gets requests from 1), posts to 2), saves uploads to 3)
       C) S3 storage.
       D) Server for querying APIs. Basically just Resque workers that post info back to 2)
       E) Server for video encoding. Processes videos uploaded on 3) and uploads them back.

     So I will have:

       A)frontend
           \
            \
       B)MAIN_APP/DB ----- C)S3 Storage (Files)
            /  \                     /
           /    \                   /
       D)ExternalAPI_queries    E)Video_Processing
         (redundant DB)           (redundant DB)

     All this will supposedly talk to each other via HTTP requests. My reason for this is that the video processing part is really the most resource-intensive and I would just run a barebones application that accepts requests and starts processing them. Questions (see the sketch after this list for question 3):

       1) In this setup I will have the main database at B), and all other servers will communicate with it via HTTP requests (and store duplicates of the databases too, I guess, for safety reasons). Is this the right approach, or should I have one database that everyone connects to (how then?)
       2) Is it a good idea to separate API queries from video processing? Logically they are very close (processing is determined by the result of API queries), but resource-wise video processing is way more intensive.
       3) What should I use to distribute calls between backend apps based on load?
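
     For what it's worth, the usual Resque pattern makes question 3 mostly disappear: the app servers and the encoding boxes share one Redis, and load is distributed simply by how many workers each box runs. A minimal sketch (the queue name, job class, model and Redis host are assumptions, not part of the original setup):

       # config/initializers/resque.rb - every box points at the same Redis
       require 'resque'
       Resque.redis = 'redis-host.internal:6379'

       # app/jobs/encode_video_job.rb - enqueued by B), performed on E)
       class EncodeVideoJob
         @queue = :encoding

         def self.perform(video_id)
           video = Video.find(video_id)
           # download the source from S3, encode it, upload the result back to S3
         end
       end

       # on B):  Resque.enqueue(EncodeVideoJob, video.id)
       # on E):  QUEUE=encoding bundle exec rake resque:work   # run one per core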

    Read the article

  • Mysql Master-ColdMaster

    - by enedebe
     I explain my case: I'm at Amazon AWS and I want to be fault tolerant to an entire region failure. My basic problem is keeping the db in sync across 2 regions. My options:

       - Master-Master (high lag)
       - Hand-made sync every 5 minutes
       - Master-ColdMaster?! (copy on the fly, but the Master won't wait for the other region's commit)

     In my system we could afford losing a piece of data (we're not a bank), namely the last inserts in the db, but we could not afford more than 10 minutes of downtime. The database is small and the level of inserts is low, and I wouldn't want to affect normal usage by waiting for the other region's commit. Is the third solution possible? And most important, once the primary fails, how can we detect it and swap the roles master-coldmaster to coldmaster-master? Is there any clean way to restore after the failure? Thanks!
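
     What "Master-ColdMaster" describes is essentially plain asynchronous replication with a manual promotion step. A sketch of the promotion on the cold side, assuming standard MySQL replication between the two regions (the detection itself, e.g. a health check or monitoring alert, is outside this snippet):

       -- On the cold master, once the primary region is declared dead:

       STOP SLAVE;                  -- stop applying events from the dead master
       SHOW SLAVE STATUS\G          -- note how far replication got; anything after this is lost
       RESET SLAVE ALL;             -- forget the old master entirely (MySQL 5.5+; RESET SLAVE on older versions)
       SET GLOBAL read_only = 0;    -- start accepting writes

       -- Then repoint the application (DNS / config) at this server. When the old
       -- region comes back, rebuild it as a slave of the new master with
       -- CHANGE MASTER TO ... and a fresh copy of the data.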

    Read the article

  • SCO Unix - finding and pulling data

    - by lxlxlxl
     We recently acquired a company that was running a legacy application on a SCO Unix box. I can tell that the app is served over HTTP, but its reporting and data export functions are limited and I'd like to find a master DB or data source on the box that maybe I can migrate into something more useful. Any ideas how I could find that? Would there be something in Apache or the startup scripts that might point to a master DB? Do Unix applications have standard data source locations? I'm not sure if I'm wording this right, so apologies in advance.
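
     A few generic places to look, sketched as shell commands; the paths and grep patterns are just typical guesses, and SCO's older userland may lack some GNU flags (e.g. grep -r), so treat these as starting points rather than exact recipes:

       # Where is the web content actually served from?
       grep -i 'DocumentRoot\|ScriptAlias' /usr/local/apache/conf/httpd.conf /etc/httpd.conf 2>/dev/null

       # Look for database connection strings near the web root and app directories
       grep -il 'dbname\|informix\|sybase\|mysql\|odbc' /usr/local/apache/cgi-bin/* /opt/*/etc/* 2>/dev/null

       # Services started at boot often reveal the database engine
       ls /etc/rc2.d/ ; ps -ef | grep -i 'sql\|informix\|sybase\|progress'

       # Large, recently modified files are good candidates for the data store
       find / -type f -size +50000 -mtime -7 2>/dev/null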

    Read the article

  • How to connect to a MySQL database on a local server in NetBeans 7.0.1 (Windows)?

    - by diEcho
     Hello All, I am using NetBeans IDE 7.0.1 on Windows 7 for the very first time for my PHP work. In my company there is a local server (192.168.1.99) where all projects reside, and we access phpMyAdmin on that local server. I have added my project folders to NetBeans (this was also very hectic), but now I am having a problem connecting to the database on the local server, which I can reach at 192.168.1.99/phpmyadmin through my browser. I have set the values below:

       Server Host:             localhost
       Server Port Number:      3306
       Administrator Username:  keshav
       Administrator Password:  ******

     When I click on connect, a popup error window appears with the text below:

       Unable to connect to the MySQL server:
       org.netbeans.api.db.explorer.DatabaseException: org.netbeans.api.db.explorer.DatabaseException:
       java.sql.SQLException: Access denied for user 'keshav'@'localhost' (using password: YES).
       The server may not be running or your MySQL connection properties may not be set correctly.
       Do you want to edit your MySQL connection properties?

     Please help me out. Thanks
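
     Two things usually have to change here: the connection must point at the server's address rather than localhost, and the MySQL account must be allowed to connect from your workstation. A sketch (database name, subnet and password are placeholders; run the GRANT on the 192.168.1.99 server, e.g. via phpMyAdmin, using the older pre-8.0 syntax the server likely supports):

       -- Allow 'keshav' to connect from machines on the office subnet.
       GRANT ALL PRIVILEGES ON myproject.* TO 'keshav'@'192.168.1.%' IDENTIFIED BY 'secret';
       FLUSH PRIVILEGES;

     Then in NetBeans set Server Host to 192.168.1.99 (keep port 3306), and check that MySQL on the server is not bound only to 127.0.0.1 (bind-address in my.cnf) and that port 3306 is open in its firewall.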

    Read the article

  • Graphite not running

    - by River
     I'm currently trying to install graphite 0.9.9 on a gentoo box using these instructions from the graphite wiki. Essentially, it fronts graphite using apache and mod_wsgi. Everything seems to have gone well, except that apache / the graphite webapp never seem to return a response to the web browser (the browser continuously waits to load the page). I've turned on the graphite debug info, but the only message in the log files is this, repeated over and over again in info.log (with the pid always changing):

       Thu Feb 23 01:59:38 2012 :: graphite.wsgi - pid 4810 - reloading search index

     These instructions have worked for me before to set up graphite on an Ubuntu machine. I suspect that mod_wsgi is dying, but I have confirmed that mod_wsgi works fine when not serving the graphite webapp. This is what my graphite.conf vhost file looks like:

       WSGISocketPrefix /etc/httpd/wsgi/

       <VirtualHost *:80>
         ServerName # Server name
         DocumentRoot "/opt/graphite/webapp"
         ErrorLog /opt/graphite/storage/log/webapp/error.log
         CustomLog /opt/graphite/storage/log/webapp/access.log common

         # I've found that an equal number of processes & threads tends
         # to show the best performance for Graphite (ymmv).
         WSGIDaemonProcess graphite processes=5 threads=5 display-name='%{GROUP}' inactivity-timeout=120
         WSGIProcessGroup graphite
         WSGIApplicationGroup %{GLOBAL}
         WSGIImportScript /opt/graphite/conf/graphite.wsgi process-group=graphite application-group=%{GLOBAL}

         WSGIScriptAlias / /opt/graphite/conf/graphite.wsgi

         Alias /content/ /opt/graphite/webapp/content/
         <Location "/content/">
           SetHandler None
         </Location>

         # XXX In order for the django admin site media to work you
         # must change @DJANGO_ROOT@ to be the path to your django
         # installation, which is probably something like:
         # /usr/lib/python2.6/site-packages/django
         Alias /media/ "/usr/lib64/python2.6/site-packages/django/contrib/admin/media/"
         <Location "/media/">
           SetHandler None
         </Location>

         # The graphite.wsgi file has to be accessible by apache. It won't
         # be visible to clients because of the DocumentRoot though.
         <Directory /opt/graphite/conf/>
           Order deny,allow
           Allow from all
         </Directory>
       </VirtualHost>
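
     The constantly changing pid with only that one log line suggests the daemon processes keep getting started (or recycled) without ever answering a request. Two things worth checking, sketched below; the "apache" user name and paths are assumptions based on a typical Gentoo layout, not a confirmed diagnosis:

       # The WSGISocketPrefix directory must exist and be writable by the Apache
       # user, otherwise Apache cannot reach its mod_wsgi daemons and requests hang.
       ls -ld /etc/httpd/wsgi/
       chown apache:apache /etc/httpd/wsgi/

       # The webapp writes its logs and search index under storage/; if that tree
       # is not writable by the daemon user, the app can die on startup repeatedly.
       chown -R apache:apache /opt/graphite/storage/
       tail -f /opt/graphite/storage/log/webapp/error.log /var/log/apache2/error_log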

    Read the article

  • Configure IIS to pass-through CGI output without any conditioning

    - by Daniel Watrous
     I'm building a web service on Windows 2008 R2 with IIS 7.5 and Python 2.5. Right now I have the Handler Mappings and everything else set up just fine, except that IIS is modifying what it gets back from the CGI script before sending it along to the client. Here's an example: I wrote the following CGI script:

       # hello.py
       print "Status: 400 Bad Request"
       print "Content-Type: text/html"
       print
       print "Error Message"

     According to the HTTP spec this should be fine, and a status of 400 should allow for a description of the error message in the body of the response. When the server response actually comes back to me I get the following:

       Status: 400 Bad Request
       Date: Fri, 11 Feb 2011 17:58:30 GMT
       X-Powered-By: ASP.NET
       Connection: close
       Content-Length: 11
       Server: Microsoft-IIS/7.5
       Content-Type: text/html

       Bad Request

     I've seen on this forum and others how I can change or eliminate the X-Powered-By header element, but I would like IIS to leave the response alone altogether. I'm not sure why it takes my response, deletes "Error Message" from the body, replaces it with "Bad Request" and then adds all that other junk in. Is there some way to tell IIS to just send the response along without making any changes at all?
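
     The body substitution on error statuses is typically IIS's custom error handling kicking in, and it can be told to pass non-success bodies through untouched. A sketch of the relevant web.config section (this addresses the replaced body; extra headers such as X-Powered-By and Server are added separately and need their own removal steps):

       <?xml version="1.0" encoding="UTF-8"?>
       <configuration>
         <system.webServer>
           <!-- Pass the CGI script's own error body through instead of
                substituting the IIS detailed/custom error page. -->
           <httpErrors existingResponse="PassThrough" />
         </system.webServer>
       </configuration>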

    Read the article

  • Do I need to disable access to a publisher database when setting up SQL Server 2000 Transactional Replication?

    - by Kev
     I have a production database, i.e. one with constant updates, and I've configured it to be published to another server using transactional replication. When I configure transactional replication I've been doing the following:

       1. disable access to the source database
       2. back up the source DB, then restore it to the subscription server
       3. configure replication
       4. re-enable DB access for our apps

     The problem with this approach is scheduling in downtime, having to suspend all the various timed scheduled tasks we run, and shutting down access to our various applications that are dependent on this database. Can I just configure transactional replication without disabling access to the publishing database, and the subscriber database will correctly catch up? i.e. are all the DML statements queued on the publisher and, as soon as the subscriber is ready, picked off and executed?

    Read the article

  • Working with a copy of my Virtual Machine

    - by Gaby Reyna
     Hi there. I'm trying to make a backup/copy of my virtual machine; it's installed on a Windows Server 2000 machine and I want to make some modifications/tests without changing the original one. The copy is to be used on Windows 7. What I'm trying to do is work on and modify an application that communicates with a DB; the application is hosted on the VM, and so is the DB. Since I don't want to screw up the stable version, I want to know how to copy the VM to my desktop PC so I can experiment without worries. Now, someone told me I might have problems with the IP, because the original will have the same IP, and if I change it, it won't work properly. Is this true? If it is indeed true, any suggestions?

    Read the article

  • Using an AWS EC2 server to host a busy website and needing to set up load balancing

    - by Philip Isaacs
     My company has one EC2 server running on AWS with a MySQL DB and Apache on the same instance. This one instance hosts a website built on the PHP Zend Framework. The site runs like crap when it starts to get busy with a lot of traffic, so I'm looking for some advice on how to set up something that can handle the load better. My first question is: should I move the MySQL DB onto a separate EC2 instance, or perhaps use AWS's RDS service, which looks like a nice option? I'm sort of new to some of this, but I'm guessing I'll need at least two EC2 instances to serve the website from and some sort of load balancing mechanism to distribute traffic. But maybe not, I'm not sure. Also, what are some best practices for replicating the data so that it stays in sync on both instances? Okay, I know that's a lot of questions, but I don't know where to start, so any advice will help.

    Read the article

  • Backup Script - Could Not Open Input File

    - by Iestyn
     This is the backup script that I've got going: http://pastebin.com/4g4E6wUz

     This is the cron entry:

       /usr/local/bin/php /home/backups/backup-db.php --filename-dated ALL

     No matter what I do, I keep on getting this error:

       Could not open input file: /home/backups/backup-db.php

     That's the correct location of the file. I just don't know what else to try. I feel I've been working on this for so long now that I've explored every avenue; on the other hand, sometimes I think the time I've spent on it is clouding my thoughts and I'm missing something stupidly obvious. I'm just wondering if someone can give me a few pointers? Also, on a last note, does anyone know of a way (or an article) to automatically generate a full backup of cPanel every * amount of days and store it in a location that I want? Kind regards.
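
     A couple of quick checks that usually narrow this down (a sketch; which crontab the job lives in and which user it runs as are assumptions worth verifying first):

       # Can the cron user actually read the script and traverse /home/backups?
       ls -l /home/backups/backup-db.php
       ls -ld /home /home/backups
       sudo -u backupuser /usr/local/bin/php /home/backups/backup-db.php --filename-dated ALL

       # Watch out for invisible characters if the cron line was pasted in:
       # a stray \r or non-breaking space makes the path look right but not be.
       crontab -l | cat -A | grep backup-db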

    Read the article

  • How to efficiently dump a huge MySQL innodb database?

    - by Jagbir
     I got an Ubuntu 10.04 production MySQL database server where the total size of the databases is 260 GB, while the size of the root partition, where the DB is stored, is itself 300 GB. That essentially means around 96% of / is full and there's no space left for storing a dump/backup etc. No other disk is attached to the server as of now. My task is to migrate this database to another server sitting in a different datacenter. The question is how to do that efficiently with minimum downtime. I'm thinking along these lines:

       1. Request that an extra drive be attached to the server and take a dump on that drive.
       2. Transfer the dump to the new server, restore it, and make the new server a slave of the existing one to keep the data in sync.
       3. When the migration is needed, break replication, update the slave config to accept read/write requests, make the old server read-only so it won't entertain any write requests, and tell the app developers to update their config with the new IP address for the db.

     What are your suggestions to improve this, or any alternate, better approach for this task?
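
     One common variation on step 1 that avoids needing an extra drive at all is to stream the dump straight to the new server over SSH. A sketch, assuming InnoDB tables (so --single-transaction gives a consistent snapshot without long locks), that the binary log is enabled on the old server, and that hosts and credentials are filled in:

       # Run on the old server: dump, compress, and load on the new server in one pipe.
       # --master-data=2 records the binlog position as a comment, which is exactly
       # what the new server needs later for CHANGE MASTER TO when it becomes a slave.
       mysqldump --single-transaction --master-data=2 --all-databases \
         | gzip -c \
         | ssh user@newserver "gunzip -c | mysql -u root -p'secret'"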

    Read the article

  • Can't find windows 2000 domain after PDC Change

    - by Mark A Kruger
     This is a Windows 2000 domain issue. I had an old Win2000 PDC that was beginning to fail. So, trying to be pre-emptive, I installed a new BDC, then "demoted" the old PDC and took it off the network. Now it appears that no member server can "find" the domain anymore. No logins work (for services, or RDP, or anything). What I've tried (based on googling):

       - Verified sysvol is shared on all servers.
       - Used nslookup to verify that DCs are being found.
       - netdiag /fix
       - metadata cleanup routines.
       - Verified no firewall issues (port 389 etc.)
       - Seized all roles to the new PDC (I did that as part of the original promotion).
       - LMHOSTS file and NetBIOS settings.

     At the moment it seems like I can get the DCs returned but cannot contact them. I'm at a loss. My latest attempt was to remove a member server from the domain and try to re-add it. When I do that I get this message:

       The query was for the SRV record for _ldap._tcp.dc._msdcs.cfwebtools.com
       The following domain controllers were identified by the query:
       db-dev1.cfwebtools.com
       file-prod1.cfwebtools.com
       cfwt-pdc2.cfwebtools.com
       However no domain controllers could be contacted.

     It then goes on to ask if I've checked my A record and made sure they are running. Is there a way to force this domain to be seen? I also shared sysvol (or double-checked it) and restarted the dfsr service.

     More information: I got looking at sysvol and found it was not shared on 2 of these servers. Only one of them (db-dev1) has a "good", or at least populated, sysvol store. So I tried doing a "D2" recovery of my PDC against that good sysvol, but it never syncs, or at least it does not seem to sync. I'm guessing that if I could get sysvol and netlogon to kick in and replicate, that would fix my issue. I think these DCs aren't responding because they are waiting for replication, which is broken somehow. Would taking down all the DCs except for db-dev1 fix the issue, at least temporarily? I know I can't just copy the sysvol stuff over to the other 2, can I?

    Read the article

  • Some incoming emails in Outlook 2007 are blank; the same emails work fine on webmail, iPhone, etc.

    - by Funran
     This is a pretty easy problem to describe. Basically, users who have just been upgraded to Outlook 2007 (yeah, I know 2010 is out) are not receiving SOME emails (from outside our domain, i.e. Hotmail, Yahoo). Receiving is not the correct word: these emails come in, along with their attachments, subject, to/from line, etc., but the body is blank. If the same user goes into their webmail, iPhone or BlackBerry instead, they can read the message fine. It's clear to me that something in Outlook 2007 is not generating the body correctly, so it just strips it. I just don't know WHY. Our mail server was recently upgraded to Exchange 2010; users on 2010 running Outlook 2003 are working fine, it's just the random emails for users using 2007. I hope I made that clear enough, thank you for any future help guys.

     EDIT: I don't see rtf, but I swear I've seen it before. Here is the "view source" of a recent email:

       <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"><html><head>
       <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
       <meta name="GENERATOR" content="MSHTML 8.00.6001.19120">
       <DEFANGED_style_0 <="" style=""> </head>
       <body bgcolor="#ffffff">
       <p><DEFANGED_DIV><font color="#0000ff" size="2" face="Calibri">MS,</font></p><DEFANGED_DIV>
       <p><DEFANGED_DIV><font color="#0000ff" size="2" face="Calibri">Could you tell me please what the legal descrip &amp; Topo Quad name is for this Monroe P.ID Site?</font></p><DEFANGED_DIV>
       <p><DEFANGED_DIV><em><font color="#0000ff" size="2" face="Calibri">Thanks, Henry Roye</font></em></p><DEFANGED_DIV></body></html>

    Read the article

  • Is it bad to have a very full hard drive on a high traffic database server?

    - by MikeN
     I'm running an Ubuntu server with MySQL as a high-traffic production database server. Nothing else is running on the machine except the MySQL instance. We store daily database backups on the DB server; is there any performance hit, or other reason, why we should keep the hard disk relatively empty? If the disk is filled up to 86%+ with the database and all of the backups, does it hurt performance at all? In other words, would the DB server running at 86-90%+ full capacity perform less well in any way than the same server running with only a 10% full disk? The total disk size on the server is over 1 TB, so even 10% of the disk should be enough for basic OS swapping and such.

    Read the article

  • Best approach for a data server serving small pictures (~40 KB)

    - by Nicolas Manzini
     I'm designing the server structure for my application in case things go well. I have one DB server connected to multiple servers that process connections, all of them with lots of RAM and fast processors (I'm still looking for a way to use multithreading, because right now it's plain Apache + PHP, so lots of RAM is needed). Upon an answer from those servers, the client can then connect to another server to retrieve pictures, using the address he previously got from the DB. Is it a good idea to have one database server, with let's say nginx and an SSD disk, having to send all pictures to everybody? Or should I have multiple servers accessing a shared SSD drive, or multiple disks updating each other? Also, should I put a lot of RAM on the database server? Because probably there won't be one picture much more popular than another.

    Read the article

  • Force encoding with IIS 7

    - by Cédric Boivin
     I am trying to force the encoding with IIS 7. When I add, in the HTTP response headers, the key Content-Type with the value charset=utf-8, I get this header:

       content-type: text/html,content-type=utf-8

     Is there a way to remove the comma?

     Thanks Justin for your answer, but it seems it doesn't work. Here is my config; I need to do this for classic ASP:

       <?xml version="1.0" encoding="UTF-8"?>
       <configuration>
         <system.webServer>
           <staticContent>
             <remove fileExtension=".html" />
             <remove fileExtension=".hxt" />
             <remove fileExtension=".htm" />
             <remove fileExtension=".asp" />
             <mimeMap fileExtension=".htm" mimeType="text/html" />
             <mimeMap fileExtension=".hxt" mimeType="text/html" />
             <mimeMap fileExtension=".html" mimeType="text/html" />
             <mimeMap fileExtension=".asp" mimeType="text/html; charset=UTF-8" />
           </staticContent>
         </system.webServer>
       </configuration>
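
     Since .asp responses come from the script engine rather than the static-file handler, the mimeMap for .asp generally won't affect them; for classic ASP the charset is normally set from the page itself. A sketch (codepage 65001 is UTF-8; adjust if the source files are saved in a different encoding):

       <%@ CodePage=65001 %>
       <%
         ' Tell ASP to emit the response as UTF-8 and declare it in the
         ' Content-Type header, instead of adding a second Content-Type
         ' via the custom response headers.
         Response.CodePage = 65001
         Response.Charset  = "UTF-8"
       %>

     The comma-joined value itself comes from adding Content-Type as an extra custom response header on top of the one already being sent; removing that custom header entry avoids the duplication.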

    Read the article
