Search Results

Search found 22062 results on 883 pages for 'save location'.

Page 141/883

  • tar fails to open .tar file on OS X

    - by denonth
    I need to unarchive a file to the /Developer folder. My file is in /Users/User/Desktop/VMware/Downloads and I am running:

        tar -xf qt-everywhere-ios-4.8.0-arm7-nossl.tar.gz -C /Developer

    but I keep getting:

        Lions-Mac:Downloads User$ tar -xf qt-everywhere-ios-4.8.0-arm7-nossl.tar -C /Developer
        tar: Error opening archive: Failed to open 'qt-everywhere-ios-4.8.0-arm7-nossl.tar'

    How can I achieve this? The SDK's install instructions say:

    "Install Qt for iOS SDK. The Qt for iOS SDK has been configured to be installed in the default Xcode installation location /Developer. It is not possible to install the SDK into another location without first rebuilding it, as the install location is contained within the qmake executable, and that is built as part of Qt. To install the Qt for iOS SDK, open 'Terminal' and type the following from the command line:

        tar -xf qt-everywhere-ios-4.8.0-xxx.tar.gz -C /Developer

    (where xxx is an identifier which can be used to determine the build of the iOS SDK, e.g. arm7-nossl). This will install the Qt for iOS SDK into the following path: /Developer/Platforms/iPhoneOS.platform/Developer/usr/share/qt-everywhere-ios-4.8.0"

    Read the article

  • nginx static files caching doesn't work

    - by user74344
    here is my conf file, usr/local/nginx/sites-available/default:

        server {
            listen 80;
            server_name localhost;

            location / {
                root  html;
                index index.php index.html index.htm;
            }

            # redirect server error pages to the static page /50x.html
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }

            # serve static files directly
            location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico|swf)$ {
                expires 30d;
            }

    but it doesn't cache static files. How should I fix it? Thanks a lot
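
    A minimal sketch of the usual fix, not taken from the question: the regex location above has no root of its own and does not inherit the one declared inside location /, so nginx looks for the static files under its compiled-in default root and the expires header never reaches the real files. Declaring the root once at server level (path assumed to match the poster's layout) lets every location inherit it:

        server {
            listen      80;
            server_name localhost;
            root        html;                # declared once, inherited by every location below

            location / {
                index index.php index.html index.htm;
            }

            error_page 500 502 503 504 /50x.html;
            location = /50x.html { }

            # static assets now resolve under the same root and get a 30-day client cache
            location ~* \.(jpg|jpeg|gif|css|png|js|ico|swf)$ {
                expires 30d;
                add_header Cache-Control public;
            }
        }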

    Read the article

  • nginx 500 error instead of 404

    - by arby
    I have the following nginx configuration (at /etc/nginx/sites-available/default):

        server {
            listen 80; ## listen for ipv4; this line is default and implied
            listen [::]:80 default ipv6only=on; ## listen for ipv6

            root /usr/share/nginx/www;
            index index.php index.html index.htm;
            server_name _;

            location / {
                try_files $uri $uri/ /index.html;
            }

            error_page 404 /404.html;

            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass unix:/tmp/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
            }

            location ~ /\.ht {
                deny all;
            }
        }

    Instead of a 404 error, I'm getting 500 server errors on broken URLs. How can I correct this?
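
    One hedged guess at the usual culprit, not from the question itself: if the final fallback in try_files (/index.html here) does not actually exist on disk, nginx ends up in an internal redirection cycle and answers 500 instead of 404. A minimal sketch, assuming plain 404s are wanted for unknown URLs:

        location / {
            # the last parameter can be a status code instead of a file,
            # so a miss becomes a clean 404 handled by error_page below
            try_files $uri $uri/ =404;
        }

        error_page 404 /404.html;
        location = /404.html {
            internal;    # only reachable via the internal error_page redirect
        }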

    Read the article

  • nginx points the sub-directory of an alias folder to the base directory

    - by Starry
    I am new to Nginx and have a confusion about its configuration. My web site serves folders from different locations:

        location / {
            root /Path1;
        }
        location ^~ /personal {
            alias /Path2;
        }

    When I query http://mysite/personal, I get the content of /Path2 instead of /Path1, as intended. Now I want to add a sub-directory of /personal with specific configuration, so I add:

        location /personal/download {
            autoindex on;
        }

    But I get a 404 error when querying http://mysite/personal/download. According to the error log, the request is mapped to /Path1/personal/download, which is not correct. How can I configure nginx so that all access to http://mysite/personal/* is directed to the corresponding directory under /Path2?
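
    A minimal sketch of one way out, not part of the question and assuming the files really live under /Path2/download: the new location /personal/download is matched on its own and knows nothing about the alias in ^~ /personal, so it resolves against a root (the log shows /Path1/...). Giving it its own alias keeps everything under /Path2, and the longest matching prefix always wins:

        location ^~ /personal/download {
            alias /Path2/download;    # assumed on-disk path
            autoindex on;
        }

        location ^~ /personal {
            alias /Path2;
        }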

    Read the article

  • Windows 7 (32bit) not adding favorites to Windows Explorer

    - by bsigrist
    I am attempting to add several locations on my disk to my "Favorites" in Windows Explorer. I have used this feature in the 64-bit version of Windows 7 without a problem, but it does not seem to work in this install. Here is my methodology so far:

    1. Go to a location in Windows Explorer: "C:\users\Benjamin"
    2. Right-click "Favorites" in the left-hand folder navigation pane and select "Add current location to Favorites"

    It does not fire an error, but the location does not appear under Favorites. What might be happening here to prevent "Favorites" from populating?

    Read the article

  • nginx regex characters that require quoting?

    - by Michael Louis Thaler
    So I was configuring nginx today and I hit a weird problem. I was trying to match a location like this:

        location ~ ^/([0-9]+)/(.*) {
            # do proxy redirects
        }

    ...for URLs like "http://my.domain.com/0001/index.html". This rule never matched, despite the fact that it by all rights should. It took me a while to figure out, based on this documentation, that some characters in regexes need to be quoted. The problem is, the documentation is for rewrites, and it specifically calls out curly braces, not square brackets. After a fair bit of experimentation that involved a lot of swearing, I discovered that I could fix the problem by quoting the regex like so:

        location ~ "^/([0-9]+)/(.*)" {
            # do proxy redirects
        }

    Is there a list somewhere of the characters for which nginx requires the regex to be quoted? Or could there be something else going on here that I'm totally missing? This is my first nginx configuration job, so it's very possible I've misunderstood something...
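
    Not an authoritative list, just an illustration of the rule of thumb: the nginx config parser only forces quoting when the regex contains characters it would otherwise read as configuration syntax, most visibly { and } (which look like block delimiters) plus unescaped spaces or semicolons; quoting every regex is harmless, so it is a reasonable habit. A small sketch:

        # needs quotes: the {1,3} quantifier would otherwise be parsed as a config block
        location ~ "^/(\d{1,3})/(.*)" {
            return 200 "matched\n";
        }

        # quotes are optional here, but do no harm
        location ~ "^/([0-9]+)/(.*)" {
            return 200 "also matched\n";
        }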

    Read the article

  • Recording Skype Video

    - by Leah Peterson
    I have two problems:

    1. When doing a Skype video call, I can't see the other person, even though Skype says both my input and output are fast/good.
    2. I need a good recording program that will save both sides at once for a podcast. I've been playing around with Vodburner, but it seems to be very temperamental. I can't get anything longer than 30 seconds to save, and even when it does save, the file isn't compatible with Windows Live Movie Maker, even when I change the extension to .MP4 or .WMV. I tried to download the format converter that Vodburner suggests (http://www.mediacoderhq.com/), but only got errors.

    I'm using Windows 7 with 62 GB free.

    Read the article

  • restricting access only through domains on nginx on virtual hosts

    - by Mo J. Mughrabi
    I have finished setting up nginx for virtual hosting; this is what my config file looks like:

        server {
            listen 80;
            server_name domain.com;

            access_log /home/domain.com/prod_webapp/logs/access.domain.com.log;
            error_log /home/domain.com/prod_webapp/logs/error.domain.com.log;

            location /static {
                root /home/domain.com/prod_webapp/mocorner/ph/;
            }

            location / {
                try_files $uri @uwsgi;
            }

            location @uwsgi {
                include uwsgi_params;
                uwsgi_pass unix:/tmp/domain_uwsgi.sock;
            }
        }

    On the same machine I have domain1.com and domain2.com, and each serves its own content when accessed by name, which is great. My problem is that when I access the server by its IP address I also get one of the sites from the virtual hosts. I disabled the default site (removed the symbolic link from the sites-enabled folder), but that still hasn't solved it for me. Any suggestions?
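
    A minimal sketch of the usual approach, not from the question: declare an explicit catch-all default server so requests that arrive by bare IP (or with an unknown Host header) never fall through to one of the named vhosts. 444 is nginx's "close the connection without responding" code; a redirect or a static page would work just as well:

        server {
            listen 80 default_server;
            server_name _;        # never matches a real name; used only as the fallback
            return 444;
        }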

    Read the article

  • google.com different IP in different countries. How?

    - by HeavyWave
    If you ping google.com from different countries you will get replies from local google servers. How does that work? Can a DNS record have multiple A addresses? Could someone point me to the technology they use to do that? Update. OK, so Google's DNS server gives out a different IP based on the location. But, as Alexandre Jasmin pointed out, how do they track the location? Surely their DNS won't ever see your IP address. Is the server querying Google's DNS guaranteed to be from the location it represents?

    Read the article

  • Nginx + Haproxy + Thin + Rails - 503 Service Unavailable -

    - by Luca G. Soave
    I don't know how to troubleshoot this. I get a "503 Service Unavailable" HTTP error for all nginx upstream proxy-pass calls to the haproxy fast_thin and slow_thin listeners (server 127.0.0.1:3100 and server 127.0.0.1:3200), which load-balance onto 6 Thin servers (127.0.0.1:3000 .. 3005). Static files like /blog are currently fine. The request chain is: nginx on port 80 -> haproxy on 3100 and 3200 -> Thin on 3000 .. 3005 -> Rails. Here is /etc/nginx/nginx.conf:

        user nginx;
        worker_processes 2;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            sendfile on;
            tcp_nopush on;
            keepalive_timeout 65;
            tcp_nodelay on;
            include /etc/nginx/conf.d/*.conf;
        }

    then /etc/nginx/conf.d/default.conf:

        upstream fast_thin { server 127.0.0.1:3100; }
        upstream slow_thin { server 127.0.0.1:3200; }

        server {
            listen 80;
            server_name www.gitwatcher.com;
            rewrite ^/(.*) http://gitwatcher.com/$1 permanent;
        }

        server {
            listen 80;
            server_name gitwatcher.com;
            access_log /var/www/gitwatcher/log/access.log;
            error_log /var/www/gitwatcher/log/error.log;
            root /var/www/gitwatcher/public;
            # index index.html;

            location /about { proxy_pass http://fast_thin; break; }
            location /trends { proxy_pass http://slow_thin; break; }
            location /categories { proxy_pass http://slow_thin; break; }
            location /signout { proxy_pass http://slow_thin; break; }
            location /auth/github { proxy_pass http://slow_thin; break; }

            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                if (-f $request_filename/index.html) {
                    rewrite (.*) $1/index.html break;
                }
                if (-f $request_filename.html) {
                    rewrite (.*) $1.html break;
                }
                if (!-f $request_filename) {
                    proxy_pass http://slow_thin;
                    break;
                }
            }
        }

    then the haproxy config file /etc/haproxy/haproxy.cfg:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            #log loghost local0 info
            maxconn 4096
            #chroot /usr/share/haproxy
            user haproxy
            group haproxy
            daemon
            #debug
            #quiet
            nbproc 1                # number of processing cores

        defaults
            log global
            retries 3
            maxconn 2000
            contimeout 5000
            mode http
            clitimeout 60000        # maximum inactivity time on the client side
            srvtimeout 30000        # maximum inactivity time on the server side
            timeout connect 4000    # maximum time to wait for a connection attempt to a server to succeed
            option httplog
            option dontlognull
            option redispatch
            option httpclose        # disable keepalive (HAProxy does not yet support the HTTP keep-alive mode)
            option abortonclose     # enable early dropping of aborted requests from pending queue
            option httpchk          # enable HTTP protocol to check on servers health
            option forwardfor       # enable insert of X-Forwarded-For headers
            balance roundrobin      # each server is used in turns, according to assigned weight
            stats enable            # enable web-stats at /haproxy?stats
            stats auth haproxy:pr0xystats   # force HTTP Auth to view stats
            stats refresh 5s        # refresh rate of stats page

        listen rails_proxy 127.0.0.1:3100
            # - equal weights on all servers
            # - maxconn will queue requests at HAProxy if limit is reached
            # - minconn dynamically scales the connection concurrency (bound my maxconn) depending on size of HAProxy queue
            # - check health every 20000 microseconds
            server web1 127.0.0.1:3000 weight 1 minconn 3 maxconn 6 check inter 20000
            server web1 127.0.0.1:3001 weight 1 minconn 3 maxconn 6 check inter 20000
            server web1 127.0.0.1:3002 weight 1 minconn 3 maxconn 6 check inter 20000

        listen slow_proxy 127.0.0.1:3200
            # cluster for slow requests, lower the queues, check less frequently
            server slow1 127.0.0.1:3003 weight 1 minconn 1 maxconn 3 check inter 40000
            server slow2 127.0.0.1:3004 weight 1 minconn 1 maxconn 3 check inter 40000
            server slow3 127.0.0.1:3005 weight 1 minconn 1 maxconn 3 check inter 40000

    and the Thin config file /etc/thin/gitwatcher.yml:

        ---
        chdir: /var/www/gitwatcher
        environment: production
        address: 0.0.0.0
        port: 3000
        timeout: 30
        log: log/thin.log
        pid: tmp/pids/thin.pid
        max_conns: 1024
        max_persistent_conns: 100
        require: []
        wait: 30
        servers: 6
        daemonize: true

    If I look at the open listening ports, I get the following:

        root@fullness:/var/www/gitwatcher# lsof | grep TCP | egrep "nginx|haproxy|thin"
        nginx      834    root   8u IPv4   921 0t0 TCP *:http (LISTEN)
        nginx      835   nginx   8u IPv4   921 0t0 TCP *:http (LISTEN)
        nginx      837   nginx   8u IPv4   921 0t0 TCP *:http (LISTEN)
        haproxy   1908 haproxy   4u IPv4 11699 0t0 TCP localhost:3100 (LISTEN)
        haproxy   1908 haproxy   6u IPv4 11701 0t0 TCP localhost:3200 (LISTEN)

    and root@fullness:/var/www/gitwatcher# iptables -L gives me the following:

        Chain INPUT (policy DROP)
        target  prot opt source    destination
        ACCEPT  all  --  anywhere  anywhere  state RELATED,ESTABLISHED
        ACCEPT  all  --  anywhere  anywhere  state RELATED,ESTABLISHED
        ACCEPT  tcp  --  anywhere  anywhere  tcp dpt:22222
        ACCEPT  tcp  --  anywhere  anywhere  tcp dpt:http
        ACCEPT  tcp  --  anywhere  anywhere  tcp dpt:https
        ACCEPT  all  --  anywhere  anywhere
        DROP    all  --  anywhere  anywhere

        Chain FORWARD (policy ACCEPT)
        target  prot opt source    destination

        Chain OUTPUT (policy ACCEPT)
        target  prot opt source    destination
        ACCEPT  all  --  anywhere  anywhere

    Any help?

    Read the article

  • File Saving Sometimes Fails

    - by YellPika
    When I attempt to save files, it sometimes (randomly) fails. In Blender, I sometimes get "Version Backup Failed: File Saved With @". In Visual Studio, building sometimes fails with an error message indicating that the target file/exe cannot be overwritten. If I wait a bit, I can save fine. It's almost as if programs are taking an abnormal amount of time to 'let go' of the files. What could be causing this behaviour? This seems to be caused by Windows Live Mesh monitoring my files, and locking them whenever it uploads the new versions (BAD considering the amount of times I save my files, even redundantly). Any suggestions to work around this behaviour? Should I switch to a better service to sync my files?

    Read the article

  • nginx regex locations w/ different roots not working as expected

    - by Wells Oliver
    I have the following two rules:

        location / {
            root /var/www/default;
        }
        location ~* /myapp(.*)$ {
            root /home/me/myapp/www;
            try_files $uri $uri/ /handle.php?url=$uri&$args;
        }

    When I browse to /myapp/foo it works, kind of; the request is logged as a 404:

        *3 open() "/var/www/default/handle.php" failed (2: No such file or directory)

    So it is handling the regex match, but just not using the right document root. Why is this? For the record, I am trying to get /myapp/* requests handled by the second location, and everything else by the first.
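
    A hedged reading of what is happening, not taken from the question: with root the full request URI is appended, so /myapp/foo is looked up at /home/me/myapp/www/myapp/foo, and the try_files fallback /handle.php is a brand-new internal request that matches location / and therefore resolves to /var/www/default/handle.php. A sketch of one way to keep the fallback inside the app, assuming handle.php really sits in /home/me/myapp/www and is meant to be executed by a PHP handler not shown in the question:

        location ~* ^/myapp(.*)$ {
            root /home/me/myapp/www;
            # prefix the fallback so the internal redirect stays out of "location /"
            try_files $uri $uri/ /myapp/handle.php?url=$uri&$args;
        }

        location = /myapp/handle.php {
            alias /home/me/myapp/www/handle.php;    # map the prefixed URI back to the real file
            # hypothetical: fastcgi_pass / PHP handling for handle.php would go here
        }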

    Read the article

  • How can I send an email from Mail.app to Outlook with an attachment that does not embed into the email body?

    - by JAG2007
    I'm using Mail.app (on Mac OS X 10.6), and when I send an email with an attached image to users on PC Outlook, they get the email with the image embedded in the body, not as an attachment. I even tried clicking "view as icon" before sending the attachment from Mail.app, but that made no difference. I also tried this myself, sending from Mail.app over to my PC's Outlook, and I get the same problem: in Outlook the image does not come through as an attachment, but as an image embedded in the body of the email. The reason this is an issue is primarily that the recipient is then unable to click "save as" and has to copy and paste the image into some other program, which means the file is converted from JPG or PNG to BMP format. But beyond that, most of my recipients don't even know how to copy and paste it into another program to save it that way anyway. They need the "save attachment as" functionality.

    Read the article

  • Websphere SSL handshake with active directory cluster

    - by ring bearer
    We have a WebSphere-based application that uses Active Directory (AD) based security configuration. Under WebSphere "Global security" we have configured the Active Directory server and connection parameters. The Active Directory server is actually a cluster of four servers, say serverdc01, serverdc02, serverdc03 and serverdc04. Each of these servers has its own root certificate with CN=serverdc01, CN=serverdc02, and so on. So to set up SSL communication, I need to retrieve the Active Directory certificate and save it in WebSphere's trust store. When I retrieve the certificate by entering the AD server name and port, I randomly get the certificate of one of serverdc01, serverdc02, ... and then save that certificate to the trust store. The question is: do I have to save the certificate from each of serverdc01, serverdc02, ... in the cluster to WebSphere's trust store? What are general strategies so that each server in the cluster does not require its own root certificate?

    Read the article

  • How to set robots.txt globally in nginx for all virtual hosts

    - by anup
    I am trying to set a single robots.txt for all virtual hosts under the nginx http server. I was able to do it in Apache by putting the following in the main httpd.conf:

        <Location "/robots.txt">
            SetHandler None
        </Location>
        Alias /robots.txt /var/www/html/robots.txt

    I tried doing something similar with nginx by adding the lines given below (a) within nginx.conf and (b) as include conf.d/robots.conf:

        location ^~ /robots.txt {
            alias /var/www/html/robots.txt;
        }

    I have tried with '=' and even put it in one of the virtual hosts to test it. Nothing seemed to work. What am I missing here? Is there another way to achieve this?
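
    A minimal sketch of the usual workaround, not part of the question: a location block is only valid inside a server block, so including a bare location at http level is rejected, and nginx has no Apache-style global Alias. Keeping the location in its own snippet and including that snippet in every server block gets the same effect (the snippet path below is an assumption):

        # /etc/nginx/snippets/robots.conf  (assumed path)
        location = /robots.txt {
            alias /var/www/html/robots.txt;
            access_log off;
            log_not_found off;
        }

    and then, inside each virtual host:

        server {
            listen 80;
            server_name example.com;                     # per-vhost name
            include /etc/nginx/snippets/robots.conf;
            # ... the rest of the vhost configuration ...
        }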

    Read the article

  • VLC Media Server

    - by Josh
    We are using VLC on Ubuntu and trying to set up a streaming media server. The HTTP interface works fine from remote computers, and we can also see the video playing as text if we don't run VLC inside screen. Our problem is the output streaming. When we use the main VLC page you get when you go to the server's IP, it does not save the output MRL (refreshing the page makes it go away, even after clicking save). We tried the VLM page, and it appears to work fine from the HTTP page (it buffers, plays, timers go up when not paused, etc.), and the output parameters do save properly there. However, we still cannot connect remotely with a VLC client. We are noobs when it comes to this. Does anyone have a to-the-point procedure for getting a file X to play and stream on Ubuntu using VLC, assuming VLC is installed?

    Read the article

  • How to unblock chm files - unblock button is missing

    - by Yoav
    I downloaded a CHM file. When I double-click it, it prompts me to open / save / cancel. Whether I open it or save a new copy, the 'new' version prompts with the same open / save / cancel popup, ad infinitum. Searching Google, it seems that Microsoft has deemed it right, for security reasons, to block these files by default. The solution is to right-click the file and click the 'Unblock' button at the bottom of the Properties dialog. The problem is that I don't have that button on my system. By the way, the button is also missing for .exe files. I'm using Win7 64-bit. Any ideas?

    Read the article

  • Microsoft word 2007 unusual problem

    - by Nitz
    Hey guys, I was working today in Microsoft Word 2007. On the first line, as soon as I try to save the file, one sentence is written automatically: "This text was added by using code." If I try to remove that sentence, it comes back, and if I save the file, the sentence is included again. Has anyone come across this kind of error? Even if I open a new file now and save it without writing anything, this sentence is automatically included in the file.

    Read the article

  • OneNote can't connect to SkyDrive in Windows 8.1

    - by Greg
    Since I installed Windows 8.1 I can't open my OneNote notebooks stored on SkyDrive with the Office OneNote 2013 app. When I click in the Office app to open from SkyDrive, it gives back: "We can't get your notebooks right now. Please try again later." I can open them without trouble in the modern UI OneNote app, but I can't open password-protected pages there. Also, if I try to open a notebook from a browser, the error message is: "We couldn't open that location. It might not exist or you might not have permission to open it." Neither can I create new notebooks on SkyDrive with the Office app: "...The specified location is not available. -You do not have permissions to modify the specified location..." Can it be fixed somehow? Or can I at least save a notebook to my hard drive without opening it in Office? The backup file got deleted with the Windows 8.1 installation.

    Read the article

  • How to get two separate remote domain controllers with the same IP to work?

    - by Mr. Mister
    Hi, I have a VPN set up between multiple locations. Between each location and the central point (me) there is a trust between our domain controllers, and it all works great. A new location wants to join, but their AD controller is using an IP address that is already in use by another AD controller in a separate location. Neither location can change its IP address, but apparently there is a NAT rule that could be used to allow communication between the AD controllers. The central site has a Cisco 5510 firewall which could perform the NAT, but I am unsure of the logic behind the NAT rule. Is anyone able to explain or help out? Thanks.

    Read the article

  • Find edited Word attachment opened via Windows Mail (Vista)

    - by Tony Meyer
    If you open (not save) a Word attachment in Windows Mail, then it will open the document up in Microsoft Word. This is saved somewhere, and you can edit the file and save it (i.e. no "save as" dialog appears). If you then close Word without explicitly saving the file somewhere, how do you then find the edited file? (Assuming that it can be found, and isn't automatically deleted). Windows Search does not find the file (presumably because it's in the folders that are excluded from search).

    Read the article

  • Set Error-Pages for all Applications in Tomcat

    - by phisch
    I'm trying to set up custom error pages in Tomcat 6, because I don't want the default ones to show up. My error pages are static HTML, not JSP or anything dynamic. I know how to do this through the web.xml in each application, but I'd prefer to set up the error pages only once for the entire server. I tried to add the following fragment to the global web.xml (in conf), but no matter what I add under location, it does not show:

        <error-page>
            <error-code>404</error-code>
            <location>/404.html</location>
        </error-page>

    What do I need to do to globally define custom error pages? Thanks!

    Read the article

  • Iptables based router inside KVM virtual machine

    - by Anton
    I have a KVM virtual machine (CentOS 6.2 x64) with 2 NICs:

        eth0 - real external IP 1.2.3.4 (simplified example instead of the real one)
        eth1 - local internal IP 172.16.0.1

    Now I'm trying to set up the port mapping 1.2.3.4:80 -> 172.16.0.2:80. Current iptables rules:

        # Generated by iptables-save v1.4.7 on Fri Jun 29 17:53:36 2012
        *nat
        :OUTPUT ACCEPT [0:0]
        :PREROUTING ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        -A POSTROUTING -o eth0 -j MASQUERADE
        -A PREROUTING -p tcp -m tcp -d 1.2.3.4 --dport 80 -j DNAT --to-destination 172.16.0.2:80
        COMMIT
        # Completed on Fri Jun 29 17:53:36 2012
        # Generated by iptables-save v1.4.7 on Fri Jun 29 17:53:36 2012
        *mangle
        :PREROUTING ACCEPT [0:0]
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        COMMIT
        # Completed on Fri Jun 29 17:53:36 2012
        # Generated by iptables-save v1.4.7 on Fri Jun 29 17:53:36 2012
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        COMMIT
        # Completed on Fri Jun 29 17:53:36 2012

    But nothing works; the port is not forwarded. A similar configuration without virtualization seems to work. What am I missing? Thanks!

    Read the article

  • Enable roaming profile from group policy

    - by Rob Nicholson
    I've had a reasonable look around the AD policies, but am I right in saying that the only place you can enable and define the roaming profile location is by editing the user, i.e. there isn't a group policy setting to (say) "Set the profile location to \myserver\users\%username%\profile" for all users in group XYZ? I suspect this might be because of chicken and egg: group policy is applied after the profile has been loaded and therefore can't specify its location. Cheers, Rob.

    Read the article

  • make IIS 7.5 cache static content files across different pages

    - by Achilles
    On Windows 2008 R2, using DNS and IIS, I've set up my development test server; i.e. I have a web application that I can browse at http://test.dev. I've moved all the static content files (images, js files and css files) into another application, which is visible at http://cdn.test.dev. test.dev uses cdn.test.dev URLs like http://cdn.test.dev/js/jquery.js to load js, css and images. When I first load "~/" of test.dev, all files load with a response code of 200; when I press F5 in Firefox, all files, except "~/default.aspx", load with a 304 response code; but pressing Ctrl+F5 loads them again with a 200 code; and if I browse another URL like "~/pages/" in test.dev, all of those static files reload with a 200 code... Is this normal, or am I doing something wrong? Actually, I'm looking for behaviour like this: I want the client to load http://cdn.test.dev/js/jquery.js only once, and I want the client's browser to use this jquery.js file, from cache, on all other pages of test.dev. Is this possible? This is the web.config file I have in the root directory of cdn.test.dev:

        <configuration>
          <system.webServer>
            <caching>
              <profiles>
                <add extension=".png" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
                <add extension=".gif" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
                <add extension=".jpg" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
                <add extension=".js" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
                <add extension=".css" policy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
                <add extension=".axd" kernelCachePolicy="CacheUntilChange" varyByHeaders="User-Agent" location="Client" />
              </profiles>
            </caching>
            <httpProtocol allowKeepAlive="true">
              <customHeaders>
                <add name="Cache-Control" value="public, max-age=31536000" />
              </customHeaders>
            </httpProtocol>
            <validation validateIntegratedModeConfiguration="false" />
            <modules runAllManagedModulesForAllRequests="true">
              <remove name="RadUploadModule" />
              <remove name="RadCompression" />
              <add name="RadUploadModule" type="Telerik.Web.UI.RadUploadHttpModule" preCondition="integratedMode" />
              <add name="RadCompression" type="Telerik.Web.UI.RadCompression" preCondition="integratedMode" />
            </modules>
            <handlers>
              <remove name="ChartImage_axd" />
              <remove name="Telerik_Web_UI_SpellCheckHandler_axd" />
              <remove name="Telerik_Web_UI_DialogHandler_aspx" />
              <remove name="Telerik_RadUploadProgressHandler_ashx" />
              <remove name="Telerik_Web_UI_WebResource_axd" />
              <add name="ChartImage_axd" path="ChartImage.axd" type="Telerik.Web.UI.ChartHttpHandler" verb="*" preCondition="integratedMode" />
              <add name="Telerik_Web_UI_SpellCheckHandler_axd" path="Telerik.Web.UI.SpellCheckHandler.axd" type="Telerik.Web.UI.SpellCheckHandler" verb="*" preCondition="integratedMode" />
              <add name="Telerik_Web_UI_DialogHandler_aspx" path="Telerik.Web.UI.DialogHandler.aspx" type="Telerik.Web.UI.DialogHandler" verb="*" preCondition="integratedMode" />
              <add name="Telerik_RadUploadProgressHandler_ashx" path="Telerik.RadUploadProgressHandler.ashx" type="Telerik.Web.UI.RadUploadProgressHandler" verb="*" preCondition="integratedMode" />
              <add name="Telerik_Web_UI_WebResource_axd" path="Telerik.Web.UI.WebResource.axd" type="Telerik.Web.UI.WebResource" verb="*" preCondition="integratedMode" />
            </handlers>
            <security>
              <requestFiltering>
                <requestLimits maxAllowedContentLength="10485760" />
              </requestFiltering>
            </security>
            <staticContent>
              <clientCache cacheControlMode="UseExpires" httpExpires="Wed, 01 Jan 2020 00:00:00 GMT" />
            </staticContent>
          </system.webServer>
          <appSettings />
          <system.web>
            <compilation debug="false" targetFramework="4.0" />
            <pages>
              <controls>
                <add tagPrefix="telerik" namespace="Telerik.Web.UI" assembly="Telerik.Web.UI" />
              </controls>
            </pages>
            <httpHandlers>
              <add path="ChartImage.axd" type="Telerik.Web.UI.ChartHttpHandler" verb="*" validate="false" />
              <add path="Telerik.Web.UI.SpellCheckHandler.axd" type="Telerik.Web.UI.SpellCheckHandler" verb="*" validate="false" />
              <add path="Telerik.Web.UI.DialogHandler.aspx" type="Telerik.Web.UI.DialogHandler" verb="*" validate="false" />
              <add path="Telerik.RadUploadProgressHandler.ashx" type="Telerik.Web.UI.RadUploadProgressHandler" verb="*" validate="false" />
              <add path="Telerik.Web.UI.WebResource.axd" type="Telerik.Web.UI.WebResource" verb="*" validate="false" />
            </httpHandlers>
            <httpModules>
              <add name="RadUploadModule" type="Telerik.Web.UI.RadUploadHttpModule" />
              <add name="RadCompression" type="Telerik.Web.UI.RadCompression" />
            </httpModules>
            <httpRuntime maxRequestLength="10240" />
          </system.web>
        </configuration>

    and this is the resulting response header for http://cdn.test.dev/css/global.css:

        Cache-Control: private,public, max-age=31536000
        Content-Type: text/css
        Content-Encoding: gzip
        Expires: Wed, 01 Jan 2020 00:00:00 GMT
        Last-Modified: Mon, 06 Sep 2010 08:53:06 GMT
        Accept-Ranges: bytes
        Etag: "0454eca04dcb1:0"
        Vary: Accept-Encoding
        Server: Microsoft-IIS/7.5
        X-Powered-By: ASP.NET
        Date: Mon, 06 Sep 2010 14:57:08 GMT
        Content-Length: 4495

    Read the article
