Search Results

Search found 28650 results on 1146 pages for 'content length'.


  • All in a Day's Work: Unblocking Multiple Downloaded Files with a Single Command

    - by Sam Abraham
    Files downloaded using Internet Explorer retain the Internet Zone permission level and hence are “Blocked” by default on Windows 7 machines. Honestly, while an added overhead for developers, I really appreciate this feature as it provides a good protection layer for casual web users. My workaround is to simply unblock the downloaded zip file (if the download was a zip file), which in turn unblocks the files stored within. Today, however, I was left with a situation where I had to “Open” and “Copy” the content rather than “Save” a zip file. That of course left me with a few dozen files I had to manually unblock. A few minutes of internet search led me to the link below, which worked like a charm: 1. Download streams.exe from Sysinternals - http://technet.microsoft.com/en-us/sysinternals/bb897440.aspx 2. Go to the command prompt (cmd.exe) 3. Navigate to where you have streams.exe installed 4. Use the command-line switches: streams.exe -s -d “<folder path>” This removed the Internet Zone restrictions from all files under “<folder path>” and its subfolders as well. [Deleted :Zone.Identifier:$DATA] References: http://social.technet.microsoft.com/Forums/en-US/itproxpsp/thread/806f0104-1caa-4a66-b504-7a681d1ccb33/
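
    A scripted alternative, if streams.exe is not at hand: on NTFS the "block" is just the Zone.Identifier alternate data stream, and deleting that stream per file has the same effect. A rough Python sketch (the folder path is a placeholder; assumes Windows/NTFS):

        import os

        def unblock_tree(root):
            """Remove the Zone.Identifier alternate data stream from every file under root."""
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    ads = os.path.join(dirpath, name) + ":Zone.Identifier"
                    try:
                        os.remove(ads)      # DeleteFileW accepts file:stream syntax on NTFS
                    except OSError:
                        pass                # no stream present - file was never blocked

        unblock_tree(r"C:\path\to\downloaded\files")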

    Read the article

  • Block by file type, but just file extension using MDaemon

    - by Arjun Rajagopalan
    I've had users sending copyrighted files (songs, videos) to each other over email. I blocked the file extensions .mp3 etc. What some users have done is rename the files to .doc etc. I can't block .doc and similar file types because they are needed for day-to-day work. I'm using the MDaemon 12 mail server. Does anyone know how to make it block these attachments? I've been working on some content-scanning-for-file-type code, but was wondering if there is an already-made solution?
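
    One way to do the content scanning yourself, independent of MDaemon, is to check an attachment's leading bytes (its magic number) instead of trusting the extension. A small, hedged Python sketch (the signature list is illustrative, not exhaustive):

        # Sniff the real file type from magic bytes, regardless of a renamed extension.
        MAGIC = [
            (b"ID3", "mp3 (ID3v2 tag)"),
            (b"\xff\xfb", "mp3 (MPEG audio frame)"),
            (b"RIFF", "riff container (wav/avi)"),
            (b"\x1a\x45\xdf\xa3", "matroska/webm video"),
        ]

        def sniff(path):
            with open(path, "rb") as f:
                head = f.read(12)
            if head[4:8] == b"ftyp":        # mp4/m4a family stores 'ftyp' at offset 4
                return "mp4/m4a"
            for magic, kind in MAGIC:
                if head.startswith(magic):
                    return kind
            return "unknown"

        print(sniff("quarterly_report.doc"))  # a renamed .mp3 would still report as mp3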

    Read the article

  • Is Wordpress more appropriate than Magento/Opencart for site like this?

    - by Alex
    The premise of the site is that a user pays a small fee to advertise an item that they want to sell. The user is therefore responsible for adding the "products", not the administrator. The product upload will create a product page for that item. This is a rather common framework that I'm sure you're familiar with. My initial thought was that it would be best suited to Magento - mainly because it needs to accept payments - and the products will grow to form a catalog of categorized products. However, there is no concept of a shopping cart. A buyer does not buy the item online or go to a checkout. They simply look at the product and contact the seller if they like it. The buyer and seller then take it from there. For this reason, I begin to suspect that Magento is perhaps overkill, or simply not the right CMS, if there is no checkout procedure (other than the uploader making a payment). So then I begin to think WordPress... Hmm. Feature requirements: users can add content via a form process; users can be directed to a payment gateway; for each product listing, a series of photographs shall be displayed in thumbnail form; zoom/rotate capabilities on the images would be a welcome feature. In short - e-commerce CMS, or something more simple?

    Read the article

  • Hosting and scaling of a facebook application on cloud?

    - by DhruvPathak
    We would be building a Facebook application in Django (Python), but are still not sure where to host it economically, and with good provision to scale in case the app goes viral. Some details about the app: i) It would be HTML-based like a website, using Django as a framework. ii) 100K is the number of expected pageviews in a day, if the app is viral. iii) The users will not generate any media content; only some database data will be generated by them. It would be great if someone with more experience could guide us on the following points: A) Hosting on Google App Engine or Amazon EC2 or some other cloud like Rackspace: preferable points found in App Engine were ease of deployment, cost effectiveness and easy scaling; for EC2: full hold of the virtual machine, and Amazon NoSQL and RDBMS database services in case we decide to use them. B) Does backend technology affect monthly cost? E.g. would the CPU and memory usage difference of Django over, for example, a PHP framework like CodeIgniter really make a remarkable difference in running costs? (Here is the article that triggered this thought process: http://journal.dedasys.com/2010/01/12/rough-estimates-of-the-dollar-cost-of-scaling-web-platforms-part-i#comments) C) Does something like Heroku, which provides additional services over Amazon EC2, prove to be better than raw cloud management? It is not that we are trying for premature scaling; we just want a good start so that we are ready to handle unpredicted growth and scale.
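
    For a sense of scale, 100K pageviews a day is a small average request rate; peaks will run several times higher, but the rough arithmetic (the peak multiplier below is an assumption) looks like this:

        pageviews_per_day = 100_000
        seconds_per_day = 24 * 60 * 60                       # 86,400
        avg_rps = pageviews_per_day / seconds_per_day
        print(f"average: {avg_rps:.2f} req/s")               # ~1.16 req/s
        print(f"assumed 8x peak: {avg_rps * 8:.1f} req/s")   # ~9.3 req/s during busy hours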

    Read the article

  • Reverse proxy for a subdirectory in nginx

    - by Maple
    I want to set up a reverse proxy on my VPS for my Heroku app (http://lovemaple.heroku.com), so that if I visit mysite.com/blog I get the content from http://lovemaple.heroku.com. I followed the instructions on the Apache wiki:

        location /couchdb {
            rewrite /couchdb/(.*) /$1 break;
            proxy_pass http://localhost:5984;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

    I changed it to fit my situation:

        location /blog {
            rewrite /blog/(.*) /$1 break;
            proxy_pass http://lovemaple.heroku.com;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

    When I visit mysite.com/blog, the page shows up, but the js/css files cannot be fetched (404). Their links become mysite.com/style.css rather than mysite.com/blog/style.css. What's wrong and how can I fix it?

    Read the article

  • The last word on C++ AMP...

    - by Daniel Moth
    Well, not the last word, but the last blog post I plan to do here on that topic. Over the last 12 months, I have published 45 blog posts related to C++ AMP on the Parallel Programming in Native Code blog, and the rest of the team has published even more. Occasionally I'll link to some of them from my own blog here, but today I decided to stop doing that - so if you relied on my personal blog pointing you to C++ AMP content, it is time you subscribed to the msdn blog. I will continue to blog about other topics here of course, so stay tuned. So, for the last time, I encourage you to read the latest two blog posts I published on the team blog, bringing together essential reading material on C++ AMP: Learn C++ AMP - a collection of links to take you from zero to hero; Present on C++ AMP - a walkthrough on how to give a presentation, including slides. Got questions on C++ AMP? Hit the msdn forum! Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • What are some efficient ways to set up my environment when working on a remote site?

    - by Prefix
    Hello fellow programmers, I am still a relatively new programmer and have recently gotten my first on-campus programming position. I am the sole dev responsible for 8 domains as well as 3 small PHP web apps. The campus has its web environment divided into staging and live servers -- we develop on the staging server via SFTP and then push the updates to the live server through a web GUI. I use Sublime Text 2 and the Sublime SFTP plugin currently for all my dev work (it's my preferred editor). If I am just making an edit to a page, I'll open that individual file via the FTP browser. If I am working on the PHP web app projects, I have the app directory mapped to a local folder so that when I save locally the file is auto-uploaded through Sublime SFTP. I feel like this workflow is slow and sub-optimal. How can I improve my workflow for working with remote content? I'd love to set up a local environment on my machine, as that would eliminate the constant SFTP upload/download, but as I said there are many sites, and the space required for a local copy of the entire domain would be quite large and complex; not to mention keeping it updated with whatever the latest on the staging server is would be a nightmare. Anyone know how I can improve my general web dev workflow from what I've described? I'd really like to cut out constantly editing over FTP, but I'm not sure where to start other than ripping the entire directory and dumping it into XAMPP.
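
    One way to keep the "edit locally, run on staging" loop without hand-driving SFTP is a small watcher that pushes a file whenever it is saved. A hedged sketch using the third-party watchdog and paramiko packages (host, credentials and paths are placeholders):

        import os
        import paramiko
        from watchdog.observers import Observer
        from watchdog.events import FileSystemEventHandler

        LOCAL_ROOT = "/home/me/projects/campus-site"     # hypothetical local checkout
        REMOTE_ROOT = "/var/www/staging/campus-site"     # hypothetical staging path

        ssh = paramiko.SSHClient()
        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        ssh.connect("staging.example.edu", username="me", key_filename="/home/me/.ssh/id_rsa")
        sftp = ssh.open_sftp()

        class PushOnSave(FileSystemEventHandler):
            def on_modified(self, event):
                if event.is_directory:
                    return
                rel = os.path.relpath(event.src_path, LOCAL_ROOT)
                sftp.put(event.src_path, REMOTE_ROOT + "/" + rel)   # upload the saved file
                print("pushed", rel)

        observer = Observer()
        observer.schedule(PushOnSave(), LOCAL_ROOT, recursive=True)
        observer.start()
        observer.join()                                  # run until interrupted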

    Read the article

  • Skin Object Tokens for DotNetNuke 5 - 8 Videos

    In this tutorial we demonstrate how to use Skin Object Tokens in DotNetNuke v5 and above. Skin Object Tokens are a new skinning method introduced in DotNetNuke 5 for adding tokens into a DotNetNuke skin. A Skin Object Token is a web user control; it covers skin elements such as the logo, menu, search, login links, date, copyright, languages, links, banners, privacy, terms of use etc. This new Object Token method has been introduced into DotNetNuke with the idea of making it simpler to add a skin object into a DotNetNuke skin. The videos contain: Video 1 - Introduction to HTML Object Token Skinning; Video 2 - Basic Styling of a Skin and Creating Multiple Content Panes; Video 3 - Styling, Control Panel, Login and Register Skin Object Tokens; Video 4 - Packaging, Installing, Testing and Viewing the ASCX Version of the Skin; Video 5 - Viewing the Attributes for Skin Object Tokens, Logo Token, Search Token; Video 6 - Breadcrumb Token, Text Token and Localization, Links Token; Video 7 - More Skin Tokens and Token Replacement; Video 8 - Demonstration of the Object Tokens and Bug Fixing

    Read the article

  • The battery indicator in Unity panel not showing up

    - by user61415
    I installed Ubuntu 12.04 with Wubi. Well, after being completely dazzled with the amount of free content in the Software Centre, I decided to go deeper and start messing with settings. After changing the screen brightness to the highest level, I noticed that there wasn't an indicator for how much battery was left in my laptop. I looked online and got 2 suggestions on how to fix it: right click on the Unity panel and add an indicator; set it to show in the power settings menu. Well, I did both: when I right click at the top menu nothing comes up, and setting it to show does nothing either. Then I tried installing something in the Software Centre. I got something, but when I activated it it said I had 0% power left even though I was charging and at 100% according to the light on the front of my laptop. So now I'm thinking that it doesn't even recognise my computer as a laptop, which is weird because in the display settings it says my screen size is set to laptop. How can I install it? I don't know what version it is other than Ubuntu 12.04, and no matter what, the icon does not appear with the
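
    A quick way to see whether the kernel detects the battery at all, separate from any indicator (this assumes the battery shows up as BAT0; the name can differ):

        from pathlib import Path

        bat = Path("/sys/class/power_supply/BAT0")
        if not bat.exists():
            print("No BAT0 entry - the kernel is not reporting a battery")
        else:
            # These sysfs files are plain text, e.g. "Charging" and "87"
            print("status:  ", (bat / "status").read_text().strip())
            print("capacity:", (bat / "capacity").read_text().strip(), "%")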

    Read the article

  • Text template or tool for documentation of computer configurations

    - by mjustin
    I regularly write and update technical documentation which will be used to set up a new virtual machine, or as a lookup for system dependencies in networks with around 20-50 (server-side) computers. At the moment I use OpenOffice Writer with text tables, and create one document per intranet domain. To improve this documentation, I would like to collect some examples to identify areas where my documents can be improved, regarding general structure and content, to make them easy to read and use not only for me but also for technical staff, helpdesk etc. Are there simple text templates (for example for OpenOffice Writer) or tools (maybe database-driven) for structured documentation of a computer configuration? Such a template / tool should provide required and optional configuration sections, like 'operating system', 'installed services', 'mapped network drives', 'scheduled tasks', 'remote servers', 'logon user account', 'firewall settings', 'hard disk size' ... These documents are not so much about low-level hardware (no BIOS settings, MAC addresses) as about infrastructure / integration information.
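
    As a hedged sketch of the "database-driven" direction: treat each machine as one structured record with required and optional sections (the section names below come from the question; the host values are made up):

        import json

        REQUIRED = ["operating system", "installed services", "logon user account"]
        OPTIONAL = ["mapped network drives", "scheduled tasks", "remote servers",
                    "firewall settings", "hard disk size"]

        host = {                                   # illustrative record only
            "name": "APPSRV01",
            "operating system": "Windows Server 2008 R2",
            "installed services": ["IIS", "OrderImportService"],
            "logon user account": "DOMAIN\\svc_orders",
            "remote servers": ["DBSRV01"],
        }

        missing = [s for s in REQUIRED if s not in host]
        print("missing required sections:", missing or "none")
        print(json.dumps(host, indent=2))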

    Read the article

  • Nginx & Apache Cannot get try_files to work with permalinks

    - by tcherokee
    I have been working on this for the past two weeks now, and for some reason I cannot seem to get nginx's try_files to work with my WordPress permalinks. I am hoping someone will be able to tell me where I am going wrong, and also tell me if I made any major errors with my configurations as well (I am an nginx newbie... but learning :) ). Here are my configuration files:

    nginx.conf

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            # server_tokens off;
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            # Defines the cache log format, cache log location
            # and the main access log location.
            log_format cache '***$time_local '
                             '$upstream_cache_status '
                             'Cache-Control: $upstream_http_cache_control '
                             'Expires: $upstream_http_expires '
                             '$host '
                             '"$request" ($status) '
                             '"$http_user_agent" ';
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    mydomain.com.conf

        server {
            listen 123.456.78.901:80; # IP goes here.
            server_name www.mydomain.com mydomain.com;
            #root /var/www/mydomain.com/prod;
            index index.php;

            ## mydomain.com -> www.mydomain.com (301 - Permanent)
            if ($host !~* ^(www|dev)) {
                rewrite ^/(.*)$ $scheme://www.$host/$1 permanent;
            }

            # Add trailing slash to */wp-admin requests.
            rewrite /wp-admin$ $scheme://$host$uri/ permanent;

            # All media (including uploaded) is under wp-content/ so
            # instead of caching the response from apache, we're just
            # going to use nginx to serve directly from there.
            location ~* ^/(wp-content|wp-includes)/(.*)\.(jpg|png|gif|jpeg|css|js|m$
                root /var/www/mydomain.com/prod;
            }

            # Don't cache these pages.
            location ~* ^/(wp-admin|wp-login.php) {
                proxy_pass http://backend;
            }

            location / {
                if ($http_cookie ~* "wordpress_logged_in_[^=]*=([^%]+)%7C") {
                    set $do_not_cache 1;
                }
                proxy_cache_key "$scheme://$host$request_uri $do_not_cache";
                proxy_cache main;
                proxy_pass http://backend;
                proxy_cache_valid 30m; # 200, 301 and 302 will be cached.
                # Fallback to stale cache on certain errors.
                # 503 is deliberately missing, if we're down for maintenance
                # we want the page to display.
                #try_files $uri $uri/ /index.php?q=$uri$args;
                #try_files $uri =404;
                proxy_cache_use_stale error timeout invalid_header http_500 http_502 http_504 http_404;
            }

            # Cache purge URL - works in tandem with WP plugin.
            # location ~ /purge(/.*) {
            #     proxy_cache_purge main "$scheme://$host$1";
            # }

            # No access to .htaccess files.
            location ~ /\.ht {
                deny all;
            }
        } # End server

    gzip.conf

        # Gzip Configuration.
        gzip on;
        gzip_disable msie6;
        gzip_static on;
        gzip_comp_level 4;
        gzip_proxied any;
        gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    proxy.conf

        # Set proxy headers for the passthrough
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_max_temp_file_size 0;
        client_max_body_size 10m;
        client_body_buffer_size 128k;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        add_header X-Cache-Status $upstream_cache_status;

    backend.conf

        upstream backend {
            # Defines backends.
            # Extracting here makes it easier to load balance
            # in the future. Needs to be specific IP as Plesk
            # doesn't have Apache listening on localhost.
            ip_hash;
            server 127.0.0.1:8001; # IP goes here.
        }

    cache.conf

        # Proxy cache and temp configuration.
        proxy_cache_path /var/www/nginx_cache levels=1:2 keys_zone=main:10m max_size=1g inactive=30m;
        proxy_temp_path /var/www/nginx_temp;
        proxy_cache_key "$scheme://$host$request_uri";
        proxy_redirect off;

        # Cache different return codes for different lengths of time
        # We cached normal pages for 10 minutes
        proxy_cache_valid 200 302 10m;
        proxy_cache_valid 404 1m;

    The two commented-out try_files in location / of the mydomain config file are the ones I tried. The error I found in the error log is below:

        ...rewrite or internal redirection cycle while internally redirecting to "/index.php"

    Thanks in advance

    Read the article

  • how to have files created by CMS have the same ownership as SSH user

    - by Cam
    I am having difficulty on our Ubuntu server: I have an SSH user, and when I create files using this user the ownership is web_user:www-data. The problem is when a file is uploaded or created using a content management system like Joomla. When files are uploaded through Joomla - such as components / modules - the ownership is set to www-data:www-data. This means that I then need to chown all new files to web_user:www-data so we can edit the files. Is there a way to set, for a directory and its sub-directories, that all new files created have the ownership web_user:www-data? Do I need to use something like setuid or setgid? Any help would be greatly appreciated.
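
    setgid on the upload directories is the usual half of the answer: with the group s-bit set, files Joomla (running as www-data) creates inherit the directory's group, so a shared group plus group-write lets both users edit. A hedged Python sketch (the path is a placeholder; this changes group behaviour only, not the owning user):

        import os
        import stat

        ROOT = "/var/www/site"                 # hypothetical web root

        for dirpath, dirnames, filenames in os.walk(ROOT):
            mode = stat.S_IMODE(os.stat(dirpath).st_mode)
            # Keep existing permissions, add group-write and the setgid bit so new
            # files created inside inherit the directory's group.
            os.chmod(dirpath, mode | stat.S_ISGID | stat.S_IWGRP)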

    Read the article

  • PDF export printing in Internet Explorer [closed]

    - by user619804
        protected static byte[] exportReportToPdf(JasperPrint jasperPrint) throws JRException {
            JRPdfExporter exporter = new JRPdfExporter();
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            exporter.setParameter(JRExporterParameter.JASPER_PRINT, jasperPrint);
            exporter.setParameter(JRExporterParameter.OUTPUT_STREAM, baos);
            exporter.setParameter(JRPdfExporterParameter.PDF_JAVASCRIPT,
                    "this.print({bUI: true,bSilent: false,bShrinkToFit: true});");
            exporter.exportReport();
            return baos.toByteArray();
        }

    We are using code like this to export a PDF document from a Jasper application. The line exporter.setParameter(JRPdfExporterParameter.PDF_JAVASCRIPT, "this.print({bUI: true,bSilent: false,bShrinkToFit: true});"); adds JavaScript to send the PDF document directly to the printer. The expected behavior is that a print dialog will come up with a preview of the PDF document. This works fine most of the time - except I am having problems about one out of every 5-6 times in Internet Explorer 8 and Firefox. What happens is the print preview dialog with the PDF document does not appear, or it appears with a blank document in the preview window.
    - I've tried a number of different JavaScripts (different params to this.print() via exporter.setParameter)
    - I've tried setting different response headers such as response.setContentType("application/pdf"); response.setHeader("Content-disposition","inline; filename=\"" + reportName + "\""); response.setContentLength(baos.size()); - these did not seem to help
    This seems to be an IE and FF issue. Has anyone ever dealt with this problem? I need to get it to work across all browsers 100% of the time. Perhaps a different approach to accomplish the goal of sending the PDF document export directly to the printer? Or a third-party library that will work across browsers?

    Read the article

  • web spidering/crawling, can i do it or just search engines?

    - by bboyreason
    I already had a question answered about web scraping with wget, but as I read a little more, I realize I may be looking for a web-crawling program - particularly the part about web crawlers being able to get specific data like links or, in my case, products. All of the products on my site have the following naming convention: website.com/uniqueAlphaNumericID.html. As far as I know, no dynamic content generation is being used, and there is only one page per item in the above format. Should I just be thinking about: wget website.com | grep *.html, or should I be looking into spiders/crawlers?
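
    For a one-off pull of pages matching that naming convention, a full crawler may not be needed; a short standard-library Python sketch (the domain and pattern come from the question, everything else is assumed):

        import re
        import urllib.request

        BASE = "http://website.com"

        html = urllib.request.urlopen(BASE + "/").read().decode("utf-8", "replace")
        # Collect links that follow the /uniqueAlphaNumericID.html convention.
        product_paths = sorted(set(re.findall(r'href="(/[A-Za-z0-9]+\.html)"', html)))
        for path in product_paths:
            print(BASE + path)    # or urllib.request.urlretrieve(...) to save each page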

    Read the article

  • Windows 7 MCE client server HTPC

    - by Dan Hook
    My HTPC is downstairs connected to my large screen. I would like to use my desktop upstairs to record over the air HDTV and stream it to the HTPC. I have Windows 7 Professional installed on the desktop. I currently have XP on the HTPC, but I'm going to upgrade it to Windows 7. Is there a particular flavor of Win 7 I should use? Is it possible to record on the desktop and use Windows MCE on the HTPC to watch the recorded content? What about live TV? The consensus I've seen is that Windows 7 cannot be configured as a Windows Media Center Extender. Is that the case? If so, what's the cheapest solution for an extender?

    Read the article

  • why are my players drawn to the side of my viewport

    - by Jetbuster
    Following this admittedly brilliant and clean 2D camera class, I have a camera on each player, and it works for multiplayer; I've divided the screen into two sections for split screen by giving each camera a viewport. However, in the game it looks like this; I'm not sure if that's their position relative to the screen or what. The relevant gameScreen code (makePlayers is set up so it could theoretically work for up to 4 players):

        private void makePlayers()
        {
            int rowCount = 1;
            if (NumberOfPlayers > 2)
                rowCount = 2;
            players = new Player[NumberOfPlayers];
            for (int i = 0; i < players.Length; i++)
            {
                int xSize = GameRef.Window.ClientBounds.Width / 2;
                int ySize = GameRef.Window.ClientBounds.Height / rowCount;
                int col = i % rowCount;
                int row = i / rowCount;
                int xPoint = 0 + xSize * row;
                int yPoint = 0 + ySize * col;
                Viewport viewport = new Viewport(xPoint, yPoint, xSize, ySize);
                Vector2 playerPosition = new Vector2(viewport.TitleSafeArea.X + viewport.TitleSafeArea.Width / 2,
                                                     viewport.TitleSafeArea.Y + viewport.TitleSafeArea.Height / 2);
                players[i] = new Player(playerPosition, playerSprites[i], GameRef, viewport);
            }
            //players[1].Keyboard = true;
        }

        public override void Draw(GameTime gameTime)
        {
            base.Draw(gameTime);
            foreach (Player player in players)
            {
                GraphicsDevice.Viewport = player.PlayerCamera.ViewPort;
                GameRef.spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
                    SamplerState.PointClamp, null, null, null, player.PlayerCamera.Transform);
                map.Draw(GameRef.spriteBatch);
                // Draw the Player
                player.Draw(GameRef.spriteBatch);
                // Draw UI screen elements
                GraphicsDevice.Viewport = Viewport;
                ControlManager.Draw(GameRef.spriteBatch);
                GameRef.spriteBatch.End();
            }
        }

    The player's initialize and draw methods are like so:

        internal void Initialize()
        {
            this.score = 0;
            this.angle = (float)(Math.PI * 0 / 180); // Start sprite at its default rotation
            int width = utils.scaleInt(picture.Width, imageScale);
            int height = utils.scaleInt(picture.Height, imageScale);
            this.hitBox = new HitBox(new Vector2(centerPos.X - width / 2, centerPos.Y - height / 2),
                                     width, height, Color.Black, game.Window.ClientBounds);
            playerCamera.Initialize();
        }

        #region Methods
        public void Draw(SpriteBatch spriteBatch)
        {
            //Console.WriteLine("Hitbox: X({0}),Y({1})", hitBox.Points[0].X, hitBox.Points[0].Y);
            //Console.WriteLine("Image: X({0}),Y({1})", centerPos.X, centerPos.Y);
            Vector2 orgin = new Vector2(picture.Width / 2, picture.Height / 2);
            hitBox.Draw(spriteBatch);
            utils.DrawCrosshair(spriteBatch, Position, game.Window.ClientBounds, Color.Red);
            spriteBatch.Draw(picture, Position, null, Color.White, angle, orgin, imageScale, SpriteEffects.None, 0.1f);
        }

    As I said, I think I'm going to need to do something with the render position, but I'm not entirely sure what, or how it would be elegant, to say the least.

    Read the article

  • How much effort is involved in moving a WordPress site to a private server? [on hold]

    - by Alan
    I work in tech, but am on the business side. I have a WordPress site that I would like to move to a personal server and associate with a new domain name. I already have a server (actually, a friend is letting me use his) and the domain name. A friend-of-a-friend, who claims to be an IT pro, has agreed to help, but now is asking for what feels like a lot of money for what he says is a pretty time-intensive job. This doesn't sound right to me, so I thought I would ask here: Would it take months or even days to move the content, and why would it have to be moved in stages? The blog currently uses a basic template and has about 1000 posts. How much effort is really involved in moving a WordPress site from one server to another? Can anyone explain the process? Would it just make more sense to point the domain name at the existing WordPress blog, and pay the nominal yearly fee? I appreciate any answers you can provide.

    Read the article

  • Nginx proxy to Apache - resolve HTTP ORIGIN

    - by Fratyr
    I have a server setup with nginx serving static content and proxying all PHP/dynamic requests to Apache on 127.0.0.1. I'm building an API for my databases, and I need to allow clients by their origin (domain name), rather than just IP, based on CORS rules. So when I send an HTTP header header("Access-Control-Allow-Origin: www.client-requesting.myapi.com"); from my API server, I have to tell it which origin I allow, otherwise client-side requests to my API won't work, due to the same-origin policy. The question is: how can I know which domain name (if any) called my API? What should the nginx and Apache configuration be to pass the origin parameter? I tried to google, and all I found was a possible solution with mod_rpaf, but I wanted to be sure. Thanks!
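
    The Origin value arrives from the browser as an ordinary request header, so it passes through the nginx-to-Apache hop and is readable in PHP as $_SERVER['HTTP_ORIGIN']. A sketch of the allow-list logic (written as Python/WSGI here purely for illustration):

        ALLOWED_ORIGINS = {"http://www.client-requesting.myapi.com"}   # whitelist from the question

        def app(environ, start_response):
            origin = environ.get("HTTP_ORIGIN", "")    # empty for same-origin / non-browser calls
            headers = [("Content-Type", "application/json")]
            if origin in ALLOWED_ORIGINS:
                headers.append(("Access-Control-Allow-Origin", origin))  # echo only known origins
            start_response("200 OK", headers)
            return [b'{"ok": true}']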

    Read the article

  • Homepage not showing on Google

    - by MIke Mayberry
    About six weeks ago my homepage (mayberrykayakingdotcodotuk) disappeared from the Google organic search for "kayaking pembrokeshire", despite it having been number 2 within a few weeks of its launch last summer. My previous site (www.mikemayberrykayakingdotcodotuk) had been 2nd for about six years and has 301 redirects for all pages to the new site. Google toolbar still rates the homepage as 3/10, and the domain is still showing in search results, just not the homepage. A little research suggests that this is most likely due to an issue with Google treating two pages as identical content (one with www. and one without) since the changes in their algorithms around that time, and that the way to fix this is to add some code somewhere. This makes sense to me, as my print advertising doesn't have the www part of the address. I have cPanel access but limited knowledge of web coding, having picked things up as I've gone along and paid for designers etc. when needed. Would someone be able to let me know where I have to go to add the code, and what code I need to add, to redirect the crawlers to one page? Or is there another issue that is causing this? Thanks in advance.

    Read the article

  • IIS: redirect everything to another URL, except for one Directory

    - by DrStalker
    I have an IIS server (IIS 6, Win 2003) that hosts the site http://www.foo.com. I want any request to http://foo.com (no matter what path/filename is used) to redirect to http://www.bar.org/AwesomePage.html, UNLESS the request is for http://www.foo.com/specialdir, in which case the HTML files in the local directory specialdir should be used. The problem I have is that once the redirect is set, it also affects /specialdir - even if I right click on that directory and select "content should come from ... local directory", the change does not take effect, and the directory still shows as redirecting to http://www.bar.org/AwesomePage.html. The same thing happens if I try to set individual files to load from the local system instead of redirecting - IIS gives no error, but the change does not take effect and the files still show as being redirected. How can I set specialdir to override the redirection to the new URL?

    Read the article

  • Rules Manager and Expression Filter getting removed

    - by Mike Dietrich
    I doubt that many people are using the Oracle features "Rules Manager" and "Expression Filter", as usually people handle these things (such as ensuring that a zip code or a car number plate has a certain format) within the application code and not inside the database. Oracle Beehive, for instance, uses them just on the side. Anyway, I just learned today that the Rules Manager and Expression Filter components will get removed once our next database release, most likely called Oracle Database 12c, gets released. So before upgrading to Oracle Database 12c you can remove the EXF and RUL components (SELECT COMP_ID FROM DBA_REGISTRY WHERE COMP_ID IN ('EXF','RUL');). You'd simply do that by executing the following script before the upgrade: SQL> @?/rdbms/admin/catnoexf.sql This will clean up the Rules Manager and Expression Filter components inside the database. You could run ?/rdbms/admin/catnorul.sql before that, but I believe catnoexf.sql will clean up everything already. You'll find all this information, plus guidelines for migration of existing content, in MOS Note: 1233535.1 - Obsolescence Notice: Rules Manager and Expression Filter Features of Oracle Database. -M.

    Read the article

  • Are there ways to write php/python code to run as hooks in the Apache Request Processing pipeline?

    - by SB
    Does anybody know of any modules that provide the functionality to write Python or PHP code to run as hooks in the Apache request-processing pipeline? For instance, mod_perl lets me write Perl modules which can contain handlers for the header-parsing phase, content delivery, and even filters. I would like to do something similar in other scripting languages. I could write it in C, but the goal is to deploy a module that would work across a number of systems. If I deliver it as a binary in C, then it would require 64/32-bit versions and raise some other issues. With Perl, I can just require certain modules to be installed along with mod_perl2.
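
    For the Python side specifically, mod_python exposes these same request phases; a minimal sketch of a content handler plus a header-parsing hook (the Apache wiring, e.g. "PythonHandler myhandlers", is assumed):

        from mod_python import apache

        def headerparserhandler(req):
            # Header-parsing phase (mapped from PythonHeaderParserHandler).
            req.headers_out["X-Seen-By"] = "myhandlers"
            return apache.DECLINED        # let later handlers continue processing

        def handler(req):
            # Content-generation phase (mapped from PythonHandler).
            req.content_type = "text/plain"
            req.write("served from a Python request handler\n")
            return apache.OK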

    Read the article

  • Select Data From XML in MS SQL Server (T-SQL)

    - by Doug Lampe
    So you have used XML to give you some schema flexibility in your database, but now you need to get some data out. What do you do? The solution is relatively simple:

        DECLARE @iDoc INT          /* Stores a pointer to the XML document */
        DECLARE @XML VARCHAR(MAX)  /* Stores the content of the XML */

        set @XML = (SELECT top 1 Xml_Column_Name FROM My_Table where Primary_Key_Column = 'Some Value')

        EXEC sp_xml_preparedocument @iDoc OUTPUT, @XML

        SELECT *
        FROM OPENXML(@iDoc, '/some/valid/xpath', 2)
             WITH (output_column1_name varchar(50) 'xml_node_name1',
                   output_column2_name varchar(50) 'xml_node_name2')

        EXEC sp_xml_removedocument @iDoc

    In this example, the XML data would look something like this:

        <some>
          <valid>
            <xpath>
              <xml_node_name1>Value1</xml_node_name1>
              <xml_node_name2>Value2</xml_node_name2>
            </xpath>
          </valid>
        </some>

    The resulting query should give you this:

        output_column1_name    output_column2_name
        ------------------------------------------
        Value1                 Value2

    Note that in this example we are only looking at a single record at a time. You could use a cursor to iterate through multiple records and insert the XML data into a temporary table.

    Read the article

  • Lightweight tool for viewing raw HTTP messages?

    - by rewbs
    Hi, I'm investigating differences in behaviour between a couple of web servers. I need to see the raw response data from the servers (i.e. before the response is de-chunked if it has "Transfer-Encoding: chunked" and before it is decompressed if it has "Content-Encoding: gzip"). I can find plenty of simple HTTP clients that nearly do what I need (e.g. Poster, RESTClient), but they tend to decode the response one step too far. Network analysers like Wireshark give me what I need but are a bit heavyweight. Telnet is my best bet so far, but is a bit too simplistic (actions like capturing data or entering requests are a bit laborious). Can anyone recommend a good, lightweight tool for sending / viewing the raw data that constitutes HTTP messages? Edit: I should add that I'm on Windows. Also, the tool would need to work both with remote and local servers.
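
    If telnet is almost right but too fiddly, a few lines of Python over a raw socket show the response bytes exactly as the server sent them (still chunked, still gzipped); the host is a placeholder:

        import socket

        host = "example.com"
        request = (
            "GET / HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Accept-Encoding: gzip\r\n"
            "Connection: close\r\n"
            "\r\n"
        ).encode("ascii")

        with socket.create_connection((host, 80)) as s:
            s.sendall(request)
            raw = b""
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    break
                raw += chunk

        print(raw[:2000])   # headers plus the start of the raw (still-encoded) body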

    Read the article
