Search Results

Search found 5530 results on 222 pages for 'nested urls'.

  • For business people to manage, keep binary images in MySQL or just the URLs?

    - by Michael Mao
    Hello everyone: I am working on a task to enable image uploading and auto-scaling (from full size to thumbnail) with jQuery and PHP. I can naturally come up with two approaches: first, store the images as binary objects directly in MySQL; second, store only the URLs to the images and keep the images somewhere on the server. The images are for everyone to view, so there are no security restrictions, as far as I know. Personally I don't have any preference; however, at the end of the day, it is the business people who are going to manage the images as part of the system (CRUD). So I am wondering which approach is a bit better for them? Of course I am building an easy-to-use, visual web interface for the staff to control the process, but I am not sure if that is enough. Experience has taught me that if I don't think ahead and seek the most flexible approach, I will probably screw myself sooner or later. PS. The following link is what I've found so far, which is pretty cool, no Flash involved :) Andrew Valum's ajax image upload jQuery plugin
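
    A minimal PHP sketch of the second approach, assuming a MySQL table images(id, path) and an uploads/ directory; all names are illustrative, not part of the question:

        <?php
        // a sketch: keep the file on disk, store only its path in MySQL
        $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

        $tmp  = $_FILES['image']['tmp_name'];
        $path = 'uploads/' . uniqid() . '.jpg';   // illustrative naming scheme

        if (move_uploaded_file($tmp, $path)) {
            $stmt = $pdo->prepare('INSERT INTO images (path) VALUES (?)');
            $stmt->execute(array($path));
        }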

  • Rails: Obfuscating Image URLs on Amazon S3? (security concern)

    - by neezer
    To make a long explanation short, suffice it to say that my Rails app allows users to upload images to the app that they will want to keep in the app (meaning, no hotlinking). So I'm trying to come up with a way to obfuscate the image URLs so that the address of the image depends on whether or not that user is logged in to the site; if anyone tried hotlinking to the image, they would get a 401 access denied error. I was thinking that if I could route the request through a controller, I could reuse a lot of the authorization I've already built into my app, but I'm stuck there. What I'd like is for my images to be accessible through a URL to one of my controllers, like: http://railsapp.com/images/obfuscated?member_id=1234&pic_id=7890 If the user were to right-click on the image displayed on the website and select "Copy Address", then paste it in, it would be the SAME URL (as in, it wouldn't betray where the image is actually hosted). The actual image would live at a URL like this: http://s3.amazonaws.com/s3username/assets/member_id/pic_id.extension Is this possible to accomplish? Perhaps using Rails' render method? Or something else? I know it's possible for PHP to return the correct headers to make the browser think it's an image, but I don't know how to do this in Rails... UPDATE: I want all users of the app to be able to view the images if and ONLY if they are currently logged on to the site. If the user does not have a currently active session on the site, accessing the images directly should yield a generic image, or an error message.
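
    A rough Rails sketch of the controller idea; the login filter, the hard-coded extension and the S3 URL pattern are assumptions taken from the question, not working code:

        # app/controllers/images_controller.rb -- a sketch, not drop-in code
        require 'net/http'

        class ImagesController < ApplicationController
          before_filter :require_login   # reuse the app's existing auth here

          def obfuscated
            # fetch the real image from S3 server-side, so the S3 URL is
            # never exposed to the browser
            s3_url = "http://s3.amazonaws.com/s3username/assets/" +
                     "#{params[:member_id]}/#{params[:pic_id]}.jpg"
            data = Net::HTTP.get(URI.parse(s3_url))
            send_data data, :type => 'image/jpeg', :disposition => 'inline'
          end
        end

    Per the UPDATE, require_login could render a generic placeholder image instead of redirecting when there is no active session.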

  • Apache - How to disable gzip content encoding (e.g. DEFLATE) for one set of URLs?

    - by Rory McCann
    I have an Ubuntu Apache webserver and I have enabled mod_deflate to gzip all the content. However, there's one folder I'd like to disable mod_deflate for. I was going to do something like this:

        <Location /myfolder>
            RemoveOutputFilter DEFLATE
        </Location>

    But that doesn't work. Rationale: I am trying to debug an XMLRPC server and I am using Wireshark to see what gets passed in the HTTP requests; since the replies are gzipped, I can't see what's going on.
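
    One hedged alternative: mod_deflate honors the no-gzip environment variable, so setting it for that folder (via mod_env) may work where removing the filter does not:

        <Location /myfolder>
            # mod_deflate skips any response where no-gzip is set
            SetEnv no-gzip 1
        </Location>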

  • How to prevent mod_proxy from rewriting redirects into absolute URLs?

    - by Yang
    I have: nginx (port 80) reverse-proxying to apache2 (port 88) reverse-proxying to a web app (port 5001). However, when the web app responds with a redirect like Location: /foo, apache2 rewrites this into Location: http://host.com:88/sub/foo, even though port 88 is publicly inaccessible. I'd like it to just redirect to the relative URL Location: /sub/foo. Any ideas? My apache config (using mod_proxy_http, mod_proxy_html, mod_substitute):

        <Location /notes/>
            Allow from all
            ProxyPass http://127.0.0.1:5001/
            SetOutputFilter proxy-html
            ProxyPassReverse /
            ProxyHTMLURLMap / /notes/
            RequestHeader unset Accept-Encoding
            AddOutputFilterByType SUBSTITUTE application/atom+xml
            Substitute "s|127.0.0.1:5001|host.com/notes|"
        </Location>
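
    A hedged guess at an adjustment: inside a <Location> block, ProxyPassReverse takes the backend URL as its single argument, letting apache map the backend's redirect targets back onto the public /notes/ prefix; whether this also cures the port-88 absolutization may depend on ServerName/UseCanonicalName settings:

        <Location /notes/>
            ProxyPass http://127.0.0.1:5001/
            # map backend Location headers back to the public prefix
            ProxyPassReverse http://127.0.0.1:5001/
        </Location>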

  • Google not indexing new forum

    - by Tom Gullen
    We installed a new forum a few months ago. The URL is: https://www.scirra.com/forum I've 301'd the old topics/threads and included all the new URLs in the sitemap, yet they are still not appearing. Webmaster Tools is showing 139,512 URLs submitted and 50,544 URLs indexed, and it has been stuck there for quite some time. There has also been a massive drop in indexed pages since we updated the forum. Any help much appreciated.

  • In Google Webmaster Tools we have 3 sitemaps attributed to 1 domain

    - by Frank
    Thanks for your advice and help ahead of time. I have a website that has been on the internet for almost 10 years, created in Microsoft FrontPage, with over 900 pages. Currently in Google Webmaster Tools it shows up as 2 domains and 3 sitemaps: http://www.example.com, example.com and hostedsitemaps.com. Furthermore, since we were having a hard time placing the XML sitemap on our site (FrontPage issues), we decided to hire pro-sitemaps.com to create, host and upload the XML file, which they did. Thus, I have another site, hostedsitemaps.com, in our Webmaster Tools. hostedsitemaps.com shows: 900 URLs submitted, 800 indexed; crawl errors and search queries: no data available. http://www.example.com shows: 889 URLs submitted, 1 URL indexed; crawl errors: 14 soft 404, 796 not found; search queries: 8104. example.com shows: 889 URLs submitted, 1 URL indexed; crawl errors: 48 soft 404, 91 not found; search queries: 8104. My questions and need for help are as follows: 1. Why are our domain-based sites in Webmaster Tools (example.com and http://www.example.com) showing only 1 URL indexed while the hosted sitemap has 800 indexed? 2. Should we have 3 domains configured for this one domain in Google Webmaster Tools? 3. Should we delete the hosted sitemap from Webmaster Tools completely and take off that XML sitemap? 4. Does having example.com and http://www.example.com impact web ranking? (See the sketch below.) 5. Any other thoughts or help in this very complicated matter for us. Thanks.
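
    On question 4: one common way to avoid the www/non-www split is a single 301 that consolidates everything onto one host; a sketch, assuming the server runs Apache with mod_rewrite (.htaccess):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]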

  • Moving from http to https - Google webmaster tools | Bing webmaster tools

    - by user2240778
    I'm moving from http to https for my entire site. The site is currently added to Google Webmaster Tools as www.example.com and all the pages are indexed as http. How do I go about moving to the new https URLs in Google Webmaster Tools? Do I just submit an updated sitemap which has the https URLs, or do I add a new site as https://www.example.com and submit the sitemap with https URLs there? All the http URLs are set to redirect to their https counterparts.
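
    For reference, a sketch of the kind of http-to-https 301 described, assuming Apache with mod_rewrite:

        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]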

  • I run Webmin and I want it to be accessed with two URLs, both using ProxyPass in Apache

    - by user36644
    This is what I am trying to do:

        NameVirtualHost *

        <VirtualHost *>
            ServerName testsite.org
            ServerAdmin [email protected]
            DocumentRoot /var/www/
        </VirtualHost>

        <VirtualHost *>
            ServerName panel.testsite.org
            ProxyPass / http://panel.testsite.org:10000/
            ProxyPassReverse / http://panel.testsite.org:10000/
        </VirtualHost>

        <VirtualHost 12.34.56.78>
            ServerName newsite.com
            ServerAdmin [email protected]
            DocumentRoot /var/newsite/
        </VirtualHost>

        <VirtualHost 12.34.56.78>
            ServerName panel.newsite.com
            ProxyPass / http://panel.newsite.com:10000/
            ProxyPassReverse / http://panel.newsite.com:10000/
        </VirtualHost>

    The problem is that it won't accept the second vhost with the IP 12.34.56.78, because it says one already exists. panel.newsite.com and newsite.com have the same IP, so I am not sure how I can make it so that only the URL panel.newsite.com gets proxypassed to port 10000, but no other URL on newsite.com does.
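
    A hedged guess at the fix: on Apache 2.2, name-based virtual hosts on a specific IP need a matching NameVirtualHost line; without it the second <VirtualHost 12.34.56.78> block is rejected:

        NameVirtualHost 12.34.56.78

        <VirtualHost 12.34.56.78>
            ServerName newsite.com
            DocumentRoot /var/newsite/
        </VirtualHost>

        <VirtualHost 12.34.56.78>
            ServerName panel.newsite.com
            ProxyPass / http://panel.newsite.com:10000/
            ProxyPassReverse / http://panel.newsite.com:10000/
        </VirtualHost>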

  • How to let mod_wsgi only handle certain URLs under Apache?

    - by Frederik
    I have a Django app that handles "/admin/" and "/myapp/". All the other requests should be handled by Apache. I've tried using LocationMatch but then I'd have to write a negative regex. I've tried WSGIScriptAlias with the /admin/ prefix but then the wsgi_handler receives the request with the /admin/ part cut off. Is there a cleaner way to make mod_wsgi only handle certain requests?
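
    One hedged approach: mount the same WSGI script at each prefix the Django app should own and let Apache serve everything else; mod_wsgi puts the mount point into SCRIPT_NAME, so the /admin/ part is shifted there rather than lost (paths are illustrative):

        WSGIScriptAlias /admin /srv/django/app.wsgi
        WSGIScriptAlias /myapp /srv/django/app.wsgi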

  • Make shortened and long URLs play together on the same domain (RewriteRule).

    - by Renato Renato
    Long story short, I want to have both example.com/aJ5 and example.com/any-other-url working together. I'm using Apache and am not very good at writing regexes. I already have a global RewriteRule which sends everything to the app entry point. What I need is to tell Apache: if length($path) is <= 5 chars, then rewrite to another location. I know I can use {1,5}-style syntax in a regex, but don't really know if it's what I'm looking for. I'd like to implement this at the web-server level rather than the PHP level. Any help is appreciated.
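
    A sketch of the length test in .htaccess, assuming short codes are alphanumeric; short.php is an illustrative handler. Placed before the global rule, it matches first:

        RewriteEngine On
        # 1 to 5 path characters, no slashes: treat as a short code
        RewriteRule ^([A-Za-z0-9]{1,5})$ /short.php?code=$1 [L]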

  • Force Google to reindex

    - by Matthias
    I changed the structure of my URLs. The pages are indexed by Google and have the following structure: http://mypage.com/myfolder/page.aspx The new structure is: http://mypage.com/page.aspx Now all the URLs that Google knows are wrong. How can I tell Google to reindex, and that the structure has changed? Internally I redirect in ASP.NET when the URL contains myfolder, but I want Google to update the URLs.
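
    A hedged web.config sketch for issuing 301s instead of internal redirects, assuming IIS 7 with the URL Rewrite module installed:

        <system.webServer>
          <rewrite>
            <rules>
              <!-- permanently redirect /myfolder/page.aspx to /page.aspx -->
              <rule name="myfolder-moved" stopProcessing="true">
                <match url="^myfolder/(.*)$" />
                <action type="Redirect" url="/{R:1}" redirectType="Permanent" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>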

  • How do I fix the Google Webmaster Tools warning: "URL not followed?"

    - by user3611500
    A few days after submitting my sitemap to Google, I received this warning: When we tested a sample of URLs from your Sitemap, we found that some URLs redirect to other locations. We recommend that your Sitemap contain URLs that point to the final destination (the redirect target) instead of redirecting to another URL. The example URL Google gave me is http://iketqua.net/?_escaped_fragment_=CIDTKT/mien-trung/xo-so-kon-tum I checked all possible things that I could think of, but still can't figure out what the warning is about! My sitemap: http://iketqua.net/sitemap.xml
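
    For illustration only: a sitemap entry should name the redirect's final target rather than a URL that 301s elsewhere; the <loc> below is a guess at what that target might look like, not the site's real URL:

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url>
            <loc>http://iketqua.net/mien-trung/xo-so-kon-tum</loc>
          </url>
        </urlset>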

  • How to configure two URLs for one server using a wildcard certificate?

    - by Amit
    Hi, we have a wildcard certificate installed in our production environment. One of our clients wants his name to appear in the URL (e.g. companyname.sitename.net). How should we facilitate this? Do we need to make any entries for this in DNS? If yes, can you please let me know about it? I need to set this up before Friday PST; any help with this is highly appreciated. Thanks.
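
    Yes, a DNS entry is needed so the client name resolves to your server; a zone-file sketch (the names and the documentation IP are illustrative):

        companyname.sitename.net.  IN  CNAME  sitename.net.
        ; or cover every future client name at once:
        *.sitename.net.            IN  A      203.0.113.10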

  • Once VPNed into pfSense, unable to hit the public URLs of my websites - they are routed to the pfSense box

    - by Sean
    I have a pfSense box set up as the firewall/router/VPN appliance at my colo. Once I VPN into the colo (either PPTP or OpenVPN; PPTP preferred due to multiple clients and ease of configuration), I am able to hit all my servers by their private 10.10.10.x IPs and am able to browse the public internet without issue. When I try to hit the URL of a domain hosted by one of my servers, I am prompted for credentials. If I log in using the pfSense credentials, I'm connected to pfSense as if I'd used its internal IP. If I hack my hosts file to point the URL at the server's private IP, it works fine, but this is obviously not a good solution. To recap: not connected to VPN, www.myurl.com works; connected to VPN, www.myurl.com never makes it to the correct server, but is sent only to the pfSense box. I'm sure it's something small that I've missed in the pfSense config.

  • I've changed my URL schema. How do I tell Google to index the new schema and forget the old one?

    - by growse
    I had a site where the URLs were constructed like this: /index.php/Topic /index.php/AnotherTopic These were indexed in Google, and search results were returned that pointed to them. However, I've recently replatformed the site and reconfigured it so the above URLs would be: /index.php?title=Topic /index.php?title=AnotherTopic The original URLs are returning 404s. The site links to the correct URL schema internally, but Google is retaining the original schema in its search results. I've updated and resubmitted the sitemap, which only contains the new schema. Also, Google's Webmaster Tools is going slightly bananas at the fact that there's now a spike in 404 errors in its crawl results. What would be the best approach to get Google to 'forget' about the old schema and instead index the new schema? Should I try blocking /index.php/ in robots.txt? Should I be returning 301 codes instead of 404s for the original URLs?
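
    Returning 301s is the usual way to carry the old schema's standing over to the new one; a hedged sketch for the virtual-host config, assuming the site runs on Apache with mod_rewrite:

        RewriteEngine On
        # permanently redirect /index.php/Topic to /index.php?title=Topic
        RewriteRule ^/index\.php/(.+)$ /index.php?title=$1 [R=301,L]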

  • How can I track hits to areas of my web application?

    - by Tyson
    We have a growing web application, and we currently use Google Analytics and Chartbeat to track usage and engagement (although we're open to alternatives). Unfortunately, both are geared towards content-based sites where everything is about the URL. Our URLs contain object IDs, making them less useful independently, and causing us to grow beyond Google Analytics' 50,000 unique URLs per day. How can we track hits to areas of our web application, essentially ignoring parts of the URLs?
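
    One hedged option with classic Google Analytics: report a normalized "virtual" path instead of the real URL, so object IDs collapse into one area (the path scheme is illustrative):

        // instead of letting GA record /projects/48271/edit, report the area
        _gaq.push(['_trackPageview', '/projects/edit']);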

  • Can I tell Chrome not to redirect mistyped URLs?

    - by Nathan Long
    If I mistype a URL, Chrome sometimes redirects me to a search. For instance, typing "example_url_i_sometimes_mistype.conm" into the location bar gets me: Your search - example_url_i_sometimes_mistype.conm - did not match any documents. Not only is this annoying, but if the mistyped URL was one on private DNS, I've now just told Google that the domain exists. (A small concern, but bad in principle.) Can I configure Chrome to just show an error and not blab to Google Search about it?

  • Do browsers change URLs of saved bookmarks in response to 301 redirection?

    - by elliot100
    HTTP status code 301 is used to indicate that content has moved permanently, and that the returned URL should be used to access the requested content in future. RFC 2616 says Clients with link editing capabilities ought to automatically re-link references to the request-URI to one or more of the new references returned by the server, where possible. Do any browsers actually implement this and change a bookmark's URL?

  • How can I get rid of the long Google results URLs?

    - by Teifi
    google.com is always shielded by our firewall. When I search for something at google.com, a result list appears. When I click a link, the URL changes to a processed URL like: http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CDcQFjAA&url=http%3A%2F%2Fwww.amazon.com%2F&ei=PE_AUMLmFKW9iAfrl4HoCQ&usg=AFQjCNGcA9BfTgNdpb6LfcoG0sjA7hNW6A&cad=rjt Then my browser is blocked, because of google.com I guess. The only useful information in that long processed URL is http%3A%2F%2Fwww.amazon.com (http://www.amazon.com). My questions: What's the meaning of that long processed URL? Is there a way to remove the google.com/url?sa.. prefix each time I click a search result?
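
    For what it's worth, the destination is just the percent-encoded url parameter; a small PHP sketch (illustrative, with the tracking parameters shortened) that extracts it:

        <?php
        // a sketch: pull the destination out of a google.com/url?... link
        $google = 'http://www.google.com/url?sa=t&url=http%3A%2F%2Fwww.amazon.com%2F';
        parse_str(parse_url($google, PHP_URL_QUERY), $params);
        echo $params['url'];   // http://www.amazon.com/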

  • How to remove an entry from Chrome's remembered URLs in the URL bar?

    - by cmcculloh
    I've got a URL in Chrome, "local.mysite.com", that autopopulates when I start typing "local.my" into the URL bar. Note that this URL DOES NOT EXIST in my browser history (at chrome://history/#e=1&p=0) because it isn't a real site and therefore could never be successfully visited and therefore never shows up in my history. The URL I want is "local.mysite.com/subdir/". That URL is about 3 down in the suggested results, because I keep accidentally hitting "enter" when it auto-suggests the unwanted first URL, thus reinforcing its assumption that that is the one I want. How do I get rid of the "local.mysite.com" entry in Chrome's memory?

  • Is it okay to use random URLs instead of passwords?

    - by stew
    Is it considered "safe" to use URLs constructed from random characters, like this? http://example.com/EU3uc654/Photos I'd like to put some files/picture galleries on a webserver that are only to be accessed by a small group of users. My main concern is that the files should not get picked up by search engines or curious power users who poke around my site. I've set up an .htaccess file, only to notice that clicking on http://user:pass@url/ links doesn't work well with some browsers/email clients, prompting dialogs and warning messages that confuse my not-too-computer-savvy users.
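
    If the main worry is search engines rather than determined users, a hedged sketch for an .htaccess inside the hidden folder that marks it noindex without triggering auth dialogs (requires mod_headers):

        # keep these pages out of search results even if a link leaks
        Header set X-Robots-Tag "noindex, nofollow"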
