Search Results

Search found 20157 results on 807 pages for 'friendly url'.


  • CSS-Friendly Menu adapter that emits the same markup as .NET 4.0

    - by Joe
    For .NET 2.x/3.x there exists a CSS-Friendly Adapter on CodePlex that emits the markup for an ASP.NET Menu control as a ul. The .NET 4.0 Menu control will also emit a ul, but the CSS class names are different from those emitted by the CSS-Friendly Adapter 1.0 on CodePlex. In the interests of having a single version of the CSS for .NET 2/3/4 sites, I want to create a version of the CSS-Friendly menu adapter that emits the same markup as the .NET 4.0 Menu control. Before doing so, I thought I'd ask here to see if it's already been done, so I don't reinvent the wheel. Anyone?
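
    If no existing port turns up, a minimal sketch of the approach, assuming the same adapter model the CodePlex kit uses: derive from the built-in MenuAdapter and emit nested ul/li elements with the class names the .NET 4.0 Menu control uses in its list rendering (level1, level2, selected; verify these against an actual .NET 4.0 rendering). Net4StyleMenuAdapter is a made-up name.

        using System.Web;
        using System.Web.UI;
        using System.Web.UI.WebControls;
        using System.Web.UI.WebControls.Adapters;

        // Registered via a .browser file in App_Browsers, e.g.:
        //   <controlAdapters>
        //     <adapter controlType="System.Web.UI.WebControls.Menu"
        //              adapterType="Net4StyleMenuAdapter" />
        //   </controlAdapters>
        public class Net4StyleMenuAdapter : MenuAdapter
        {
            protected override void Render(HtmlTextWriter writer)
            {
                Menu menu = (Menu)Control;
                RenderItems(writer, menu.Items, 1);
            }

            private void RenderItems(HtmlTextWriter writer, MenuItemCollection items, int level)
            {
                // .NET 4.0 list-mode rendering uses "levelN" classes (verify locally).
                writer.AddAttribute(HtmlTextWriterAttribute.Class, "level" + level);
                writer.RenderBeginTag(HtmlTextWriterTag.Ul);
                foreach (MenuItem item in items)
                {
                    writer.RenderBeginTag(HtmlTextWriterTag.Li);
                    writer.AddAttribute(HtmlTextWriterAttribute.Href, Page.ResolveUrl(item.NavigateUrl));
                    writer.AddAttribute(HtmlTextWriterAttribute.Class,
                        item.Selected ? "level" + level + " selected" : "level" + level);
                    writer.RenderBeginTag(HtmlTextWriterTag.A);
                    writer.Write(HttpUtility.HtmlEncode(item.Text));
                    writer.RenderEndTag(); // a
                    if (item.ChildItems.Count > 0)
                        RenderItems(writer, item.ChildItems, level + 1); // nested ul for sub-menus
                    writer.RenderEndTag(); // li
                }
                writer.RenderEndTag(); // ul
            }
        }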

    Read the article

  • How to prevent users from changing URL parameters in PHP?

    - by Sachin
    I am developing a site where I am sending parameters like ids in the URL. I have used urlencode and base64encode to encode the parameters. My problem is: how can I prevent users or hackers from tampering with the URL parameters, or grant access only if the parameter value exists in the database? I have at least 2 and at most 5 parameters in the URL, so is it feasible to check that every parameter exists in the database on every page? Thanks
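
    The usual alternative to a database lookup per parameter is to sign the parameters with an HMAC so tampered values can be rejected outright. A minimal sketch of the idea, shown in C# because the technique is language-agnostic (in PHP the same primitives are hash_hmac and hash_equals); the secret and parameter names are placeholders:

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class UrlSigner
        {
            // Hypothetical server-side secret; never ship it to the client.
            static readonly byte[] Key = Encoding.UTF8.GetBytes("a-long-random-secret");

            public static string Sign(string query)  // e.g. query = "id=42&cat=7"
            {
                using (var hmac = new HMACSHA256(Key))
                {
                    byte[] mac = hmac.ComputeHash(Encoding.UTF8.GetBytes(query));
                    return BitConverter.ToString(mac).Replace("-", "").ToLowerInvariant();
                }
            }

            public static bool Verify(string query, string sig)
            {
                // Use a constant-time comparison in production (PHP: hash_equals).
                return Sign(query) == sig;
            }
        }

        // Build links as mypage.php?id=42&cat=7&sig=<Sign("id=42&cat=7")> and
        // reject any request whose sig does not verify; no per-page DB checks needed.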

    Read the article

  • Yet another URL prefix regex question (to be used in C#).

    - by Hamish Grubijan
    Hi, I have seen many regular expressions for URL validation. In my case I want the URL to be simpler, so the regex should be tighter. Valid URL prefixes look like: http[s]://[www.]addressOrIp[.something]/PageName.aspx[?] This describes a prefix; I will be appending ?x=a&y=b&z=c later. I just want to check that the web page is live before accessing it, but even before that I want to make sure it is properly configured. I want to treat the "bad url" and "host is down" conditions differently, although when in doubt I'd rather give a "host is down" message, because that is the ultimate test anyway. Hopefully that makes sense. I guess what I am trying to say is that the regex need not be too aggressive; I just want it to cover, say, 95% of the cases. This is C#-centric, so Perl regex extensions are not helpful to me; let's stick to the lowest common denominator. Thanks!
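
    For what it's worth, a sketch along those lines; the pattern is deliberately loose (the stated ~95% goal, not exhaustive), and pairing it with Uri.TryCreate catches things the regex lets slip. A regex failure can then surface as the "bad url" message, while a later request failure (DNS, timeout) surfaces as "host is down":

        using System;
        using System.Text.RegularExpressions;

        static class UrlPrefixCheck
        {
            // http or https, optional www., host or IP, optional sub-path,
            // page name ending in .aspx, optional trailing '?'
            static readonly Regex Prefix = new Regex(
                @"^https?://(www\.)?[A-Za-z0-9.-]+(/[^?\s]*)?/[A-Za-z0-9_-]+\.aspx\??$",
                RegexOptions.IgnoreCase);

            public static bool LooksValid(string url)
            {
                if (!Prefix.IsMatch(url))
                    return false; // report this as "bad url"
                // Let the framework parse it too; it catches cases the pattern misses.
                Uri uri;
                return Uri.TryCreate(url.TrimEnd('?'), UriKind.Absolute, out uri)
                    && (uri.Scheme == Uri.UriSchemeHttp || uri.Scheme == Uri.UriSchemeHttps);
            }
        }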

    Read the article

  • Can anyone help me with this JavaScript for validating a URL in ASPX?

    - by orrep
    Here's my code:

        function Validate_URL(url) {
            var iurl = url.value;
            var v = new RegExp();
            v.compile("/^(((ht|f){1}(tp:[/][/]){1})|((www.){1}))[-a-zA-Z0-9@:%_+.~#?&//=]+$/;");
            if (!v.test(iurl.value)) {
                url.style.backgroundColor = 'yellow';
            }
            return true;
        }

    No matter what I put in url, say http://www.abc.com/newpage.html, it returns false. How come?
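
    Two likely culprits, noted for anyone landing here: RegExp.compile() takes a bare pattern string, so the quoted /.../ delimiters and the trailing ; become literal characters of the pattern, and iurl is already a string, so iurl.value is undefined. A corrected sketch of the same function:

        function Validate_URL(url) {
            var iurl = url.value; // url is the input element; iurl is its string value
            // Same idea written as a regex literal, with no quoted delimiters.
            var v = /^(((ht|f)(tps?:\/\/))|(www\.))[-a-zA-Z0-9@:%_+.~#?&\/=]+$/;
            if (!v.test(iurl)) {
                url.style.backgroundColor = 'yellow';
                return false;
            }
            return true;
        }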

    Read the article

  • Is this a valid URL parameter in jQuery.ajax()?

    - by udaya
    Is this a valid url parameter in jQuery.ajax()?

        <script type="text/javascript">
            $(document).ready(function() {
                getRecordspage();
            });

            function getRecordspage() {
                $.ajax({
                    type: "POST",
                    url: "http://localhost/codeigniter_cup_myth/index.php/adminController/mainAccount",
                    data: "",
                    contentType: "application/json; charset=utf-8",
                    global: false,
                    async: false,
                    dataType: "json",
                    success: function(jsonObj) {
                        alert(jsonobj);
                    }
                });
            }
        </script>

    The URL doesn't seem to reach my controller function...
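
    An aside for readers hitting the same thing (a sketch of the likely fixes, not a confirmed diagnosis): the success callback names its parameter jsonObj but alerts jsonobj, which throws because JavaScript is case-sensitive, and an absolute http://localhost URL only works when the page itself is served from that exact origin. A relative URL sidesteps the origin mismatch:

        function getRecordspage() {
            $.ajax({
                type: "POST",
                url: "/codeigniter_cup_myth/index.php/adminController/mainAccount", // relative: same origin as the page
                dataType: "json",
                success: function(jsonObj) {
                    alert(jsonObj); // same casing as the parameter name
                }
            });
        }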

    Read the article

  • Image loader can't load my live image URL

    - by Bindhu
    In my application I need to load images into a list view. When using the local (IP-ported) URL there is no problem and all images load properly, but when using the live URL the images are not loading. My image loader class:

        public class ImageLoader {
            MemoryCache memoryCache = new MemoryCache();
            FileCache fileCache;
            private Map<ImageView, String> imageViews = Collections
                    .synchronizedMap(new WeakHashMap<ImageView, String>());
            ExecutorService executorService;

            public ImageLoader(Context context) {
                fileCache = new FileCache(context);
                executorService = Executors.newFixedThreadPool(5);
            }

            final int stub_id = R.drawable.appointeesample;

            public void DisplayImage(String url, ImageView imageView) {
                imageViews.put(imageView, url);
                Bitmap bitmap = memoryCache.get(url);
                if (bitmap != null)
                    imageView.setImageBitmap(bitmap);
                else {
                    Log.d("stub", "stub" + stub_id);
                    queuePhoto(url, imageView);
                    imageView.setImageResource(stub_id);
                }
            }

            private void queuePhoto(String url, ImageView imageView) {
                PhotoToLoad p = new PhotoToLoad(url, imageView);
                executorService.submit(new PhotosLoader(p));
            }

            private Bitmap getBitmap(String url) {
                File f = fileCache.getFile(url);
                // from SD cache
                Bitmap b = decodeFile(f);
                if (b != null)
                    return b;
                // from web
                try {
                    Bitmap bitmap = null;
                    URL imageUrl = new URL(url);
                    // For https URLs this cast still works (HttpsURLConnection extends
                    // HttpURLConnection), but certificate validation can fail on older
                    // devices, and redirects that switch protocol (http -> https) are
                    // not followed; the stack trace below will say which one it is.
                    HttpURLConnection conn = (HttpURLConnection) imageUrl.openConnection();
                    conn.setConnectTimeout(30000);
                    conn.setReadTimeout(30000);
                    conn.setInstanceFollowRedirects(true);
                    InputStream is = conn.getInputStream();
                    BufferedInputStream bis = new BufferedInputStream(is, 81960);
                    BitmapFactory.Options opts = new BitmapFactory.Options();
                    opts.inJustDecodeBounds = true; // note: opts is never passed to a decode call below
                    OutputStream os = new FileOutputStream(f);
                    Utils.CopyStream(bis, os);
                    os.close();
                    bitmap = decodeFile(f);
                    Log.d("bitmap", "Bit map" + bitmap);
                    return bitmap;
                } catch (Exception ex) {
                    ex.printStackTrace();
                    return null;
                }
            }

            // decodes image and scales it to reduce memory consumption
            private Bitmap decodeFile(File f) {
                try {
                    try {
                        BitmapFactory.Options o = new BitmapFactory.Options();
                        o.inJustDecodeBounds = true;
                        BitmapFactory.decodeStream(new FileInputStream(f), null, o);
                        final int REQUIRED_SIZE = 200;
                        int scale = 1;
                        while (o.outWidth / scale / 2 >= REQUIRED_SIZE
                                && o.outHeight / scale / 2 >= REQUIRED_SIZE)
                            scale *= 2;
                        BitmapFactory.Options o2 = new BitmapFactory.Options();
                        o2.inSampleSize = scale;
                        return BitmapFactory.decodeStream(new FileInputStream(f), null, o2);
                    } catch (FileNotFoundException e) {
                    } finally {
                        System.gc();
                    }
                    return null;
                } catch (Exception e) {
                }
                return null;
            }

            // Task for the queue
            private class PhotoToLoad {
                public String url;
                public ImageView imageView;

                public PhotoToLoad(String u, ImageView i) {
                    url = u;
                    imageView = i;
                }
            }

            class PhotosLoader implements Runnable {
                PhotoToLoad photoToLoad;

                PhotosLoader(PhotoToLoad photoToLoad) {
                    this.photoToLoad = photoToLoad;
                }

                @Override
                public void run() {
                    if (imageViewReused(photoToLoad))
                        return;
                    Bitmap bmp = getBitmap(photoToLoad.url);
                    memoryCache.put(photoToLoad.url, bmp);
                    if (imageViewReused(photoToLoad))
                        return;
                    BitmapDisplayer bd = new BitmapDisplayer(bmp, photoToLoad);
                    Activity a = (Activity) photoToLoad.imageView.getContext();
                    a.runOnUiThread(bd);
                }
            }

            boolean imageViewReused(PhotoToLoad photoToLoad) {
                String tag = imageViews.get(photoToLoad.imageView);
                if (tag == null || !tag.equals(photoToLoad.url))
                    return true;
                return false;
            }

            // Used to display bitmap in the UI thread
            class BitmapDisplayer implements Runnable {
                Bitmap bitmap;
                PhotoToLoad photoToLoad;

                public BitmapDisplayer(Bitmap b, PhotoToLoad p) {
                    bitmap = b;
                    photoToLoad = p;
                }

                public void run() {
                    if (imageViewReused(photoToLoad))
                        return;
                    if (bitmap != null)
                        photoToLoad.imageView.setImageBitmap(bitmap);
                    else
                        photoToLoad.imageView.setImageResource(stub_id);
                }
            }

            public void clearCache() {
                memoryCache.clear();
                fileCache.clear();
            }
        }

    My live image URL, for example: https://goappointed.com/images_upload/3330Torana_Logo.JPG. I have searched Google but no solution is working. Thanks a lot in advance.

    Read the article

  • Yelp-Like Adjective Rating System

    - by clifgray
    I am building a website where users list their outdoor adventures (skydiving, surfing, base jumping, etc.) and other people can comment on them. I want to have a rating system like Yelp's, which has "Useful, Funny, or Cool", but with different adjectives. I have thought of a few such as Daring, Adventurous, and Unique, but I wanted to get some feedback on what a few other good adjectives would be. Also, does anyone have experience with other such systems, or advice for better ones? Primarily I just want the user to have somewhat more descriptive voting options than up and down or 1 through 5.

    Read the article

  • Do premium domain names help us with other languages too?

    - by Fabio Milheiro
    It's commonly known that premium domains with one or two relevant keywords may help us improve our rankings in SERPs. But would it be possible that an English premium domain, for example gold.com (no, it's not mine), also helps to drive more non-English traffic (I'm talking about non-English pages)? Trying to make my question clear: let's suppose that I have an English premium domain with a page like this: gold dot com/post/123/gold-is-yellow. And I decide to have Spanish, Portuguese or French versions of the site with pages like: gold dot com/es/post/123/el-oro-es-amarillo, gold dot com/pt/post/123/o-ouro-e-amarelo, gold dot com/fr/post/123/fsdfsdfsdf. Given that my English domain is a premium one and highly relevant for English terms, will it also help me achieve good rankings for non-English search terms like oro (Spanish) or ouro (Portuguese)?

    Read the article

  • Should page contents stay the same over time for SEO?

    - by Ahmet Kemal
    Hi, I have a frequently updated website, so page contents change frequently: the items on the 1st page move to the 2nd page a day later, and similarly the 192nd page, which is my last page, becomes the 193rd page a day later. So Google finds different content on a specific page than on its previous visit. Is that bad for SEO? My main page is http://www.yemeklog.com/home/ and, as you can see below, pagination pages look like http://www.yemeklog.com/home/10 or http://www.yemeklog.com/home/192. Website content is about food recipes on all pages. What do you think about it?

    Read the article

  • Why do SEO-based code tips not appear to affect ranking?

    - by Ben
    I've been researching various methods for SEO, where pages have precise titles, keywords are highlighted with h tags, and the many boxes of good page markup for SEO are ticked. However, when looking at some top-ranked sites on Google for key terms, they have terrible SEO markup: really long page titles, no h tags, limited appearance of keywords in the text, and so on. SEO analysis services rate them lower than other sites, yet these sites rank really high, even with a low number of back-links. So I don't understand how these sites earn their position when they appear inferior to those below them, which have better markup and links. I don't want to cause trouble by mentioning sites or keywords, but looking on Google at 'executive search', the roughly 5th-placed site makes no sense as a high ranking, especially with all the added .swfs. The same applies to the top result for 'Japan Executive Search'. My main point is that these sites seem to lack the structural rules stated in SEO page-rating applications and general suggested best practice, nor do they show many back-links. It makes me feel there is no point bothering to write decent markup if it really doesn't matter. Can anyone explain how sites with such markup and few back-links can outrank well-written and well-structured sites with greater linkage? Sorry if this is a fuzzy question; I want to avoid singling out any sites as examples, but it really has me perplexed that sites which appear to ignore the suggested best practices rank so well.

    Read the article

  • Page Titles - Including gender of a fashion product in page titles?

    - by Cedric
    I need a bit of help deciding whether it is worth including gender in page titles. In Webmaster Tools I looked at our search queries that include "women", and they account for 9% of our total search queries for the site. Is looking at existing results that already point to us the right way to assess the benefit of including "women" or "men" in page titles? Is there another tool where I can check actual queries that may not include us in the search results? Like Google Insights, maybe? http://www.google.com/insights/search/#q=shoes%2Cshoes%20for%20women&cmpt=q So it looks like 1.1% of searches for "shoes" are also "shoes for women"; is that correct? As a direct comparison, doing the same analysis on our own search queries, I get 1.8% when comparing "shoes for women" to "shoes". Implementing this automation would probably affect 99% of our site if not more, splitting it into two segments (one portion of page titles including "women" and the other including "men"). Will doing so create a massively repetitive keyword throughout the site, hurting SEO? http://support.google.com/webmasters/bin/answer.py?hl=en&answer=35624 (see "Avoid repeated or boilerplate titles.")

    Read the article

  • Creating country-specific Twitter/Facebook accounts

    - by user359650
    I see many companies with an international presence trying to localize their social media presence by creating country- or language-specific accounts. However, some seem to have done so without following a consistent pattern, one example being the World Wildlife Fund's Twitter accounts: World_Wildlife (verified account with 200K followers); WWF (main account with 800K followers); wwf_uk (lower case, with an underscore between WWF and the country indicator); WWFCanada (upper case, with the country indicator attached to WWF); and so on. I am planning to build a website which will hopefully grow global, and I would like to avoid this sort of inconsistency. Also, I compared what Twitter and Facebook allow in their usernames and found they don't allow the same characters (for instance, the former doesn't allow '.' whereas the latter does), making it difficult to ensure consistency across social networks. Hence my questions: Are there known naming schemes for creating localized Twitter and Facebook accounts while maintaining a certain consistency between them (best effort)? Is there any research showing whether some schemes work better than others in terms of readability and/or SEO?

    Read the article

  • Risks of hosting SEO links [closed]

    - by mconnors
    Possible Duplicate: SEO drawbacks of having paid links without nofollow? A few companies are willing to pay us for links on our homepage. I am assuming these are legitimate sites, although they are unrelated to our site's content. Would Google penalize our site for having these links? We definitely need the revenue, and we view this as selling advertising space, but I don't want to kill our good ranking. Does anyone have any insight? Is it possible to ask Google directly?

    Read the article

  • Correctly indexing multiple domains with same content in Google and others

    - by AJweb
    I have a client with a dozen territorial domains, like mydomain.co.uk, mydomain.fr, mydomain.de, etc. Most of these domains hold a different-language version of the same dynamic content (a shop), but some, like .co.uk and .com, have the same language and content, except for some content customized to each country/domain on the front, contact and other pages. I am aware that we should use the canonical meta tag to mark that duplicated content, but we want the .co.uk site to be present in the UK (indexed in google.co.uk) and the .com site to be present in the US and other countries, or at least that is the goal. Is there anything we can do to "help" Google determine the geographical targeting of each domain? If we mark the .com and .co.uk sites with the canonical tag, do you know how Google will decide which one to show for a given search?
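
    One option worth researching (hedged, since engine behavior changes over time): rel="alternate" hreflang annotations tell Google which URL targets which language or region, and they distinguish en-GB from en-US even when the content is nearly identical. A sketch using the question's placeholder domains:

        <!-- in the <head> of each variant (the same set on every domain) -->
        <link rel="alternate" hreflang="en-GB" href="http://mydomain.co.uk/" />
        <link rel="alternate" hreflang="en-US" href="http://mydomain.com/" />
        <link rel="alternate" hreflang="fr" href="http://mydomain.fr/" />
        <link rel="alternate" hreflang="de" href="http://mydomain.de/" />

    Geo-targeting for the generic .com can also be set per-domain in Google Webmaster Tools; ccTLDs like .co.uk are geo-targeted automatically.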

    Read the article

  • jQuery/AJAX on old Computers/Browsers

    - by Andresch Serj
    I am working on a platform that will have a lot of users in the so-called "developing countries", so many of them will be using old computers and old browsers in tiny internet cafes. We want to make sure we give them a good user experience and that the website loads as fast as possible. The problem is that while you can save a lot of requests and time using jQuery/AJAX, it also brings along problems: will the computers be powerful enough to run the client-side scripts? Will the old browsers handle jQuery? Does anyone have experience with this sort of problem, or know of an article on the topic?

    Read the article

  • How to manually list a set of URLs for search engines to index

    - by MarutiB
    So I have created a video website which has thousands of videos, with thousands more added on a daily basis. Here is my problem: I have built the site to load an HTML skeleton and pull all the content in through JavaScript and Ajax. The problem is that search engines aren't going anywhere except the home page. Is there a way, say in robots.txt, to point to a single HTML page that links to all these videos? I agree my site is not accessible for non-JavaScript users, but stats show that ratio is very low (0.2%). Is there a way I can keep the complete AJAX website and still get each individual video listed on Google?
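
    robots.txt cannot list pages to crawl, but it can point crawlers at an XML sitemap, which is the standard mechanism for exactly this. A minimal sketch (example.com stands in for the real site):

        # robots.txt
        Sitemap: http://www.example.com/sitemap.xml

    And sitemap.xml, with one <url> entry per video page, regenerated as videos are added:

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url><loc>http://www.example.com/video/1</loc></url>
          <url><loc>http://www.example.com/video/2</loc></url>
        </urlset>

    Discovery is only half the problem, though: the listed pages still need to serve their content without JavaScript for the crawler to index what is on them.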

    Read the article

  • How do I make the home page of the website rank higher than the internal pages? [closed]

    - by Shahab
    Possible Duplicate: What are the best ways to increase your site's position in Google? Suppose I have a website, e.g. www.example.com, that comes in at number 6 in the Google search rankings, but an internal page of the website, i.e. www.example.com/index.php?a=1&b=2 or www.example.com/index.php, comes in at number 2. How would I make my prime domain name www.example.com come out at the top of the list? Any guidance would be appreciated.

    Read the article

  • How to capture the clicked URL in a UIWebView

    - by user262325
    Hello everyone, I hope to capture the clicked URL in a UIWebView:

        - (BOOL)webView:(UIWebView *)webView shouldStartLoadWithRequest:(NSURLRequest *)request navigationType:(UIWebViewNavigationType)navigationType {
            if (navigationType == UIWebViewNavigationTypeLinkClicked) {
                NSURL *URL = [request URL];
                NSString *s = [URL absoluteString];
            }
            return YES; // allow the load
        }

    But I noticed that this URL is not the URL that was clicked and will be displayed in the UIWebView; normally it is the URL of the current web page displayed in the UIWebView. Welcome any comments. Thanks, interdev

    Read the article

  • Data extraction from website URLs

    - by user2522395
    With the script below I am able to extract all the links of a particular website, but I need to know how I can extract data from the extracted links, especially things like e-mail addresses and phone numbers if they are there. Please help me modify the existing script to get that result, or if you have a full sample script please provide it.

        Private Sub btnGo_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnGo.Click
            'url must be in this format: http://www.example.com/
            Dim aList As ArrayList = Spider("http://www.qatarliving.com", 1)
            For Each url As String In aList
                lstUrls.Items.Add(url)
            Next
        End Sub

        Private Function Spider(ByVal url As String, ByVal depth As Integer) As ArrayList
            'aReturn is used to hold the list of urls
            Dim aReturn As New ArrayList
            'aStart is used to hold the new urls to be checked
            Dim aStart As ArrayList = GrabUrls(url)
            'temp array to hold data being passed to new arrays
            Dim aTemp As ArrayList
            'aNew is used to hold new urls before being passed to aStart
            Dim aNew As New ArrayList
            'add the first batch of urls
            aReturn.AddRange(aStart)
            'if depth is 0 then only return 1 page
            If depth < 1 Then Return aReturn
            'loops through the levels of urls
            For i = 1 To depth
                'grabs the urls from each url in aStart
                For Each tUrl As String In aStart
                    'grabs the urls and returns non-duplicates
                    aTemp = GrabUrls(tUrl, aReturn, aNew)
                    'add the urls to be checked to aNew
                    aNew.AddRange(aTemp)
                Next
                'swap urls to aStart to be checked
                aStart = aNew
                'add the urls to the main list
                aReturn.AddRange(aNew)
                'clear the temp array
                aNew = New ArrayList
            Next
            Return aReturn
        End Function

        Private Overloads Function GrabUrls(ByVal url As String) As ArrayList
            'will hold the urls to be returned
            Dim aReturn As New ArrayList
            Try
                'regex string used: thanks google
                Dim strRegex As String = "<a.*?href=""(.*?)"".*?>(.*?)</a>"
                'i used a webclient to get the source; web requests might be faster
                Dim wc As New WebClient
                'put the source into a string
                Dim strSource As String = wc.DownloadString(url)
                Dim HrefRegex As New Regex(strRegex, RegexOptions.IgnoreCase Or RegexOptions.Compiled)
                'parse the urls from the source
                Dim HrefMatch As Match = HrefRegex.Match(strSource)
                'used later to get the base domain without subdirectories or pages
                Dim BaseUrl As New Uri(url)
                'while there are urls
                While HrefMatch.Success = True
                    'loop through the matches
                    Dim sUrl As String = HrefMatch.Groups(1).Value
                    'if it's a page or sub directory with no base url (domain)
                    If Not sUrl.Contains("http://") AndAlso Not sUrl.Contains("www") Then
                        'add the domain plus the page
                        Dim tURi As New Uri(BaseUrl, sUrl)
                        sUrl = tURi.ToString
                    End If
                    'if it's not already in the list then add it
                    If Not aReturn.Contains(sUrl) Then aReturn.Add(sUrl)
                    'go to the next url
                    HrefMatch = HrefMatch.NextMatch
                End While
            Catch ex As Exception
                'catch ex here. I left it blank while debugging
            End Try
            Return aReturn
        End Function

        Private Overloads Function GrabUrls(ByVal url As String, ByRef aReturn As ArrayList, ByRef aNew As ArrayList) As ArrayList
            'overloads function to check duplicates in aNew and aReturn
            'temp url arraylist
            Dim tUrls As ArrayList = GrabUrls(url)
            'used to return the list
            Dim tReturn As New ArrayList
            'check each item to see if it exists, so not to grab the urls again
            For Each item As String In tUrls
                If Not aReturn.Contains(item) AndAlso Not aNew.Contains(item) Then
                    tReturn.Add(item)
                End If
            Next
            Return tReturn
        End Function
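
    One way to get from links to data, sketched in the same style as the question's code: download each page and run e-mail/phone regexes over the source. GrabEmails is a hypothetical helper, not part of the original script, and the patterns are rough illustrations (phone formats especially vary by country):

        Private Function GrabEmails(ByVal url As String) As ArrayList
            Dim aReturn As New ArrayList
            Try
                Dim wc As New WebClient
                Dim strSource As String = wc.DownloadString(url)
                'rough e-mail pattern; good enough for harvesting, not for validation
                Dim EmailRegex As New Regex("[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}", RegexOptions.IgnoreCase)
                For Each m As Match In EmailRegex.Matches(strSource)
                    If Not aReturn.Contains(m.Value) Then aReturn.Add(m.Value)
                Next
                'very rough international phone pattern; adjust for the target country
                Dim PhoneRegex As New Regex("\+?\d[\d\s().-]{6,14}\d")
                For Each m As Match In PhoneRegex.Matches(strSource)
                    If Not aReturn.Contains(m.Value) Then aReturn.Add(m.Value)
                Next
            Catch ex As Exception
                'ignore pages that fail to download
            End Try
            Return aReturn
        End Function

        'Usage: call GrabEmails(url) for each url returned by Spider().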

    Read the article

  • Now Customers Can Actually Locate Your Resources with URL Rewriter 2.0 RTW

    - by The Official Microsoft IIS Site
    Today, Microsoft announced the final release of IIS URL Rewriter 2.0 RTW. Now, the first reason you might want to rewrite a URL is probably obvious: when you are at a cocktail party with loud music and tasty appetizers and a potential customer asks you where they can get more info on your snazzy new idea, and you proudly blurt out next to their ear over the roar of the bass, “Just go to h-t-t-p colon slash slash w-w-w dot my new idea dot com slash items dot a-s-p-x question mark cat ID equals new...(read more)
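
    For readers who haven't seen it, the rewriter's rules live in web.config. A minimal sketch (rule name, pattern and target invented for illustration) that would let a friendly /items/new stand in for the query-string URL from the anecdote:

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="Friendly item URLs">
                <match url="^items/([^/]+)$" />
                <action type="Rewrite" url="items.aspx?catID={R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>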

    Read the article

  • URL Routing in ASP.NET 4.0

    In the .NET Framework 3.5 SP1, Microsoft introduced ASP.NET Routing, which decouples the URL of a resource from the physical file on the web server. With ASP.NET Routing you, the developer, define routing rules that map route patterns to a class that generates the content. For example, you might indicate that the URL Categories/CategoryName maps to a class that takes the CategoryName and generates HTML that lists that category's products in a grid. With such a mapping, users could view products for the Beverages category by visiting www.yoursite.com/Categories/Beverages. In .NET 3.5 SP1, ASP.NET Routing was primarily designed for ASP.NET MVC applications, although as discussed in Using ASP.NET Routing Without ASP.NET MVC it is possible to implement ASP.NET Routing in a Web Forms application as well. However, implementing ASP.NET Routing in a Web Forms application involves a bit of seemingly excessive legwork. In a Web Forms scenario we typically want to map a routing pattern to an actual ASP.NET page. To do so we need to create a route handler class that is invoked when the routing URL is requested and, in a sense, dispatches the request to the appropriate ASP.NET page. For instance, mapping a route to a physical file, such as Categories/CategoryName to ShowProductsByCategory.aspx, requires three steps: (1) define the mapping in Global.asax, which maps a route pattern to a route handler class; (2) create the route handler class, which is responsible for parsing the URL, storing any route parameters into some location that is accessible to the target page (such as HttpContext.Items), and returning an instance of the target page or HTTP Handler that handles the requested route; and (3) write code in the target page to grab the route parameters and use them in rendering its content. Given how much effort it took just to read the preceding sentence (let alone write it) you can imagine that implementing ASP.NET Routing in a Web Forms application is not necessarily the most straightforward task. The good news is that ASP.NET 4.0 has greatly simplified ASP.NET Routing for Web Forms applications by adding a number of classes and helper methods that can be used to encapsulate the aforementioned complexity. With ASP.NET 4.0 it's easier to define the routing rules and there's no need to create a custom route handling class. This article details these enhancements. Read on to learn more! Read More >
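
    A concrete illustration of the simplification described above, following the article's Categories example (a sketch, not the article's own listing):

        // Global.asax, .NET 4.0: no custom route handler class needed
        void Application_Start(object sender, EventArgs e)
        {
            System.Web.Routing.RouteTable.Routes.MapPageRoute(
                "category-products",              // route name
                "Categories/{CategoryName}",      // route pattern
                "~/ShowProductsByCategory.aspx"); // physical page
        }

        // In ShowProductsByCategory.aspx.cs:
        protected void Page_Load(object sender, EventArgs e)
        {
            string category = (string)Page.RouteData.Values["CategoryName"];
            // ... bind the grid to that category's products ...
        }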

    Read the article


  • Redirect/rewrite dynamic URL to sub-domain and create DNS for the sub-domain

    - by Abdul Majeed
    I have created an application in PHP, and I would like to redirect the following URL to the corresponding sub-domain. Dynamic URL pattern: http://mydomain.com/mypage.php?user_name=testuser. I wish to redirect this to the corresponding sub-domain: http://testuser.mydomain.com/. How do I create a rewrite rule for this purpose? And how do I register DNS for the sub-domain without using cPanel? (I want to activate the sub-domain when the user registers with the system.)
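
    A sketch of both halves, with the caveat that specifics depend on the hosting setup: a mod_rewrite rule that captures user_name from the query string, plus a wildcard DNS record so new user sub-domains need no per-user DNS changes (which is how this is typically done without cPanel):

        # .htaccess (Apache mod_rewrite)
        RewriteEngine On
        RewriteCond %{QUERY_STRING} (?:^|&)user_name=([^&]+)
        RewriteRule ^mypage\.php$ http://%1.mydomain.com/? [R=301,L]

        # DNS zone: one wildcard record covers every user sub-domain
        #   *.mydomain.com.  IN  A  <server IP>
        # The web server then needs a matching wildcard vhost
        # (ServerAlias *.mydomain.com) that routes requests to the app.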

    Read the article

  • Mod Rewrite - URL rewriting

    - by modrewriteNewbie
    I am very new to mod_rewrite. I need to redirect any user with a "citizenhawk" parameter in their URL to my URL. For example, http://www.mywebsite.com/?sc=CX12N003&cm_mmc=affiliate--citizenhawk--nooffer-_-na&prfc=5&clickid=0004c845fa9a87050a4277221a003262 should result in http://www.mywebsite.com/. Here are my rewrite conditions:

        RewriteCond %{QUERY_STRING} (&|^)cm_mmc=(.)citizenhawk(.)(&|$)$
        RewriteRule ^/rrs/ [NC,R=302,L]

    Where am I going wrong? Is my RewriteCond wrong?
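
    For anyone debugging the same rules, three things stand out: a RewriteRule needs both a pattern and a substitution (the rule above has no target URL), (.) matches exactly one character rather than "anything", and in per-directory (.htaccess) context the matched path has no leading slash. A corrected sketch using the question's own destination:

        RewriteEngine On
        RewriteCond %{QUERY_STRING} (?:^|&)cm_mmc=[^&]*citizenhawk [NC]
        # "^" matches any path; use ^rrs/ instead to limit this to /rrs/ URLs
        RewriteRule ^ http://www.mywebsite.com/? [R=302,L]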

    Read the article

  • Upload File From URL

    - by Ryan Naddy
    I have been using Windows for a while, and it has a feature where, when you want to upload a photo (for example) to a website, you click "Choose File" in Chrome, a file explorer opens, and instead of selecting a file from the hard drive you can paste a URL into the file explorer and press Open; it will download the file from the web to your temporary files and use that for the upload. Is there any way I can do that in Ubuntu 12.10? Here is the Windows example: Upload from url via File Explorer

    Read the article
