Search Results

Search found 22416 results on 897 pages for 'url validation'.

Page 199/897 | < Previous Page | 195 196 197 198 199 200 201 202 203 204 205 206  | Next Page >

  • Earthquake Locator - Live Demo and Source Code

    - by Bobby Diaz
    Quick Links Live Demo Source Code I finally got a live demo up and running!  I signed up for a shared hosting account over at discountasp.net so I could post a working version of the Earthquake Locator application, but ran into a few minor issues related to RIA Services.  Thankfully, Tim Heuer had already encountered and explained all of the problems I had along with solutions to these and other common pitfalls.  You can find his blog post here.  The ones that got me were the default authentication tag being set to Windows instead of Forms, needed to add the <baseAddressPrefixFilters> tag since I was running on a shared server using host headers, and finally the Multiple Authentication Schemes settings in the IIS7 Manager.   To get the demo application ready, I pulled down local copies of the earthquake data feeds that the application can use instead of pulling from the USGS web site.  I basically added the feed URL as an app setting in the web.config:       <appSettings>         <!-- USGS Data Feeds: http://earthquake.usgs.gov/earthquakes/catalogs/ -->         <!--<add key="FeedUrl"             value="http://earthquake.usgs.gov/earthquakes/catalogs/1day-M2.5.xml" />-->         <!--<add key="FeedUrl"             value="http://earthquake.usgs.gov/earthquakes/catalogs/7day-M2.5.xml" />-->         <!--<add key="FeedUrl"             value="~/Demo/1day-M2.5.xml" />-->         <add key="FeedUrl"              value="~/Demo/7day-M2.5.xml" />     </appSettings> You will need to do the same if you want to run from local copies of the feed data.  I also made the following minor changes to the EarthquakeService class so that it gets the FeedUrl from the web.config:       private static readonly string FeedUrl = ConfigurationManager.AppSettings["FeedUrl"];       /// <summary>     /// Gets the feed at the specified URL.     /// </summary>     /// <param name="url">The URL.</param>     /// <returns>A <see cref="SyndicationFeed"/> object.</returns>     public static SyndicationFeed GetFeed(String url)     {         SyndicationFeed feed = null;           if ( !String.IsNullOrEmpty(url) && url.StartsWith("~") )         {             // resolve virtual path to physical file system             url = System.Web.HttpContext.Current.Server.MapPath(url);         }           try         {             log.Debug("Loading RSS feed: " + url);               using ( var reader = XmlReader.Create(url) )             {                 feed = SyndicationFeed.Load(reader);             }         }         catch ( Exception ex )         {             log.Error("Error occurred while loading RSS feed: " + url, ex);         }           return feed;     } You can now view the live demo or download the source code here, but be sure you have WCF RIA Services installed before running the application locally and make sure the FeedUrl is pointing to a valid location.  Please let me know if you have any comments or if you run into any issues with the code.   Enjoy!

    Read the article

  • How to list all my packages from the command line, showing package name, license, source URL, etc.?

    - by YumYumYum
    How to get all the installed package list with there license, source url? Such as following only shows name of the package only. $ dpkg --get-selections acpi-support install acpid install adduser install adium-theme-ubuntu install aisleriot install alacarte install For example in Fedora/CentOS (RED HAT LINUX BRANCH), you can see that: $ yum info busybox Loaded plugins: auto-update-debuginfo, langpacks, presto, refresh-packagekit Available Packages Name : busybox Arch : i686 Epoch : 1 Version : 1.18.2 Release : 5.fc15 Size : 615 k Repo : updates Summary : Statically linked binary providing simplified versions of system commands URL : http://www.busybox.net License : GPLv2 Description : Busybox is a single binary which includes versions of a large number : of system commands, including a shell. This package can be very : useful for recovering from certain types of system failures, : particularly those involving broken shared libraries. Follow up: /var/lib/apt/lists$ ls extras.ubuntu.com_ubuntu_dists_natty_main_binary-amd64_Packages extras.ubuntu.com_ubuntu_dists_natty_main_source_Sources extras.ubuntu.com_ubuntu_dists_natty_Release extras.ubuntu.com_ubuntu_dists_natty_Release.gpg lock partial security.ubuntu.com_ubuntu_dists_natty-security_main_binary-amd64_Packages security.ubuntu.com_ubuntu_dists_natty-security_main_source_Sources security.ubuntu.com_ubuntu_dists_natty-security_multiverse_binary-amd64_Packages security.ubuntu.com_ubuntu_dists_natty-security_multiverse_source_Sources security.ubuntu.com_ubuntu_dists_natty-security_Release security.ubuntu.com_ubuntu_dists_natty-security_Release.gpg security.ubuntu.com_ubuntu_dists_natty-security_restricted_binary-amd64_Packages security.ubuntu.com_ubuntu_dists_natty-security_restricted_source_Sources security.ubuntu.com_ubuntu_dists_natty-security_universe_binary-amd64_Packages security.ubuntu.com_ubuntu_dists_natty-security_universe_source_Sources us.archive.ubuntu.com_ubuntu_dists_natty_main_binary-amd64_Packages us.archive.ubuntu.com_ubuntu_dists_natty_main_source_Sources us.archive.ubuntu.com_ubuntu_dists_natty_multiverse_binary-amd64_Packages us.archive.ubuntu.com_ubuntu_dists_natty_multiverse_source_Sources us.archive.ubuntu.com_ubuntu_dists_natty_Release us.archive.ubuntu.com_ubuntu_dists_natty_Release.gpg us.archive.ubuntu.com_ubuntu_dists_natty_restricted_binary-amd64_Packages us.archive.ubuntu.com_ubuntu_dists_natty_restricted_source_Sources us.archive.ubuntu.com_ubuntu_dists_natty_universe_binary-amd64_Packages us.archive.ubuntu.com_ubuntu_dists_natty_universe_source_Sources us.archive.ubuntu.com_ubuntu_dists_natty-updates_main_binary-amd64_Packages us.archive.ubuntu.com_ubuntu_dists_natty-updates_main_source_Sources us.archive.ubuntu.com_ubuntu_dists_natty-updates_multiverse_binary-amd64_Packages us.archive.ubuntu.com_ubuntu_dists_natty-updates_multiverse_source_Sources us.archive.ubuntu.com_ubuntu_dists_natty-updates_Release us.archive.ubuntu.com_ubuntu_dists_natty-updates_Release.gpg us.archive.ubuntu.com_ubuntu_dists_natty-updates_restricted_binary-amd64_Packages us.archive.ubuntu.com_ubuntu_dists_natty-updates_restricted_source_Sources us.archive.ubuntu.com_ubuntu_dists_natty-updates_universe_binary-amd64_Packages us.archive.ubuntu.com_ubuntu_dists_natty-updates_universe_source_Sources
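
    Note: dpkg/apt do not keep the license text in the package metadata; it normally lives in /usr/share/doc/<package>/copyright, while the upstream project URL is usually exposed as the Homepage field of apt-cache show. Below is a rough Python sketch that pulls those together, assuming a Debian/Ubuntu system with dpkg-query and apt-cache on the PATH (it is only an illustration, not an official tool).

      # Rough sketch: list installed packages with their Homepage field and the
      # path of the copyright file that normally contains the license text.
      import subprocess

      def installed_packages():
          out = subprocess.check_output(["dpkg-query", "-W"], text=True)
          return [line.split()[0] for line in out.splitlines() if line.strip()]

      def homepage(pkg):
          try:
              info = subprocess.check_output(["apt-cache", "show", pkg], text=True)
          except subprocess.CalledProcessError:
              return ""
          for line in info.splitlines():
              if line.startswith("Homepage:"):
                  return line.split(":", 1)[1].strip()
          return ""

      for pkg in installed_packages():
          print("%-30s %-50s /usr/share/doc/%s/copyright" % (pkg, homepage(pkg), pkg))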

    Read the article

  • Is there a browser addon that redirects a link to another URL, automatically modifying part of the address?

    - by kokbira
    Well, I'm looking for an addon that can redirect a link when I click on it in the following ways: change from https to http; change from twitter.com/xxxxxxxxx to, for example, dabr.co.uk/xxxxxxxxx; (added 2010-02-15, 20:30 GMT) remove the "?utm_source=twitterfeed&utm_medium=twitter" from the end of a URL; and, generally, replace one string with another (e.g. youtube->yt, so www.example.com/visitingyoutube would become www.example.com/visitingyt). PS (added 2010-02-15, 20:30 GMT): @oKtosiTe, a clearer use case: suppose there is a link on Twitter that points to a URL X (URL X is http://www.newspapersite.com/2011-02-15_1304.html?utm_source=twitterfeed&utm_medium=twitter). In that case, I want to open that URL only up to ".html", i.e. I want to open a URL Y, which is http://www.newspapersite.com/2011-02-15_1304.html. What happens when I click the link normally: the browser goes to URL X. What I want to happen when I click the link: the addon transforms URL X into URL Y (I configure it beforehand to change the URL piece "?utm_source=twitterfeed&utm_medium=twitter" to "") and the browser then goes to URL Y.
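
    A redirector-style extension or userscript essentially applies string-rewrite rules before the browser follows the link. The Python sketch below shows only that rewrite logic, not tied to any particular addon; the host names and the utm_ prefix are taken from the examples above.

      # Sketch of the rewrite rules described above: https -> http,
      # twitter.com -> dabr.co.uk, drop utm_* tracking parameters, and a
      # generic string swap such as youtube -> yt.
      from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

      def rewrite(url):
          parts = urlsplit(url)
          scheme = "http" if parts.scheme == "https" else parts.scheme
          netloc = parts.netloc.replace("twitter.com", "dabr.co.uk")
          query = urlencode([(k, v) for k, v in parse_qsl(parts.query)
                             if not k.startswith("utm_")])
          path = parts.path.replace("youtube", "yt")
          return urlunsplit((scheme, netloc, path, query, parts.fragment))

      print(rewrite("https://twitter.com/xxxxxxxxx"))
      print(rewrite("http://www.newspapersite.com/2011-02-15_1304.html"
                    "?utm_source=twitterfeed&utm_medium=twitter"))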

    Read the article

  • Does purposely linking to an invalid URL and then using 301 affect SEO?

    - by Mike
    On a section of my site, I am currently using .htaccess rewrites to put the ID as part of the URL instead of in the query, like so: RewriteRule ^([a-z_]+)?/?tours/([0-9]+)/(.*) /tours/tour_text.php?lang=$1&id=$2&urlstr=$3 [L] For example, if someone goes to /en/tours/12/some-text-here it will rewrite it to /tours/tour_text.php?lang=en&id=12&urlstr=some-text-here. However I don't want the users to be able to put just any text, so if they type in the wrong some-text-here part it will 301 redirect them to the right page. This works perfectly, but I can see a potential problem arising when localizing the website, so I just wanted to make sure it's not actually a problem. As it is now, if someone goes to /en/tours/12/some-text-here, the anchor to the Spanish version of that page will be /es/tours/12/some-text-here (i.e. only changing the "en" to "es"), and the script will then 301 them to the correct Spanish text (something like /es/tours/12/algun-texto-aqui). The reverse works the same way: the anchor on the Spanish version to the English version would be /en/tours/12/algun-texto-aqui, and they will then be forwarded with a 301 back to /en/tours/12/some-text-here. Basically, the anchor changes the language and the 301 changes the string at the end. So I have two questions: Does purposely and permanently having invalid URLs on your site that get 301'ed to the correct ones have any effect on SEO? I could make it just show the correct URL to begin with, but this is a significant amount of work due to how I am handling the translations, so I would prefer just to 301 them. Will the invalid URLs that are contained in the links be added to the search engine indexes even if they get 301'ed to another page?

    Read the article

  • Is a 302 redirect to a random URL from the homepage an SEO problem?

    - by CookieMonster
    I originally posted this on Stackoverflow, but I believe here is a better place to ask. My web application is very similar to notepad.cc which redirects to a randomly generated URL upon access, e.g. http://myapp.com/roTr94h4Gd. (Please note that notepad.cc is not my site.) Probably because of this redirect feature, when I do "fetch as Google" or "fetch as Bingbot", I get a 302 and no html content. Not even a <html></html> tag. HTTP/1.1 302 Moved Temporarily Server: nginx/1.4.1 Date: Tue, 01 Oct 2013 04:37:37 GMT Content-Type: text/html Transfer-Encoding: chunked Connection: keep-alive X-Powered-By: PHP/5.4.17-1~dotdeb.1 Set-Cookie: PHPSESSID=vp99q5e5t5810e3bnnnvi6sfo2; expires=Thu, 03-Oct-2013 04:37:37 GMT; path=/ Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0 Pragma: no-cache Location: /roTr94h4Gd How should I avoid 302 in this case? I suppose I could modify my site to prevent the redirect, but it is a necessary feature of my web app to generate a random URL on each access. I added <meta name="fragment" content="!"> tag into my index page and set it to return a static snapshot of my page when the flag is set. But this still returns a 302. I also added a header to return 200 before redirecting, but this had no effect, either. Could someone tell me a good suggestion to solve this problem?
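
    One common way around the bare 302 is to answer "/" with a normal 200 landing page and only redirect once a note is actually created. The sketch below is a minimal Flask illustration of that idea, not the poster's PHP/nginx stack; the route names, markup and in-memory storage are placeholders.

      # Minimal sketch: serve 200 at "/" so crawlers get real HTML, and only
      # issue the 302 when a new note is created.
      import random
      import string
      from flask import Flask, redirect, url_for

      app = Flask(__name__)
      NOTES = {}  # stand-in for real storage

      @app.route("/")
      def home():
          return '<html><body><a href="/new">Create a note</a></body></html>'

      @app.route("/new")
      def new_note():
          note_id = "".join(random.choices(string.ascii_lowercase + string.digits, k=10))
          NOTES[note_id] = ""
          return redirect(url_for("show_note", note_id=note_id), code=302)

      @app.route("/<note_id>")
      def show_note(note_id):
          return "<html><body><textarea>%s</textarea></body></html>" % NOTES.get(note_id, "")

      if __name__ == "__main__":
          app.run()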

    Read the article

  • RedirectPermanent vs RewriteRule [R]

    - by notbrain
    I currently have a perm_redirects.conf file that gets included into my apache config stack where I have lines in the format RedirectPermanent /old/url/path /new/url/path It looks like I'm required to use an absolute URL for the new path, e.g.: http://example.com/new/url/path. In the logs I'm getting "incomplete redirect target /new/url/path was corrected to http://example.com/new/url/path." (paraphrased). In the 2.2 docs for RewriteRule, at the bottom they show the following as being a valid redirect, with only the url-paths instead of an abs URL for the right hand side of the redirect: RewriteRule ^/old/url/path(.*) /new/url/path$1 [R] But I can't seem to get that format to work to replicate the functionality of the RedirectPermanent version. Is this possible?

    Read the article

  • Why doesn't my register form accept Spry validation in DW? [migrated]

    - by shaghayeh
    Why my register form don't accept spry validation in dw? Here is my code: <?php if(isset($_POST['go_register'])) { $last_login=jdate('Y/n/j ,H:i:s'); $firstname=trim($_POST['firstname']); $lastname=trim($_POST['lastname']); $email=trim($_POST['username']); $password=sha1($_POST['password']); $register_date=$_POST['register_date']; if(!empty($firstname)&& !empty($lastname)&& !empty($email)&& !empty($password)&& !empty($register_date)){ $reg_query="insert into users(firstname,lastname,email,password,register_date,last_login)values('$firstname','$lastname','$email','$password','$register_date','$last_login')"; $reg_result=mysqli_query($connection,$reg_query); if($reg_result){ echo "ok"; } }//end of not empty }//end of isset ?> <div class="sectionclass"> <script src="../SpryAssets/SpryValidationTextField.js" type="text/javascript"></script> <link href="../SpryAssets/SpryValidationTextField.css" rel="stylesheet" type="text/css"> <h2>??? ??? ???</h2> <form method="post" action="<?php echo $_SERVER['PHP_SELF']."?catid=2"; ?>"> <div> <span style="color:#F00">*</span> <label>??? </label> <span id="sprytextfield1"> <input type="text" name="firstname" /> <span class="textfieldRequiredMsg">A value is required.</span></span></div> <div> <span style="color:#F00">*</span> <label>??? ????????</label><input type="text" name="lastname" /> </div> <div> <span style="color:#F00">*</span> <label>??? ??????(?????)</label><input dir="ltr" type="text" name="username" /> </div> <div> <span style="color:#F00">*</span> <label>??? ????</label><input type="password" name="password" /> </div> <input type="hidden" name="register_date" value="<?php echo jdate('Y/n/j'); ?>"> <div> <input type="submit" name="go_register" value="??? ???"/> </div> </form> </div> <script type="text/javascript"> var sprytextfield1 = new Spry.Widget.ValidationTextField("sprytextfield1"); </script>

    Read the article

  • How can I add GET variables to the end of the current page url using a form with php?

    - by zeckdude
    I have some database information that is being shown on a page. I am using a pagination class that uses the $_GET['page'] variable in the url. When you click on a different pagination anchor tag, it changes $_GET['page'] to a new number in the url and shows the corresponding results. I have sort and search features which uses the $_GET['searchby'] and $_GET['search_input'] variables. The user enters their search or sort criteria on a form that is using GET. The variables are then put into the url allowing for the correct results to be shown. The problem I am having is that whenever I click on a pagination link, it adds that to end of the url and erases the search or sort GET variables. The same thing happens when I submit the search/sort form. How can I add GET variables to the end of the current page url using the anchor tag and search/sort form?
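
    The usual fix is to rebuild each pagination (and sort/search) link from the current query string, changing only the parameter that needs to change; in PHP that is typically array_merge($_GET, ...) fed into http_build_query. The same idea is sketched in Python below for clarity; the script name and parameter names are just the ones mentioned in the question.

      # Sketch: merge the existing query string with the new page number so the
      # searchby/search_input parameters survive pagination clicks.
      from urllib.parse import urlsplit, urlunsplit, parse_qs, urlencode

      def page_link(current_url, page):
          parts = urlsplit(current_url)
          params = parse_qs(parts.query)
          params["page"] = [str(page)]  # overwrite only the page number
          query = urlencode(params, doseq=True)
          return urlunsplit((parts.scheme, parts.netloc, parts.path, query, ""))

      print(page_link("http://example.com/list.php?searchby=name&search_input=foo&page=1", 3))
      # keeps searchby/search_input and sets page=3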

    Read the article

  • Regex for retrieving the parameter of the css url function...

    - by Kieron
    Hi, I'm trying to get the url portion of the following string: url(images/ui-bg_highlight-soft_75_cccccc_1x100.png) So the required part is images/ui-bg_highlight-soft_75_cccccc_1x100.png. Currently I've got this: url\((?<url>.*)\) But it seems to be choking on the following example: url(images/ui-bg_flat_0_aaaaaa_40x100.png) 50% 50% repeat-x; opacity: .30;filter:Alpha(Opacity=30) Which results in images/ui-bg_flat_0_aaaaaa_40x100.png) 50% 50% repeat-x; opacity: .30;filter:Alpha(Opacity=30... I'd like to make sure that it supports as many variations as possible (additional whitespace etc). Thanks! Kieron
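
    The greedy .* is what runs past the first closing parenthesis; a negated character class (or a lazy .*?) stops at it. The original pattern uses .NET-style named groups; the sketch below shows the same idea in Python, with optional quotes and whitespace handled as well.

      # A negated character class stops at the first ")", unlike the greedy ".*".
      # Optional quotes and surrounding whitespace cover common url() variations.
      import re

      CSS_URL = re.compile(r"""url\(\s*['"]?(?P<url>[^'")]+?)['"]?\s*\)""", re.IGNORECASE)

      samples = [
          "url(images/ui-bg_highlight-soft_75_cccccc_1x100.png)",
          "url(images/ui-bg_flat_0_aaaaaa_40x100.png) 50% 50% repeat-x; "
          "opacity: .30;filter:Alpha(Opacity=30)",
          'url( "images/quoted name.png" )',
      ]
      for s in samples:
          m = CSS_URL.search(s)
          print(m.group("url") if m else None)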

    Read the article

  • Spring MVC; avoiding file extension in url?

    - by Ezombort
    I just started with Spring Web MVC. I'm trying to avoid file extensions in the URL. How can I do this? (I'm using Spring 2.5.x) Bean: <bean name="/hello.htm" class="springapp.web.HelloController"/> I want it to be: <bean name="/hello" class="springapp.web.HelloController"/> I cannot get it to work. Any ideas? Edit: URL mapping <servlet-mapping> <servlet-name>springapp</servlet-name> <url-pattern>*.htm</url-pattern> </servlet-mapping> I have tried changing the url-pattern with no luck (* and /*).

    Read the article

  • Where to store selected language on multilingual site: session/cookies or url?

    - by tig
    I have a site that has all its content translated into multiple languages and has no accounts (so there is nowhere to set a preferred language). I can detect the preferred language using Accept-Language, the IP address, or anything else. I have 3 ways to store the user's language selection: (1) detect the language and store it in a cookie/session, and allow switching languages (also stored in the cookie/session); (2) use the detected language if there is no language specified in the URL, and show links to URLs in the other languages; (3) use the default site language and show links to the other languages. Storing the language in the URL can take any form: a different domain, a subdomain, or somewhere in the URL path. I am leaning toward the first option, as it allows me to send one URL to anyone and it will be presented to them in their preferred language. But another opinion is that a different language means different data, so it must have a different link.
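
    Whichever storage option is chosen, the initial guess usually comes from the Accept-Language header. A small Python sketch of picking the best supported language from that header is below; the SUPPORTED set is a placeholder for whatever languages the site actually offers.

      # Sketch: pick the highest-weighted language from Accept-Language that the
      # site supports, falling back to a default.
      SUPPORTED = {"en", "de", "fr"}  # placeholder

      def preferred_language(accept_language, default="en"):
          choices = []
          for part in accept_language.split(","):
              piece = part.strip()
              if not piece:
                  continue
              lang, _, q = piece.partition(";q=")
              try:
                  weight = float(q) if q else 1.0
              except ValueError:
                  weight = 0.0
              choices.append((weight, lang.split("-")[0].lower()))
          for _, lang in sorted(choices, reverse=True):
              if lang in SUPPORTED:
                  return lang
          return default

      print(preferred_language("de-DE,de;q=0.8,en-US;q=0.6"))  # -> de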

    Read the article

  • goo.gl Api: How to know if URL was added to user's history?

    - by Manuel
    I'm using the goo.gl API as described here. It's easy to use and it works with or without authentication (I'm using OAuth). So, if I provide the OAuth token / secret, the shortened URL is added to the user's history. My problem is that I would like to show the user that the shortened URL was added to his goo.gl history. The response you get from the shortening request, however, is always the same, whether you use authorization or not: { "kind": "urlshortener#url", "id": "http://goo.gl/fbsS", "longUrl": "http://www.google.com/" } So, does anyone know a way to find out if the shortened URL was successfully added to the user's history? Is there a parameter you can add to the request URL that leads to a more detailed result string?

    Read the article

  • How to invoke functions written in validation.php when all the function names are stored in a database

    - by saint
    Respected all, I'm in trouble. I created a file named validation.php and stored all my validation functionality in it as separate functions, like: check_textbox(parm1, pram2, pram3, pram4) { // Definition here } check_chkBox(parm1, pram2) { // Definition here } and so on. Then I created a MySQL table named tbValidation and stored all the function names with their parameters in it. The records stored in the table look like this: interfaceid ---------- functionNameWithReturnValue 1 ------------ check_textbox(parm1, pram2, pram3, pram4) = 1 2 ------------ check_textbox(parm1, pram2, pram4) = 0 3 ------------ check_textbox(parm1, pram2, pram4) = 1) AND (check_chkBox(parm1, pram2)=0) When I fetch a record from the database, I want to invoke the corresponding functions defined in validation.php: $data = mysql_fetch_array($drow); if($db->row_count > 0) { // when I fetch row one from the database. I used this one but it is not working // @ $data[0] holds the value "check_textbox(parm1, pram2, pram3, pram4) = 1" if($data[0]) { // Do this } } How can I do this task?
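
    The usual pattern for this is to store the function name and its parameters rather than a whole expression, and then dispatch through a whitelist instead of eval-ing the string from the database; in PHP the call itself would typically be call_user_func_array. The Python sketch below only illustrates that dispatch pattern; the validator bodies are made up.

      # General pattern: map names stored in the database to real functions via a
      # whitelist, instead of evaluating the stored string directly.
      def check_textbox(value, min_len, max_len, required=True):
          if required and not value:
              return 0
          return 1 if min_len <= len(value) <= max_len else 0

      def check_chkbox(value, expected):
          return 1 if value == expected else 0

      VALIDATORS = {"check_textbox": check_textbox, "check_chkBox": check_chkbox}

      def run_validation(name, args):
          func = VALIDATORS.get(name)
          if func is None:
              raise ValueError("unknown validator: %s" % name)
          return func(*args)

      # e.g. a row stores the name "check_textbox" and its parameters separately
      print(run_validation("check_textbox", ["hello", 1, 10]))  # -> 1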

    Read the article

  • How do I shorten the repository URL using svn+ssh similar to svnserve -r?

    - by Marcus
    In the svnbook, it shows you how to shorten the URL to your repositories when using svnserve as a daemon, using -r like: svnserve -d -r /usr/local/repositories That way, you can refer to the repository you need right after the hostname in the URL without revealing any of the local path (which is /usr/local/repositories/project1): svn checkout svn://host.example.com/project1 However, now that I am switching to svn+ssh, I have the local path back in my repository URL: svn checkout svn+ssh://host.example.com/usr/local/repositories/project1 Does anyone know how to hide that local path and use a shorter URL as up above, using svn+ssh and WITHOUT using a UNIX soft link on the svn server? (you still end up with an extra string in the URL if you use a soft link...) UPDATE: The solution to this can be found in the accepted answer over on ServerFault (the green-checked answer). Yay!

    Read the article

  • How do you validate a URL with a regular expression in Python?

    - by Zachary Spencer
    I'm building a Google App Engine app, and I have a class to represent an RSS feed. I have a method called setUrl which is part of the feed class. It accepts a URL as input. I'm trying to use the re Python module to validate against the RFC 3986 regex (http://www.ietf.org/rfc/rfc3986.txt). Below is a snippet which should work, right? I'm incredibly new to Python and have been beating my head against this for the past 3 days. p = re.compile('^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?') m = p.match(url) if m: self.url = url return url
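
    Worth noting: the pattern in RFC 3986 appendix B is a parser, not a validator; every group is optional, so it matches almost any string. A common approach is to split the URL with that regex (or urllib.parse.urlparse) and then check the pieces you care about, roughly as in the sketch below.

      # The appendix-B regex only splits a URI into parts; the extra check on
      # scheme and authority is what actually rejects junk input.
      import re

      RFC3986 = re.compile(r'^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?')

      def set_url(url):
          m = RFC3986.match(url)          # always matches; groups may be None
          scheme, authority = m.group(2), m.group(4)
          if scheme in ("http", "https") and authority:
              return url                  # urllib.parse.urlparse gives the same split
          return None

      print(set_url("http://example.com/feed.rss"))  # accepted
      print(set_url("not a url"))                    # rejected -> None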

    Read the article

  • How to reduce the number of cert validation requests... (IE is killing me slowly)

    - by scooterhanson
    On a customer's internal network, I can make a request to my SSL site using IE6 SP1 (on Win2K) and see one cert validation request, but if I use IE6 SP2 (on XP), 13 separate cert validation requests get fired off. Needless to say, this slows down my page load a lot. Firefox loads the page just fine with no unnecessary cert validation requests. The server is Apache running a fairly new LAMPP stack. All the server certificate / CA chain configurations seem to be fine (users can authenticate with trusted certs, the system can communicate with other systems using that server cert, etc.). Is there anything I can do from a configuration standpoint? Any other ideas at all?

    Read the article

  • CoreData, transient attribute and EXC_BAD_ACCESS.

    - by Lukasz
    I'm trying to build simple file browser and i'm stuck. I defined classes, build window, add controllers, views.. Everything works but only ONE time. Selecting again Folder in NSTableView or trying to get data from Folder.files causing silent EXC_BAD_ACCESS (code=13, address0x0) from main. Info about files i keep outside of CoreData, in simple class, I don't want to save them: #import <Foundation/Foundation.h> @interface TPDrawersFileInfo : NSObject @property (nonatomic, retain) NSString * filename; @property (nonatomic, retain) NSString * extension; @property (nonatomic, retain) NSDate * creation; @property (nonatomic, retain) NSDate * modified; @property (nonatomic, retain) NSNumber * isFile; @property (nonatomic, retain) NSNumber * size; @property (nonatomic, retain) NSNumber * label; +(TPDrawersFileInfo *) initWithURL: (NSURL *) url; @end @implementation TPDrawersFileInfo +(TPDrawersFileInfo *) initWithURL: (NSURL *) url { TPDrawersFileInfo * new = [[TPDrawersFileInfo alloc] init]; if (new!=nil) { NSFileManager * fileManager = [NSFileManager defaultManager]; NSError * error; NSDictionary * infoDict = [fileManager attributesOfItemAtPath: [url path] error:&error]; id labelValue = nil; [url getResourceValue:&labelValue forKey:NSURLLabelNumberKey error:&error]; new.label = labelValue; new.size = [infoDict objectForKey: @"NSFileSize"]; new.modified = [infoDict objectForKey: @"NSFileModificationDate"]; new.creation = [infoDict objectForKey: @"NSFileCreationDate"]; new.isFile = [NSNumber numberWithBool:[[infoDict objectForKey:@"NSFileType"] isEqualToString:@"NSFileTypeRegular"]]; new.extension = [url pathExtension]; new.filename = [[url lastPathComponent] stringByDeletingPathExtension]; } return new; } Next I have class Folder, which is NSManagesObject subclass // Managed Object class to keep info about folder content @interface Folder : NSManagedObject { NSArray * _files; } @property (nonatomic, retain) NSArray * files; // Array with TPDrawersFileInfo objects @property (nonatomic, retain) NSString * url; // url of folder -(void) reload; //if url changed, load file info again. 
@end @implementation Folder @synthesize files = _files; @dynamic url; -(void)awakeFromInsert { [self addObserver:self forKeyPath:@"url" options:NSKeyValueObservingOptionNew context:@"url"]; } -(void)awakeFromFetch { [self addObserver:self forKeyPath:@"url" options:NSKeyValueObservingOptionNew context:@"url"]; } -(void)prepareForDeletion { [self removeObserver:self forKeyPath:@"url"]; } -(void) observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context { if (context == @"url") { [self reload]; } } -(void) reload { NSMutableArray * result = [NSMutableArray array]; NSError * error = nil; NSFileManager * fileManager = [NSFileManager defaultManager]; NSString * percented = [self.url stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding]; NSArray * listDir = [fileManager contentsOfDirectoryAtURL: [NSURL URLWithString: percented] includingPropertiesForKeys: [NSArray arrayWithObject: NSURLCreationDateKey ] options:NSDirectoryEnumerationSkipsHiddenFiles error:&error]; if (error!=nil) {NSLog(@"Error <%@> reading <%@> content", error, self.url);} for (id fileURL in listDir) { TPDrawersFileInfo * fi = [TPDrawersFileInfo initWithURL:fileURL]; [result addObject: fi]; } _files = [NSArray arrayWithArray:result]; } @end In app delegate i defined @interface TPAppDelegate : NSObject <NSApplicationDelegate> { IBOutlet NSArrayController * foldersController; Folder * currentFolder; } - (IBAction)chooseDirectory:(id)sender; // choose folder and - (Folder * ) getFolderObjectForPath: path { //gives Folder object if already exist or nil if not ..... } - (IBAction)chooseDirectory:(id)sender { //Opens panel, asking for url NSOpenPanel * panel = [NSOpenPanel openPanel]; [panel setCanChooseDirectories:YES]; [panel setCanChooseFiles:NO]; [panel setMessage:@"Choose folder to show:"]; NSURL * currentDirectory; if ([panel runModal] == NSOKButton) { currentDirectory = [[panel URLs] objectAtIndex:0]; } Folder * folderObject = [self getFolderObjectForPath:[currentDirectory path]]; if (folderObject) { //if exist: currentFolder = folderObject; } else { // create new one Folder * newFolder = [NSEntityDescription insertNewObjectForEntityForName:@"Folder" inManagedObjectContext:self.managedObjectContext]; [newFolder setValue:[currentDirectory path] forKey:@"url"]; [foldersController addObject:newFolder]; currentFolder = newFolder; } [foldersController setSelectedObjects:[NSArray arrayWithObject:currentFolder]]; } Please help ;)

    Read the article

  • How to define a "URL not found" servlet mapping in web.xml?

    - by user246114
    Hi, I have a web app and I want my index.jsp file to be shown when the entered URL is like: www.mysite.com www.mysite.com/ www.mysite.com/index.jsp but if any other URL is entered, like www.mysite.com/g, I want a particular servlet to handle the request. In my web.xml file, I am doing this: <servlet> <servlet-name>ServletCore</servlet-name> <servlet-class>com.me.test.ServletCore</servlet-class> </servlet> <servlet-mapping> <servlet-name>ServletCore</servlet-name> <url-pattern>/*</url-pattern> </servlet-mapping> That lets the ServletCore servlet pick up any URL, but as expected it also takes over the www.mysite.com/index.jsp type URLs. How can I define it so that it works the way I want? Thank you

    Read the article

  • Android Phonegap - TIMEOUT ERROR when trying to set a WebViewClient

    - by Spike777
    I'm working with Android and Phonegap, and at the moment I'm having trouble with one simple thing. I need to setup a webViewClient to the PhoneGap webView in order to capture the URL of a page finished and to work with that. This is the code: public class PhoneGapTest extends DroidGap { @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); super.setBooleanProperty("loadInWebView", true); super.clearCache(); super.keepRunning = false; super.loadUrl("file:///android_asset/www/index.html"); super.appView.setWebViewClient(new WebViewClient(){ @Override public void onPageStarted(WebView view, String url, Bitmap bitmap) { Log.i("TEST", "onPageStarted: " + url); } @Override public void onPageFinished(WebView view, String url) { Log.i("TEST", "onPageFinished: " + url); } }); } That code doesn't seems to work, the page never loads and I get a TIMEOUT ERROR, but if I remove the "setWebViewClient" part the page loads perfectly. I saw that there is a class CordovaWebViewClient, do I have to use that instead of WebViewClient? I found this way on the web: this.appView.setWebViewClient(new CordovaWebViewClient(this){ @Override public boolean shouldOverrideUrlLoading(final WebView view, String url) { Log.i("BugTest", "shouldOverrideUrlLoading: " + url); return true; } @Override public void onPageStarted(WebView view, String url, Bitmap bitmap) { Log.i("TEST", "onPageStarted: " + url); } @Override public void onPageFinished(WebView view, String url) { Log.i("TEST", "onPageFinished: " + url); } @Override public void doUpdateVisitedHistory(WebView view, String url, boolean isReload){ } }); But that code isn't working either, I still got a TIMEOUT ERROR. I also saw that there is already a webVieClient member, but I don't if I have to use it and how. I'm working with Phonegap version 1.9.0 Thanks for reading Answer to Simon: This doesn't work either, I still receive a TIMEOUT ERROR, there is something wrong? public class MainActivity extends DroidGap { @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); super.init(); super.appView.clearCache(true); super.appView.clearHistory(); this.appView.setWebViewClient(new CustomCordovaWebViewClient(this)); super.loadUrl("file:///android_asset/www/index.html"); } public class CustomCordovaWebViewClient extends CordovaWebViewClient { public CustomCordovaWebViewClient(DroidGap ctx) { super(ctx); } @Override public void onPageStarted(WebView view, String url, Bitmap bitmap) { Log.i("TEST", "onPageStarted: " + url); } @Override public void onPageFinished(WebView view, String url) { Log.i("TEST", "onPageFinished: " + url); } @Override public void doUpdateVisitedHistory(WebView view, String url, boolean isReload){ } @Override public void onReceivedError(WebView view, int errorCode, String description, String failingUrl) { } } }

    Read the article

  • Announcing release of ASP.NET MVC 3, IIS Express, SQL CE 4, Web Farm Framework, Orchard, WebMatrix

    - by ScottGu
    I’m excited to announce the release today of several products: ASP.NET MVC 3 NuGet IIS Express 7.5 SQL Server Compact Edition 4 Web Deploy and Web Farm Framework 2.0 Orchard 1.0 WebMatrix 1.0 The above products are all free. They build upon the .NET 4 and VS 2010 release, and add a ton of additional value to ASP.NET (both Web Forms and MVC) and the Microsoft Web Server stack. ASP.NET MVC 3 Today we are shipping the final release of ASP.NET MVC 3.  You can download and install ASP.NET MVC 3 here.  The ASP.NET MVC 3 source code (released under an OSI-compliant open source license) can also optionally be downloaded here. ASP.NET MVC 3 is a significant update that brings with it a bunch of great features.  Some of the improvements include: Razor ASP.NET MVC 3 ships with a new view-engine option called “Razor” (in addition to continuing to support/enhance the existing .aspx view engine).  Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow. Unlike most template syntaxes, with Razor you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type.  You can learn more about Razor from some of the blog posts I’ve done about it over the last 6 months Introducing Razor New @model keyword in Razor Layouts with Razor Server-Side Comments with Razor Razor’s @: and <text> syntax Implicit and Explicit code nuggets with Razor Layouts and Sections with Razor Today’s release supports full code intellisense support for Razor (both VB and C#) with Visual Studio 2010 and the free Visual Web Developer 2010 Express. JavaScript Improvements ASP.NET MVC 3 enables richer JavaScript scenarios and takes advantage of emerging HTML5 capabilities. The AJAX and Validation helpers in ASP.NET MVC 3 now use an Unobtrusive JavaScript based approach.  Unobtrusive JavaScript avoids injecting inline JavaScript into HTML, and enables cleaner separation of behavior using the new HTML 5 “data-“ attribute convention (which conveniently works on older browsers as well – including IE6). This keeps your HTML tight and clean, and makes it easier to optionally swap out or customize JS libraries.  ASP.NET MVC 3 now includes built-in support for posting JSON-based parameters from client-side JavaScript to action methods on the server.  This makes it easier to exchange data across the client and server, and build rich JavaScript front-ends.  We think this capability will be particularly useful going forward with scenarios involving client templates and data binding (including the jQuery plugins the ASP.NET team recently contributed to the jQuery project).  Previous releases of ASP.NET MVC included the core jQuery library.  ASP.NET MVC 3 also now ships the jQuery Validate plugin (which our validation helpers use for client-side validation scenarios).  We are also now shipping and including jQuery UI by default as well (which provides a rich set of client-side JavaScript UI widgets for you to use within projects). Improved Validation ASP.NET MVC 3 includes a bunch of validation enhancements that make it even easier to work with data. Client-side validation is now enabled by default with ASP.NET MVC 3 (using an onbtrusive javascript implementation).  
Today’s release also includes built-in support for Remote Validation - which enables you to annotate a model class with a validation attribute that causes ASP.NET MVC to perform a remote validation call to a server method when validating input on the client. The validation features introduced within .NET 4’s System.ComponentModel.DataAnnotations namespace are now supported by ASP.NET MVC 3.  This includes support for the new IValidatableObject interface – which enables you to perform model-level validation, and allows you to provide validation error messages specific to the state of the overall model, or between two properties within the model.  ASP.NET MVC 3 also supports the improvements made to the ValidationAttribute class in .NET 4.  ValidationAttribute now supports a new IsValid overload that provides more information about the current validation context, such as what object is being validated.  This enables richer scenarios where you can validate the current value based on another property of the model.  We’ve shipped a built-in [Compare] validation attribute  with ASP.NET MVC 3 that uses this support and makes it easy out of the box to compare and validate two property values. You can use any data access API or technology with ASP.NET MVC.  This past year, though, we’ve worked closely with the .NET data team to ensure that the new EF Code First library works really well for ASP.NET MVC applications.  These two posts of mine cover the latest EF Code First preview and demonstrates how to use it with ASP.NET MVC 3 to enable easy editing of data (with end to end client+server validation support).  The final release of EF Code First will ship in the next few weeks. Today we are also publishing the first preview of a new MvcScaffolding project.  It enables you to easily scaffold ASP.NET MVC 3 Controllers and Views, and works great with EF Code-First (and is pluggable to support other data providers).  You can learn more about it – and install it via NuGet today - from Steve Sanderson’s MvcScaffolding blog post. Output Caching Previous releases of ASP.NET MVC supported output caching content at a URL or action-method level. With ASP.NET MVC V3 we are also enabling support for partial page output caching – which allows you to easily output cache regions or fragments of a response as opposed to the entire thing.  This ends up being super useful in a lot of scenarios, and enables you to dramatically reduce the work your application does on the server.  The new partial page output caching support in ASP.NET MVC 3 enables you to easily re-use cached sub-regions/fragments of a page across multiple URLs on a site.  It supports the ability to cache the content either on the web-server, or optionally cache it within a distributed cache server like Windows Server AppFabric or memcached. I’ll post some tutorials on my blog that show how to take advantage of ASP.NET MVC 3’s new output caching support for partial page scenarios in the future. Better Dependency Injection ASP.NET MVC 3 provides better support for applying Dependency Injection (DI) and integrating with Dependency Injection/IOC containers. With ASP.NET MVC 3 you no longer need to author custom ControllerFactory classes in order to enable DI with Controllers.  
You can instead just register a Dependency Injection framework with ASP.NET MVC 3 and it will resolve dependencies not only for Controllers, but also for Views, Action Filters, Model Binders, Value Providers, Validation Providers, and Model Metadata Providers that you use within your application. This makes it much easier to cleanly integrate dependency injection within your projects. Other Goodies ASP.NET MVC 3 includes dozens of other nice improvements that help to both reduce the amount of code you write, and make the code you do write cleaner.  Here are just a few examples: Improved New Project dialog that makes it easy to start new ASP.NET MVC 3 projects from templates. Improved Add->View Scaffolding support that enables the generation of even cleaner view templates. New ViewBag property that uses .NET 4’s dynamic support to make it easy to pass late-bound data from Controllers to Views. Global Filters support that allows specifying cross-cutting filter attributes (like [HandleError]) across all Controllers within an app. New [AllowHtml] attribute that allows for more granular request validation when binding form posted data to models. Sessionless controller support that allows fine grained control over whether SessionState is enabled on a Controller. New ActionResult types like HttpNotFoundResult and RedirectPermanent for common HTTP scenarios. New Html.Raw() helper to indicate that output should not be HTML encoded. New Crypto helpers for salting and hashing passwords. And much, much more… Learn More about ASP.NET MVC 3 We will be posting lots of tutorials and samples on the http://asp.net/mvc site in the weeks ahead.  Below are two good ASP.NET MVC 3 tutorials available on the site today: Build your First ASP.NET MVC 3 Application: VB and C# Building the ASP.NET MVC 3 Music Store We’ll post additional ASP.NET MVC 3 tutorials and videos on the http://asp.net/mvc site in the future. Visit it regularly to find new tutorials as they are published. How to Upgrade Existing Projects ASP.NET MVC 3 is compatible with ASP.NET MVC 2 – which means it should be easy to update existing MVC projects to ASP.NET MVC 3.  The new features in ASP.NET MVC 3 build on top of the foundational work we’ve already done with the MVC 1 and MVC 2 releases – which means that the skills, knowledge, libraries, and books you’ve acquired are all directly applicable with the MVC 3 release.  MVC 3 adds new features and capabilities – it doesn’t obsolete existing ones. You can upgrade existing ASP.NET MVC 2 projects by following the manual upgrade steps in the release notes.  Alternatively, you can use this automated ASP.NET MVC 3 upgrade tool to easily update your  existing projects. Localized Builds Today’s ASP.NET MVC 3 release is available in English.  We will be releasing localized versions of ASP.NET MVC 3 (in 9 languages) in a few days.  I’ll blog pointers to the localized downloads once they are available. NuGet Today we are also shipping NuGet – a free, open source, package manager that makes it easy for you to find, install, and use open source libraries in your projects. It works with all .NET project types (including ASP.NET Web Forms, ASP.NET MVC, WPF, WinForms, Silverlight, and Class Libraries).  You can download and install it here. NuGet enables developers who maintain open source projects (for example, .NET projects like Moq, NHibernate, Ninject, StructureMap, NUnit, Windsor, Raven, Elmah, etc) to package up their libraries and register them with an online gallery/catalog that is searchable.  
The client-side NuGet tools – which include full Visual Studio integration – make it trivial for any .NET developer who wants to use one of these libraries to easily find and install it within the project they are working on. NuGet handles dependency management between libraries (for example: library1 depends on library2). It also makes it easy to update (and optionally remove) libraries from your projects later. It supports updating web.config files (if a package needs configuration settings). It also allows packages to add PowerShell scripts to a project (for example: scaffold commands). Importantly, NuGet is transparent and clean – and does not install anything at the system level. Instead it is focused on making it easy to manage libraries you use with your projects. Our goal with NuGet is to make it as simple as possible to integrate open source libraries within .NET projects.  NuGet Gallery This week we also launched a beta version of the http://nuget.org web-site – which allows anyone to easily search and browse an online gallery of open source packages available via NuGet.  The site also now allows developers to optionally submit new packages that they wish to share with others.  You can learn more about how to create and share a package here. There are hundreds of open-source .NET projects already within the NuGet Gallery today.  We hope to have thousands there in the future. IIS Express 7.5 Today we are also shipping IIS Express 7.5.  IIS Express is a free version of IIS 7.5 that is optimized for developer scenarios.  It works for both ASP.NET Web Forms and ASP.NET MVC project types. We think IIS Express combines the ease of use of the ASP.NET Web Server (aka Cassini) currently built-into Visual Studio today with the full power of IIS.  Specifically: It’s lightweight and easy to install (less than 5Mb download and a quick install) It does not require an administrator account to run/debug applications from Visual Studio It enables a full web-server feature set – including SSL, URL Rewrite, and other IIS 7.x modules It supports and enables the same extensibility model and web.config file settings that IIS 7.x support It can be installed side-by-side with the full IIS web server as well as the ASP.NET Development Server (they do not conflict at all) It works on Windows XP and higher operating systems – giving you a full IIS 7.x developer feature-set on all Windows OS platforms IIS Express (like the ASP.NET Development Server) can be quickly launched to run a site from a directory on disk.  It does not require any registration/configuration steps. This makes it really easy to launch and run for development scenarios.  You can also optionally redistribute IIS Express with your own applications if you want a lightweight web-server.  The standard IIS Express EULA now includes redistributable rights. Visual Studio 2010 SP1 adds support for IIS Express.  Read my VS 2010 SP1 and IIS Express blog post to learn more about what it enables.  SQL Server Compact Edition 4 Today we are also shipping SQL Server Compact Edition 4 (aka SQL CE 4).  SQL CE is a free, embedded, database engine that enables easy database storage. No Database Installation Required SQL CE does not require you to run a setup or install a database server in order to use it.  You can simply copy the SQL CE binaries into the \bin directory of your ASP.NET application, and then your web application can use it as a database engine.  No setup or extra security permissions are required for it to run. 
You do not need to have an administrator account on the machine. Just copy your web application onto any server and it will work. This is true even of medium-trust applications running in a web hosting environment. SQL CE runs in-memory within your ASP.NET application and will start-up when you first access a SQL CE database, and will automatically shutdown when your application is unloaded.  SQL CE databases are stored as files that live within the \App_Data folder of your ASP.NET Applications. Works with Existing Data APIs SQL CE 4 works with existing .NET-based data APIs, and supports a SQL Server compatible query syntax.  This means you can use existing data APIs like ADO.NET, as well as use higher-level ORMs like Entity Framework and NHibernate with SQL CE.  This enables you to use the same data programming skills and data APIs you know today. Supports Development, Testing and Production Scenarios SQL CE can be used for development scenarios, testing scenarios, and light production usage scenarios.  With the SQL CE 4 release we’ve done the engineering work to ensure that SQL CE won’t crash or deadlock when used in a multi-threaded server scenario (like ASP.NET).  This is a big change from previous releases of SQL CE – which were designed for client-only scenarios and which explicitly blocked running in web-server environments.  Starting with SQL CE 4 you can use it in a web-server as well. There are no license restrictions with SQL CE.  It is also totally free. Tooling Support with VS 2010 SP1 Visual Studio 2010 SP1 adds support for SQL CE 4 and ASP.NET Projects.  Read my VS 2010 SP1 and SQL CE 4 blog post to learn more about what it enables.  Web Deploy and Web Farm Framework 2.0 Today we are also releasing Microsoft Web Deploy V2 and Microsoft Web Farm Framework V2.  These services provide a flexible and powerful way to deploy ASP.NET applications onto either a single server, or across a web farm of machines. You can learn more about these capabilities from my previous blog posts on them: Introducing the Microsoft Web Farm Framework Automating Deployment with Microsoft Web Deploy Visit the http://iis.net website to learn more and install them. Both are free. Orchard 1.0 Today we are also releasing Orchard v1.0.  Orchard is a free, open source, community based project.  It provides Content Management System (CMS) and Blogging System support out of the box, and makes it possible to easily create and manage web-sites without having to write code (site owners can customize a site through the browser-based editing tools built-into Orchard).  Read these tutorials to learn more about how you can setup and manage your own Orchard site. Orchard itself is built as an ASP.NET MVC 3 application using Razor view templates (and by default uses SQL CE 4 for data storage).  Developers wishing to extend an Orchard site with custom functionality can open and edit it as a Visual Studio project – and add new ASP.NET MVC Controllers/Views to it.  WebMatrix 1.0 WebMatrix is a new, free, web development tool from Microsoft that provides a suite of technologies that make it easier to enable website development.  It enables a developer to start a new site by browsing and downloading an app template from an online gallery of web applications (which includes popular apps like Umbraco, DotNetNuke, Orchard, WordPress, Drupal and Joomla).  Alternatively it also enables developers to create and code web sites from scratch. WebMatrix is task focused and helps guide developers as they work on sites.  
WebMatrix includes IIS Express, SQL CE 4, and ASP.NET - providing an integrated web-server, database and programming framework combination.  It also includes built-in web publishing support which makes it easy to find and deploy sites to web hosting providers. You can learn more about WebMatrix from my Introducing WebMatrix blog post this summer.  Visit http://microsoft.com/web to download and install it today. Summary I’m really excited about today’s releases – they provide a bunch of additional value that makes web development with ASP.NET, Visual Studio and the Microsoft Web Server a lot better.  A lot of folks worked hard to share this with you today. On behalf of my whole team – we hope you enjoy them! Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • One page has more than one URL, so search engines give a penalty. Please help me out.

    - by Ali Demirtas
    Hi I am using prestashop as the cart for my website. I have a problem; the website used to be in dynamic urls. I enabled friendly url writing. The problem is that one page has more than one url. You can access a same page from the dynamic url and static url. In fact a single page has 9 different urls. This obviously creates problems for seo as search engiones penalize my website for this. What can I do to solve this problem? I have no knowledge of programming. Here is the htaccess for the website. Any sample code or help is really appreciated. URL rewriting module activation RewriteEngine on URL rewriting rules RewriteRule ^([a-z0-9]+)-([a-z0-9]+)(-[_a-zA-Z0-9-]*)/([_a-zA-Z0-9-]*).jpg$ /img/p/$1-$2$3.jpg [L,E] RewriteRule ^([0-9]+)-([0-9]+)/([_a-zA-Z0-9-]*).jpg$ /img/p/$1-$2.jpg [L,E] RewriteRule ^([0-9]+)(-[_a-zA-Z0-9-]*)/([_a-zA-Z0-9-]).jpg$ /img/c/$1$2.jpg [L,E] RewriteRule ^lang-([a-z]{2})/([a-zA-Z0-9-])/([0-9]+)-([a-zA-Z0-9-]).html(.)$ /product.php?id_product=$3&isolang=$1$5 [L,E] RewriteRule ^lang-([a-z]{2})/([0-9]+)-([a-zA-Z0-9-]).html(.)$ /product.php?id_product=$2&isolang=$1$4 [L,E] RewriteRule ^lang-([a-z]{2})/([0-9]+)-([a-zA-Z0-9-])(.)$ /category.php?id_category=$2&isolang=$1 [QSA,L,E] RewriteRule ^([a-zA-Z0-9-])/([0-9]+)-([a-zA-Z0-9-]).html(.*)$ /product.php?id_product=$2$4 [L,E] RewriteRule ^([0-9]+)-([a-zA-Z0-9-]).html(.)$ /product.php?id_product=$1$3 [L,E] RewriteRule ^([0-9]+)-([a-zA-Z0-9-])(.)$ /category.php?id_category=$1 [QSA,L,E] RewriteRule ^content/([0-9]+)-([a-zA-Z0-9-])(.)$ /cms.php?id_cms=$1 [QSA,L,E] RewriteRule ^([0-9]+)__([a-zA-Z0-9-])(.)$ /supplier.php?id_supplier=$1$3 [QSA,L,E] RewriteRule ^([0-9]+)_([a-zA-Z0-9-])(.)$ /manufacturer.php?id_manufacturer=$1$3 [QSA,L,E] RewriteRule ^lang-([a-z]{2})/(.*)$ /$2?isolang=$1 [QSA,L,E] Catch 404 errors ErrorDocument 404 /404.php Options +FollowSymLinks RewriteEngine on RewriteCond %{HTTP_HOST} ^.com [NC] RewriteRule ^(.)$ http://www.*.com/$1 [L,R=301] Options +FollowSymLinks RewriteEngine on index.php to / RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.index.php\ HTTP/ RewriteRule ^(.)index.php$ /$1 [R=301,L] Header set Cache-Control: "no-store, no-cache, must-revalidate, pre-check=0, post-check=0, max-age=0"

    Read the article

  • What does ~ at the beginning of a URL in ASP.NET actually do?

    - by MarceloRamires
    I am editing a website that previously used port 80 (the default), which did not need to appear in the URL (because it is the default). For technical reasons the port had to be changed, and now it has to be specified. I can access the main page through ip:port\page like this: 1.2.3.4:81\page.aspx Every link in the website is composed like this: <asp:HyperLink runat="server" Text="random" NavigateUrl="~/fdr/whatever.aspx" /> Whenever I click on a link, the page doesn't load, but the URL appears in the browser's address bar; if I then simply add ":80" after the IP in the URL, it works. Because of the existence of querystrings (in other words, since I already have access to the URL), I previously thought that '~' at the beginning of a URL in a link just meant "stay in the same website and go to this page in this folder", but since the port vanishes, I now assume that the address of the current website is being requested from somewhere (probably IIS). What I want to know, then (instead of having to add the port to each link in my website), is how to set up whatever the ~ queries so that it adds the port somehow. How do I do that?

    Read the article

  • What do I have to change in my PHP/CURL code to retrieve data from a https:// URL?

    - by Edward Tanguay
    I have a PHP file using CURL that accepts a Google Doc URL as a parameter, then returns the plain text of the Google Doc. It worked well until recently when apparently a redirect was added so that the http:// address redirects to the equivalent https:// address, as in this example: http://docs.google.com/View?id=dc7gj86r_20dn2csqg3 So I changed my code to access the https:// address, but it just returns blank. What do I have to change my CURL code so that I can get the HTML text from the https:// address? $url = filter_input(INPUT_GET, 'url',FILTER_SANITIZE_STRING); $validUrlPrefixes[] = "https://docs.google.com"; if(beginsWithOneOfThese($url, $validUrlPrefixes)) { $user_agent = 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729)'; $ch = curl_init(); curl_setopt($ch, CURLOPT_COOKIEJAR, "/tmp/cookie"); curl_setopt($ch, CURLOPT_COOKIEFILE, "/tmp/cookie"); curl_setopt($ch, CURLOPT_URL, $url ); curl_setopt($ch, CURLOPT_FAILONERROR, 1); curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 0); curl_setopt($ch, CURLOPT_RETURNTRANSFER,1); curl_setopt($ch, CURLOPT_TIMEOUT, 15); curl_setopt($ch, CURLOPT_USERAGENT, $user_agent); curl_setopt($ch, CURLOPT_VERBOSE, 0); $rawData = curl_exec($ch); $rawData = cleanText($rawData); if(beginsWith($url, "https://docs.google.com")) { echo qstr::convertGoogleDocContentToText($rawData); die; } echo $rawData; die;
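
    With PHP cURL, HTTPS failures like this usually come down to certificate verification (CURLOPT_SSL_VERIFYPEER / CURLOPT_CAINFO) or to redirects not being followed (CURLOPT_FOLLOWLOCATION is set to 0 above), though without the cURL error message that is only a guess. For comparison, here is the same fetch sketched in Python 3, where urlopen verifies certificates and follows redirects by default.

      # Comparison sketch: fetching the same HTTPS URL in Python 3.
      import urllib.request

      def fetch(url):
          req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
          with urllib.request.urlopen(req, timeout=15) as resp:
              return resp.read().decode("utf-8", errors="replace")

      html = fetch("https://docs.google.com/View?id=dc7gj86r_20dn2csqg3")
      print(html[:200])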

    Read the article

  • Unable to retrieve search results from server side: Facebook Graph API using Python

    - by DjangoRocks
    Hi all, I'm doing some simple Python + FB Graph training on my own, and I faced a weird problem: import time import sys import urllib2 import urllib from json import loads base_url = "https://graph.facebook.com/search?q=" post_id = None post_type = None user_id = None message = None created_time = None def doit(hour): page = 1 search_term = "\"Plastic Planet\"" encoded_search_term = urllib.quote(search_term) print encoded_search_term type="&type=post" url = "%s%s%s" % (base_url,encoded_search_term,type) print url while(1): try: response = urllib2.urlopen(url) except urllib2.HTTPError, e: print e finally: pass content = response.read() content = loads(content) print "==================================" for c in content["data"]: print c print "****************************************" try: content["paging"] print "current URL" print url print "next page!------------" url = content["paging"]["next"] print url except: pass finally: pass """ print "new URL is =======================" print url print "==================================" """ print url What I'm trying to do here is to automatically page through the search results, but trying for content["paging"]["next"] But the weird thing is that no data is returned; i received the following: {"data":[]} Even in the very first loop. But when i copied the URL into a browser, a lot of results were returned. I've also tried a version with my access token and th same thing happens. Can anyone enlighten me? +++++++++++++++++++EDITED and SIMPLIFIED++++++++++++++++++ ok thanks to TryPyPy, here's the simplified and edited version of my previous question: Why is that: import urllib2 url = "https://graph.facebook.com/searchq=%22Plastic+Planet%22&type=post&limit=25&until=2010-12-29T19%3A54%3A56%2B0000" response = urllib2.urlopen(url) print response.read() result in {"data":[]} ? But the same url produces a lot of data in a browser? Anyone? Best Regards.
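
    One thing a sketch like the following avoids is hand-building the query string: urlencode supplies the parameters in the right form after the "?", and an access token can be attached when the endpoint requires one (whether it does depends on the Graph API version, so ACCESS_TOKEN here is just a placeholder). This is Python 3; the original code above is Python 2 / urllib2.

      # Sketch: build the Graph search URL with urlencode so separators and
      # encoding cannot silently go missing; ACCESS_TOKEN is a placeholder.
      import json
      import urllib.parse
      import urllib.request

      BASE = "https://graph.facebook.com/search"
      ACCESS_TOKEN = None  # fill in if the endpoint rejects anonymous requests

      def search(term, search_type="post", limit=25):
          params = {"q": term, "type": search_type, "limit": limit}
          if ACCESS_TOKEN:
              params["access_token"] = ACCESS_TOKEN
          url = BASE + "?" + urllib.parse.urlencode(params)
          with urllib.request.urlopen(url) as resp:
              return json.loads(resp.read().decode("utf-8"))

      result = search('"Plastic Planet"')
      for item in result.get("data", []):
          print(item.get("id"), item.get("message", "")[:60])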

    Read the article

  • Can Campaign URL tags cause a soft 404 error?

    - by user35306
    I was checking out one of my company's website's Webmaster Tools to analyze the cause behind some soft 404 errors and discovered that a few of the older errors had affiliate mp referral tags listed as the relative URLs. Since these are older problems and I don't see too many of them coming up in the last few months, I don't think it's still a problem. I'm just curious whether it's possible to cause a soft 404 by improperly copying the campaign or referral tag into the URL.

    Read the article
