Search Results

Search found 5416 results on 217 pages for 'urls py'.

Page 113/217

  • Job Search Engine Url Structure Issue [closed]

    - by Justin
    Possible Duplicate: What is the best structure of SEO-friendly URLs? I am working on a job board, and I'm trying to figure out a good design for the URL structure. Some things that I have found through research:
    - 100-150 characters long is ideal
    - 3-5 words in your URL, according to Matt Cutts
    - Use .htaccess to force clean URLs
    - Do not duplicate data (important)
    - Clean and precise, describing the content
    - Use hyphens
    On the homepage, I try to detect the user's location based on IP, but this isn't always accurate or reliable, so until they put in their city/location I can't always use this structure, though it is potentially workable. For searching, a form posts to a results page: domain.com/jobs/[city]/[search], e.g. domain.com/jobs/toronto/sales manager/ or domain.com/search/jobs/toronto/sales manager/, or do I remove the word "jobs" and just use "search"? I'm trying to keep good search terms in the URL, but also keep it clean and concise. Can someone give me some feedback and thoughts as to 'why'...
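    As an illustration of the slug-building this implies, here is a minimal sketch in Python; the function names and the exact hyphenation rules are placeholders, not something from the question:

        import re

        def slugify(text):
            # Lower-case, keep letters and digits, join words with hyphens.
            words = re.findall(r'[a-z0-9]+', text.lower())
            return '-'.join(words)

        def job_search_url(city, query):
            # Build a clean search path like /jobs/toronto/sales-manager/
            return '/jobs/%s/%s/' % (slugify(city), slugify(query))

        print(job_search_url('Toronto', 'Sales Manager'))  # -> /jobs/toronto/sales-manager/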

    Read the article

  • Website with sections in Drupal?

    - by Matt Hampel
    What is the best way to create a website with sections in Drupal? Users need to be able to add, remove, and nest pages fairly easily. Pages added to a section should get an appropriate URL, like "/[section name]/[page title]". This seems like a straightforward task, but I can't find the right combination of tools to do it. Subsite comes close but, for some odd reason, doesn't set up the correct content paths. The closest I got was creating a book for each subsection, but that feels like using the wrong tool for the job. Edited with my solution: I used Organic Groups with Pathauto, and set Pathauto so that pages in groups had URLs of the form [group path]/[page title].

    Read the article

  • Crashing trying to install Ubuntu 12.04 LTS

    - by Daniel Evans
    Hardware: Dell Inspiron 1545. Steps are as follows:
    1. Insert 64-bit Ubuntu 12.04 disc
    2. Boot computer
    Output is as follows:

        error: unexpectedly disconnected from boot status daemon
        Generating locales...
          en_US.UTF-8... done
        Generation complete.
        MEMORY-ERROR: glib-compile-schemas[569]: GSlice: assertion failed: aligned_memory == (gpointer) addr
        Aborted
        pwconv: failed to change the mode of /etc/passwd- to 0600
        MEMORY-ERROR: [996]: GSlice: assertion failed: aligned_memory == (gpointer) addr
        MEMORY-ERROR: glib-compile-schemas[1034]: GSlice: assertion failed: aligned_memory == (gpointer) addr
        Aborted
        /usr/lib/python2.7/dist-packages/LanguageSelector/LocaleInfo.py:256: UserWarning: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
          warnings.warn(msg.args[0].encode('UTF-8'))
        Using CD-ROM mount point /cdrom
        ... etc etc ...

    It ends up at a prompt line: ubuntu@ubuntu:~$

    Read the article

  • Tracking AdWord ads with different text in Google Analytics

    - by at01
    I'm trying to see how the text in my Google AdWords ads affects my metrics in Analytics. I have auto-linking enabled, so I figured I would be able to see this in Analytics automatically. Unfortunately, if I try to add a second dimension of Traffic Sources > Ad Content, the metrics are only split by the ad's headline, and most of my tests change only the ads' descriptions. So I guess I need to add a tracking parameter like ?campaign=special_text to my URLs? Or is there a way to see the ads split by ad description? Should I add the full suite of utm_campaign/utm_medium/etc. parameters? What's the proper way to track these ads, which are mostly similar except for the ad descriptions?
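    For what it's worth, utm_content is the campaign parameter Google Analytics intends for distinguishing ad variations. A minimal sketch of tagging a destination URL by hand; the parameter values are placeholders:

        import urllib  # urllib.parse in Python 3

        def tag_destination_url(base_url, source, medium, campaign, content):
            # utm_content is the slot meant for the ad variation,
            # e.g. which description text this ad carries.
            params = urllib.urlencode({
                'utm_source': source,
                'utm_medium': medium,
                'utm_campaign': campaign,
                'utm_content': content,
            })
            return base_url + ('&' if '?' in base_url else '?') + params

        print(tag_destination_url('http://example.com/landing',
                                  'google', 'cpc', 'spring_sale', 'description_b'))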

    Read the article

  • Will Tracking Subdomains as Single Entity with Google Analytics Help SEO? [closed]

    - by Sam Gridley
    Possible Duplicate: Does Google Analytics data affect SEO? We have two subdomains, one for our blog and one for our ecommerce store. The blog serves to bring traffic, and the store is how we monetize the site. We have designed them to appear as one large site, but I know Google sees them as two sites. Here is how the subdomains look: www.example.com (store) and blog.example.com (blog). I believe I can configure Analytics to use subdomain tracking as explained here: http://support.google.com/googleanalytics/bin/answer.py?hl=en&answer=55524. But my question is whether this will cause Google to see our two subdomains as one larger domain for SEO purposes. In other words, is there any relationship between how you configure Google Analytics and how Google indexes and ranks your website(s) and pages? Is there anything I need to do in Analytics or Webmaster Tools to make Google aware that these two subdomains work together as one website? Thanks! Sam

    Read the article

  • Keep getting messages about internal system errors

    - by Tomas Lycken
    I keep getting popups about internal system errors (see screenshot below) at irregular intervals (several times a day), and I don't know what to do about them. If I continue through the dialog and try to report the error back to the Ubuntu project, I get a message stating that development on this version of Ubuntu has been completed, and that I should ask for help here if I don't know what to do about it. I don't. If I show the details of the error message, the "executable path" parameter shows /usr/share/apport/apport-gpu-error-intel.py. Is this a bug I should report to Launchpad, or just a configuration error somewhere? If it's a bug, how do I collect the data I (and the devs) need? Update in response to a comment: I am running an ASUS N53SN, sporting an Intel Core i7 2630QM CPU and an NVidia GeForce 550M GPU.

    Read the article

  • SEO blog Indexing: wordpress.com subdomain vs a registered domain?

    - by rumspringa00
    I've used WordPress for a few of my clients' sites, mostly small businesses and ecommerce sites. I have found through Google Analytics, as well as the All in One Webmaster plugin, that when it comes to social media, using WordPress is a surefire way of getting your site indexed by Google and occasionally Bing and Yahoo. Since I am a heavy WP user, I'd like to contribute by registering a wordpress.com subdomain for my portfolio. When using a WP installation on a WP subdomain, e.g. myportfolio.wordpress.com, will the site be more or less likely to be indexed than with a generic myportfolio.com domain? I've seen mixed opinions: some people seem to favor a WP domain for URL output, while others say that it's a moot point and that Google will not favor a WP domain over a dot-com domain as long as your meta tags are updated and content is keyword-optimized. I tend to disagree, and believe a WP domain would be more likely to be indexed and would output more URLs than an individual, laconic domain like myportfolio.com. Am I wrong?

    Read the article

  • How to test robots.txt in googlebot to find out what is being indexed

    - by Amar Jarubula
    This question is a continuation of this answer: How to check if googlebot will index a given url? As was suggested, I did go to Webmaster Tools and tested the contents of my robots.txt file. However, this just tells me whether the file itself is good enough or not, whereas for my scenario I need to test whether URLs disallowed by a pattern stay out of the index. For example, I have something like this in my robots.txt: disallow:/pattern* My understanding is that URLs matching this pattern should not be crawled, but how do I test that the rule is enforced when the website is indexed?
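    A quick local check is possible with Python's standard-library robots.txt parser. A minimal sketch, with placeholder rules and URLs; note that the stdlib parser does plain prefix matching, so Google's "*" wildcard extension is not interpreted, and prefix rules like "Disallow: /pattern" are the safe form to test this way:

        import robotparser  # urllib.robotparser in Python 3

        rp = robotparser.RobotFileParser()
        # Feed the rules directly instead of fetching http://example.com/robots.txt
        rp.parse([
            'User-agent: *',
            'Disallow: /pattern',
        ])

        for url in ['http://example.com/pattern/x', 'http://example.com/other']:
            print('%s allowed: %s' % (url, rp.can_fetch('Googlebot', url)))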

    Read the article

  • Few New Features Added to Geekswithblogs.net

    - by Staff of Geeks
    After reviewing some of the feedback from our bloggers, we added a couple of new features to Geekswithblogs.net, and there are still more to come. Here is a list of the features we added:
    - Fixed the Twitter parser to better support URLs and hashtags
    - Added some hooks behind the scenes to tag posts with common keywords automatically
    - Added Facebook Likes and Tweets to the bottom of every post
    - Cleaned up a few skins
    - Images on the main page for bloggers who use Gravatar or Twitter integration
    - Random bug fixes based on the logs
    We are definitely working to make Geekswithblogs.net faster and better. If you have any suggestions, please feel free to share them with the team. On a side note, if that suggestion is "move to WordPress", I will reply to you with "stop writing ASP.NET for your day job and move to PHP"; that request is the equivalent in my eyes. If we have enough bloggers leave the Microsoft .NET platform for their main source of income, we might consider it. Technorati Tags: Geekswithblogs.net, Features, Version 4.0

    Read the article

  • How to crawl a webPage with dynamic content added by javascript

    - by blunderboy
    I gather there is news that Google's bots now have the capability to understand our JavaScript code, which means it is possible to fully crawl a webpage that has lazy loading enabled. I am using Apache Nutch to crawl websites, but I don't think it has the capability to fetch the URLs that JavaScript injects into the HTML page as it is scrolled down. I see a lot of websites doing lazy loading for performance reasons. So can somebody please explain how I can crawl the data that arrives in the HTML page on lazy load (on scrolling the page down)?
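    Nutch itself will not execute the page's JavaScript. One common workaround (a suggestion, not something from the question) is to render the page in a real browser via Selenium, scroll it so the lazy loader fires, and then hand the discovered links to the crawler. A rough sketch, with the URL and scroll count as placeholders:

        import time
        from selenium import webdriver

        driver = webdriver.Firefox()
        driver.get('http://example.com/lazy-page')

        # Scroll to the bottom a few times so lazy-loaded content is injected.
        for _ in range(5):
            driver.execute_script('window.scrollTo(0, document.body.scrollHeight);')
            time.sleep(2)  # crude wait for the injected content to arrive

        # Collect every link now present in the rendered DOM.
        links = [a.get_attribute('href')
                 for a in driver.find_elements_by_tag_name('a')]
        driver.quit()
        print('\n'.join(l for l in links if l))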

    Read the article

  • How long does it take for Google Webmasters to index site after submitting sitemap? [closed]

    - by Venkatesh Hodavdekar
    Possible Duplicate: Why isn't my website in Google search results? I submitted my website to Google search today through Google Webmaster Tools, using a sitemap. The status on the sitemap says OK, and it shows that 12 URLs have been recognized. I was wondering how long it takes for the links to get indexed, as the indexed-URLs option says "No data available. Please check back soon." I am not sure whether it is showing this message due to some error, or whether everything is fine.

    Read the article

  • Possible automated Bing Ads fraud?

    - by Gary Joynes
    I run a website that generates life insurance leads. The site is very simple: (a) there is a form for capturing the user's details, life insurance requirements, etc., and (b) a quote comparison feature. We drive traffic to our site using conventional Google AdWords and Bing Ads campaigns. Since the 6th of January we have received 30-40 dodgy leads which have the following in common:
    - All created between 2 and 8 AM
    - Phone number always in the format "123 1234 1234"
    - Name, date of birth, policy details, and address all seem valid and are unique across the leads
    - Email addresses from "disposable" email accounts, including dodgit.com, mailinator.com, trashymail.com, and pookmail.com
    - Some leads come from the customer form, some via the quote comparison feature
    - All come from different IP addresses
    - We get the keyword information passed through from the URLs
    - All look to be coming from Bing Ads
    - All come from Internet Explorer v7 and v8
    The consistency of the data and the random IP addresses seem to suggest an automated approach, but I'm not sure of the intent. We can handle identifying these leads within our database, but is there any way of stopping this at the ad level, i.e. before the click-through?

    Read the article

  • Problem downloading .exe file from Amazon S3 with a signed URL in IE

    - by Joe Corkery
    I have a large collection of Windows .exe files which are being stored and distributed using Amazon S3. We use signed URLs to control access to the files, and this works great except in one case: downloading a .exe file with Internet Explorer (version 8). It works just fine in Firefox. It also works fine if you don't use a signed URL (but that is not an option). What happens is that the IE downloader changes the name from 'myfile.exe' to 'myfile[1]', and Windows no longer recognizes it as an executable. Any advice would be greatly appreciated. Thanks.
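    One avenue worth testing (a sketch under assumptions, not a confirmed fix) is to pin the filename and content type into the signed URL via Content-Disposition response headers, since IE 8 is prone to mangling names it cannot infer. Using the boto library, with placeholder credentials, bucket, and key:

        from boto.s3.connection import S3Connection

        conn = S3Connection('ACCESS_KEY', 'SECRET_KEY')  # placeholders
        url = conn.generate_url(
            expires_in=3600,      # signature valid for one hour
            method='GET',
            bucket='my-bucket',   # placeholder bucket
            key='myfile.exe',     # placeholder key
            response_headers={
                'response-content-disposition': 'attachment; filename=myfile.exe',
                'response-content-type': 'application/octet-stream',
            },
        )
        print(url)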

    Read the article

  • How can I backup my PPAs?

    - by Scaine
    Related to this question. But my concern is that over the past year most of my more interesting (or most used) applications have come from PPAs, and just backing up my sources list won't add the associated Launchpad keys the way that add-apt-repository does. So I'm looking for a way to list all the PPA URLs (like ppa:chromium-daily/stable) so that I can easily script a series of add-apt-repository commands to add them to a new installation gracefully. Short of dumping my bash history, of course, which might be feasible depending on how far back that file goes.
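    A minimal sketch of the kind of script this suggests: scan the APT source lists for ppa.launchpad.net entries and turn them back into add-apt-repository commands. The paths follow the standard Ubuntu layout; treat it as illustrative:

        import glob
        import re

        # PPA sources look like:
        #   deb http://ppa.launchpad.net/<owner>/<name>/ubuntu <release> main
        PPA = re.compile(r'^deb\s+https?://ppa\.launchpad\.net/([^/\s]+)/([^/\s]+)/')

        ppas = set()
        for path in ['/etc/apt/sources.list'] + glob.glob('/etc/apt/sources.list.d/*.list'):
            try:
                for line in open(path):
                    m = PPA.match(line.strip())
                    if m:
                        ppas.add('ppa:%s/%s' % (m.group(1), m.group(2)))
            except IOError:
                pass  # unreadable file; skip it

        for ppa in sorted(ppas):
            print('sudo add-apt-repository %s' % ppa)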

    Read the article

  • Is there a way of using HTTPS with Amazon's CloudFront CDN and CNAMEs?

    - by Metalshark
    We use Amazon's CloudFront CDN with custom CNAMEs hanging under the main domain (static1.example.com). We can break this uniform appearance and use the original whatever123wigglyw00.cloudfront.net URLs to get HTTPS, but is there another way? Do Amazon or any other similar providers offer HTTPS CDN hosting on custom domains? Is TLS with its selective encryption (SNI: Server Name Indication) available for use somewhere? Footnote: I assume the answer is no, but I ask in the hope that someone knows. EDIT: Now using Google App Engine https://developers.google.com/appengine/docs/ssl for CDN hosting with SSL support.

    Read the article

  • Ripping MP3s in Rhythmbox Ubuntu 12.10 (64 bit)?

    - by James Fellows Yates
    I installed Ubuntu 12.10 (64-bit) a couple of days ago, and today I tried ripping a CD in the MP3 format. Whenever I try to rip, it says it is missing an extra multimedia plugin, "GStreamer extra plug-ins (i386)". I then try to install the :i386 version of the gstreamer-ugly plugins, but then I get the same problem with the ID3 demuxer (or something similar). The terminal output I get for both problems (the same, but with "MPEG-1 Layer 3 (MP3) encoder" replaced by the ID3-demuxer name) is:

        james@clefairy:~$ rhythmbox
        (rhythmbox:24122): GLib-GObject-CRITICAL **: g_object_unref: assertion `G_IS_OBJECT (object)' failed
        Rhythmbox-Message: Missing plugin: gstreamer|0.10|rhythmbox|MPEG-1 Layer 3 (MP3) encoder|encoder-audio/mpeg, mpegversion=(int)1, layer=(int)3
        /usr/lib/python2.7/dist-packages/gobject/constants.py:24: Warning: g_boxed_type_register_static: assertion `g_type_from_name (name) == 0' failed
          import gobject._gobject

    It doesn't help that each time I have to install/remove the entire gstreamer-ugly collection; I can't find that specific file. The CD plays fine; it's the ripping plugin that doesn't seem to work. I didn't have this problem previously on 12.04 (64-bit).

    Read the article

  • Installing Oracle 11g SOA Suite?

    - by asantaga
    Are you working for an SI like Accenture or Cap Gemini? Are you a sales consultant who needs to install software quickly? Well, I'm sure if you're reading this you probably are. Anyway, if you're like me, and, like many techies, reading manuals isn't natural to you, you'll download the software, try to install it, and then ultimately fail, or take a lot longer than you should. However, never fear, help is here! For Oracle 11g SOA Suite (PS3), a good friend of mine, a SOA 11g PM in the States, has written a quick-start document, and it's on OTN. Although the document is PS3-focused, apart from the download URLs it is also totally applicable to PS4. The document can be found at this link.

    Read the article

  • Python service using Upstart on Ubuntu

    - by Soumya Simanta
    I want to deploy a heartbeat service (a Python script) as a service using Upstart. My understanding is that I have to add an /etc/init/myheartbeatservice.conf with the following contents:

        # my heartbeat service
        description "Heartbeat monitor"

        start on startup
        stop on shutdown

        script
            exec /path/to/my/python/script.py
        end script

    My script starts another service process, then monitors the processes and regularly sends a heartbeat to an outside server. Are startup and shutdown the correct events? Also, my script creates a new thread. I'm assuming I also need to add "expect fork"/"expect daemon" to my conf file? Thanks.
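    For reference, a hedged variant of the job file (a sketch, not a confirmed answer): Upstart jobs more commonly key off runlevels than "startup", and "expect fork" / "expect daemon" matter only if the process backgrounds itself; threads alone need neither stanza.

        # /etc/init/myheartbeatservice.conf -- illustrative only
        description "Heartbeat monitor"

        # 'start on startup' fires very early, before filesystems and
        # networking are guaranteed; runlevel events are the usual choice.
        start on runlevel [2345]
        stop on runlevel [!2345]

        respawn

        # Add 'expect fork' (one fork) or 'expect daemon' (two forks) only
        # if the script forks into the background.
        exec /path/to/my/python/script.py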

    Read the article

  • My blog which gets 300+ daily impressions has stopped appearing on the 1st page of Google

    - by Sangram
    I have run a blog about placement papers since December 2010, and my monthly impressions are around 4,000. For the last two days my blog has disappeared from Google search engine result pages, and impressions have dropped drastically (please check the stat reports). My blog is still in the search engine, because when I search site:mydomain.com on Google I can see all my pages indexed there, but the pages that used to appear on the first or second results page of Google no longer appear. Example: if I search with the query "GE round 2 code writing test" on bing.com or Yahoo search, the first link on the result page is my blog, but if you do the same on Google, my URLs do not appear even in the first three result pages. I used to get lots of visitors through these search queries.

    Read the article

  • SOLVED BleachBit: How to Completely Clear URL History in Firefox?

    - by tSquirrel
    14.04 / Firefox 29.0. I've been using BleachBit to clear usage/file history, and for the most part it works great. However, it doesn't seem to clear the website hostnames out of the URL bar at all. These addresses are not bookmarked. Also, the full URL isn't preserved, just the hostname.
    1. Visit site http://www.bluesnews.com/some_random_URL_string
    2. Exit Firefox
    3. Run BleachBit, with ALL Firefox options selected
    4. Restart Firefox
    5. Check history: completely empty, other than bookmarked sites (www.bluesnews is NOT bookmarked)
    6. Type "blue", which Firefox automatically completes as "http://www.bluesnews.com/"
    Alternate step 3: use Firefox's built-in "Clear History" and select ALL entries with a time frame of "Everything". Same result as above. My inquiry in the BleachBit forums hasn't been responded to. I found Dan's proposed solution; however, changing autocomplete in about:config only turns the function off, it doesn't actually stop storing URLs. SOLVED: see my comment in the "Answer" response from Tim.
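    For background, the location bar's completions come from the places.sqlite database in the Firefox profile. A hedged sketch of purging matching entries directly with Python's sqlite3 module, assuming the standard moz_places/moz_historyvisits schema; the profile path and hostname are placeholders, Firefox must be closed first, and bookmarked rows should be left alone:

        import sqlite3

        # Placeholder profile path; look under ~/.mozilla/firefox/
        db = sqlite3.connect('/home/user/.mozilla/firefox/xxxxxxxx.default/places.sqlite')

        # Delete visit records first (they reference moz_places rows) ...
        db.execute("DELETE FROM moz_historyvisits WHERE place_id IN "
                   "(SELECT id FROM moz_places WHERE url LIKE ?)", ('%bluesnews%',))
        # ... then the place entries themselves.
        db.execute("DELETE FROM moz_places WHERE url LIKE ?", ('%bluesnews%',))
        db.commit()
        db.close()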

    Read the article

  • Best way to redirect users back to the pretty URL who land on the _escaped_fragment_ one?

    - by Ryan
    I am working on an AJAX site and have successfully implemented Google's AJAX-crawling recommendation by creating _escaped_fragment_ versions of each page for it to index. Thus each page has two URLs: pretty: example.com#!blog; ugly: example.com?_escaped_fragment_=blog. However, I have noticed in my analytics that some users are arriving on the site via the "ugly" URL, and I am looking for a clean way to redirect them to the pretty URL without impacting Google's ability to index the site. I have considered using a 301 redirect in the header, but fear that Googlebot might try to follow it and end up in an endless loop. I have also considered using a JavaScript redirect that Googlebot wouldn't execute, but fear that Google may interpret this as cloaking and penalize the website. Is there a good, clean, acceptable way to redirect real users away from the ugly URL if for some reason or another they end up arriving at the site that way?
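    For context, the mapping between the two forms is mechanical under Google's AJAX-crawling scheme, so the redirect target can be derived from the ugly URL itself. A small illustrative helper (the function name is a placeholder) that recovers the pretty URL, which whatever redirect mechanism is chosen could then point at:

        import urlparse  # urllib.parse in Python 3

        def pretty_from_ugly(url):
            # Turn example.com/?_escaped_fragment_=blog back into example.com/#!blog
            parts = urlparse.urlparse(url)
            fragment = urlparse.parse_qs(parts.query).get('_escaped_fragment_', [''])[0]
            if not fragment:
                return url  # not an ugly URL; leave it alone
            base = urlparse.urlunparse((parts.scheme, parts.netloc, parts.path, '', '', ''))
            return '%s#!%s' % (base, fragment)

        print(pretty_from_ugly('http://example.com/?_escaped_fragment_=blog'))
        # -> http://example.com/#!blog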

    Read the article

  • Why is it taking so long to open the Ubuntu Help Center?

    - by Agmenor
    When I click on the Help Center icon in the 'System' menu, it takes more than a minute to launch the program. More than a minute, for a text-only program that looks like a website! All my other programs work fine, and I have seen this problem on other computers too. Is there a reason for this? Will it be fixed? I think it is an important issue for beginners. In response to Scaine, the result of the command software-center is the following:

        Traceback (most recent call last):
          File "/usr/share/software-center/update-software-center-agent", line 72, in <module>
            db = xapian.WritableDatabase(pathname, xapian.DB_CREATE_OR_OVERWRITE)
          File "/usr/lib/python2.6/dist-packages/xapian.py", line 3195, in __init__
            _xapian.WritableDatabase_swiginit(self, _xapian.new_WritableDatabase(*args))
        xapian.DatabaseLockError: Unable to acquire database write lock on /home/agmenor/.cache/software-center/software-center-agent.db.tmp: already locked
        2011-01-11 19:57:24,495 - softwarecenter.app - INFO - software-center-agent finished with status 1

    Read the article

  • Google Webmaster Tools showing 6 pages submitted, 0 indexed, yet I can see them all when I search in Google?

    - by sam
    I have a small 'brochure' type site with 6 pages, and I can see all the pages showing up in Google when I search for my site. But in Webmaster Tools, under the sitemaps section, it says 6 submitted (the blue bar of the graph), but the indexed pages (the red bar) show 0 indexed, even though they seem to be indexed. Any idea why this is? I don't really think it's that important, as the pages are still indexed, but it just seems odd.
    UPDATE 9/3/12: Having just looked in Google Webmaster Tools, it shows that there are 11 pages indexed under the Health > Index Status tab, but under the Optimization > Sitemaps tab it shows 6 URLs submitted and only 1 indexed. Please see the images below.
    Index status: [image]
    Sitemap status: [image]

    Read the article

  • Website falsely blocked because of spam. Does anyone know how we should proceed?

    - by Thomas Crepain
    I'm responsible for ICT at FOS Open Scouting, a Belgian scouting organisation. Our website was hacked a few years back and was blocked by Facebook as a result. After we regained control of the site, Facebook continued to block our domain, and this is causing us a number of problems. We have tried many times in the past year to contact Facebook using their 'I am blocked from adding content' form (https://www.facebook.com/help/contact.php?show_form=block_appeal), to no avail. The blocked URLs are http://www.fos.be and http://www.fosopenscouting.be. Does anyone know how we should or could proceed?

    Read the article

  • Static HTML to Wordpress Migration SEO Implications?

    - by Kayle
    Recently, I migrated a client's site to a new server and a new home within WordPress, so they could edit their website more easily and start a blog section. The static site was 10 years old and, according to my client, had consistently ranked at place #3 for its primary keyword; it has dropped to rank #6-8 following the migration. At launch, we made sure the URLs were identical (save the removal of ".htm", which we used 301 redirects to compensate for), and we generated a new XML sitemap and pinged Google with the new site. We keep a 404 log to make sure we're not losing any incoming links. We also have Google Webmaster Tools on this site, with zero errors/suggestions; everything seems OK. I was told by numerous sources that Google would not penalize us for the use of 301s, but it's the only thing I can think of right now that is different about the site, other than the platform. Any ideas about what we could be getting docked for?
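    For reference, the ".htm" removal described above is typically a one-rule 301 in Apache; a minimal sketch, assuming mod_rewrite is enabled and the WordPress permalinks match the old paths minus the extension:

        # Illustrative .htaccess fragment: permanently redirect /page.htm to /page
        RewriteEngine On
        RewriteRule ^(.*)\.htm$ /$1 [R=301,L]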

    Read the article
