Search Results

Search found 5416 results on 217 pages for 'urls py'.


  • mod_rewrite and SEO friendliness

    - by John Doe
    My website has an atypical structure and I'm not sure if this could create problems in the long run, especially for SEO positioning purposes. I have a single, large PHP script, and I use the Apache module mod_rewrite in the .htaccess file to create friendly URLs, for example:

        RewriteRule ^$ /index.php?section=Main
        RewriteRule ^createArticle$ /index.php?section=Main&view=CreateArticle
        RewriteRule ^configuration$ /index.php?section=Configuration
        RewriteRule ^article/([0-9]{1,10})$ /index.php?section=Article&view=Default&id=$1
        RewriteRule ^deleteArticle/([0-9]{1,10})$ /index.php?section=Article&view=Delete&id=$1
        RewriteRule ^reportArticle/([0-9]{1,10})$ /index.php?section=Article&view=Report&id=$1
        RewriteRule ^logIn$ /index.php?section=Authentication
        ...

    So, www.example.com/index.php?section=Article&view=Default&id=105 would become www.example.com/article/105. The only real physical file is index.php, in which the parameters of the queried URL are processed and the corresponding result is output. My question is: do crawling robots (e.g. Googlebot) recognize these links? Do they index the HTML output by index.php for the given parameters as if it were an actual HTML file? Also, would this become a problem when creating a Sitemap?
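
    On the Sitemap point, a sitemap simply lists the public-facing (rewritten) URLs, so the single physical index.php is irrelevant to it. A minimal, hypothetical Python sketch that writes such a sitemap (the domain, paths and article IDs are placeholders, not taken from any real site):

        # Hypothetical sketch: build a sitemap.xml from the friendly (rewritten) URLs.
        # The domain and the list of article IDs are placeholders.
        from xml.etree import ElementTree as ET

        BASE = "http://www.example.com"
        static_paths = ["", "createArticle", "configuration", "logIn"]
        article_ids = [103, 104, 105]  # would normally come from the database

        urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
        for path in static_paths + ["article/%d" % i for i in article_ids]:
            url = ET.SubElement(urlset, "url")
            ET.SubElement(url, "loc").text = "%s/%s" % (BASE, path)

        ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)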

    Read the article

  • SEO: Getting site to show in location-specific searches

    - by willvv
    I'm really new to this SEO world and I've been reading a lot to try and figure it out. We have a site, moodbond.com, that allows users to browse/create events anywhere, and we fill it with content from the main cities in the US. We would like it to show up for searches like "events in san francisco" or "what to do in new york"; however, since the site is not really location-specific, I'm not really sure where to begin. I've been thinking about a couple of things, maybe you can help me decide if these would be a good way to start or if I should try something different.

    1. Allow location-specific URLs (e.g. moodbond.com/browse/san-francisco) that just show the main page centered on San Francisco.
    2. Change the headers/title of the page so they adapt automatically to the city being browsed (and change dynamically as the user changes the location of the map).
    3. Add internal links to different locations, e.g. a link in the footer of the page that says "Events in Seattle" and makes the site load events in that city (this would probably depend on implementing #1).

    What do you guys think? Will any of these really help or should I look for a different approach? Any advice is welcome. Thanks

    Read the article

  • GAPI output doesn't match Google Analytics website

    - by Yekver
    I have to get the main info about my Google Analytics Goals. I'm using the GAPI lib, with this code:

        <?php
        require_once 'conf.inc';
        require_once 'gapi.class.php';

        $ga = new gapi(ga_email, ga_password);

        $dimensions = array('pagePath', 'hostname');
        $metrics = array('goalCompletionsAll', 'goalConversionRateAll', 'goalValueAll');

        $ga->requestReportData(ga_profile_id, $dimensions, $metrics, '-goalCompletionsAll', '', '2012-09-07', '2012-10-07', 1, 500);
        $gaResults = $ga->getResults();

        foreach($gaResults as $result) {
            var_dump($result);
        }

    and this is the code's output:

        object(gapiReportEntry)[7]
          private 'metrics' =>
            array (size=3)
              'goalCompletionsAll' => int 12031
              'goalConversionRateAll' => float 206.93154454764
              'goalValueAll' => float 0
          private 'dimensions' =>
            array (size=2)
              'pagePath' => string '/catalogs.php' (length=13)
              'hostname' => string 'www.example.com' (length=13)
        object(gapiReportEntry)[6]
          private 'metrics' =>
            array (size=3)
              'goalCompletionsAll' => int 9744
              'goalConversionRateAll' => float 661.05834464043
              'goalValueAll' => float 0
          private 'dimensions' =>
            array (size=2)
              'pagePath' => string '/price.php' (length=10)
              'hostname' => string 'www.example.com' (length=13)

    What I see on the Google Analytics website on the Goal URLs page, for the same date range, is:

        Goal Completion Location    Goal Completions    Goal Value
        1. /price.php               9,396               $0.00
        2. /saloni.php              3,739               $0.00

    As you can see, the outputs don't match. Why? What's wrong?

    Read the article

  • Setting up a Google Analytics Campaign

    - by Ashfame
    I will be doing a bunch of things to give one of my projects (the main app) a big initial push, for which I will be building a few small Facebook apps that will help promote the main app. Traffic from these apps needs to be tracked individually. My main app will be posting on users' walls when they need to be notified, and traffic from those posts needs to be tracked. Traffic from emails sent by the main app needs to be tracked too, split by type of email. I need to track all of these (and possibly a couple more), but I need to be sure that I build my campaign URLs correctly, as I won't get another chance to fix it.

    Correct me where I am wrong. For emails:

    - Campaign Name: Launch
    - Campaign Medium: Email
    - Campaign Source: Type1 or Type2 (I can break it down for different types of email, right?)

    For apps:

    - Campaign Name: Launch
    - Campaign Medium: Apps
    - Campaign Source: App1 or App2 (I can break it down here for different apps, right?)

    What if I want to track two different links within a single email or a single app? Is there any way of tracking them individually while still tracking them as one, since tracking them as one makes more sense for me? Campaign Term and Campaign Content are irrelevant in my case, or can/should I use them for something? And I will also be tracking traffic of the different apps. Should I do more? Let me know if my scenario wasn't clear enough and I need to explain more.
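
    For reference, these campaign fields map to the standard utm_* query parameters (utm_campaign, utm_medium, utm_source, utm_content, utm_term), and utm_content is the usual way to tell two links apart inside the same email or app while keeping the same campaign/medium/source. A small illustrative Python sketch (the landing URL and labels are made up):

        # Illustrative only: compose Google Analytics campaign URLs with utm_* parameters.
        # utm_content distinguishes two links that share the same campaign/medium/source.
        from urllib.parse import urlencode

        def campaign_url(base, campaign, medium, source, content=None, term=None):
            params = {"utm_campaign": campaign, "utm_medium": medium, "utm_source": source}
            if content:
                params["utm_content"] = content
            if term:
                params["utm_term"] = term
            return base + "?" + urlencode(params)

        # Two links in the same Type1 email, told apart by utm_content:
        print(campaign_url("http://example.com/landing", "Launch", "Email", "Type1", "header-link"))
        print(campaign_url("http://example.com/landing", "Launch", "Email", "Type1", "footer-link"))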

    Read the article

  • Scripted SOA Diagnostic Dumps for PS6 (11.1.1.7)

    - by ShawnBailey
    When you upgrade to SOA Suite PS6 (11.1.1.7) you acquire a new set of Diagnostic Dumps in addition to what was available in PS5. With more than a dozen to choose from, and not wanting to run them one at a time, this blog post provides a sample script to collect them all quickly and hopefully easily. There are several ways that this collection could be scripted and this is just one example.

    What is included:
    - wlst.properties: Ant properties
    - build.xml
    - soa_diagnostic_script.py: Python script

    What is collected:
    - 5 contextual thread dumps at 5 second intervals
    - Diagnostic log entries from the server
    - WLS Image, which includes the domain configuration and WLS runtime data
    - Most of the SOA Diagnostic Dumps, including those for the BPEL runtime, Adapters and composite information from MDS

    Instructions:
    1. Download the package and extract it to a location of your choosing
    2. Update the properties file 'wlst.properties' to match your environment
    3. Run 'ant' (must be on the path)
    4. Collect the zip package containing the files (by default it will be in the script.output location)

    Properties reference:
    - oracle_common.common.bin: Location of oracle_common/common/bin
    - script.home: Location where you extracted the script and supporting files
    - script.output: Location where you want the collections written
    - username: User name for server connection
    - pwd: Password to connect to the server
    - url: T3 URL for server connection, '<host>:<port>'
    - dump_interval: Interval in seconds between thread dumps
    - log_interval: Duration in minutes that you want to go back for diagnostic log information

    Script Package

    Read the article

  • Tracking subdomains in the same profile as the main domain

    - by Osvaldo
    I have a site, let's call it http://www.example.com, with a non-Universal Google Analytics account. Now we have to add new functionality on a subdomain like https://subdomain.example.com as a micro site. On that subdomain the URLs will be something like https://subdomain.example.com?param1=foo&param2=bar. We can't change the requirements, as the main site and the mini-site use different CMSes/applications. This is strictly a Google Analytics question.

    We need to count pageviews and events that happen on that subdomain (with URLs like https://subdomain.example.com?param1=foo&param2=bar) as belonging to the main domain. So pageviews and events on https://subdomain.example.com?param1=foo&param2=bar need to be recorded as if they happened on http://www.example.com/path/to/whatever/I/want. Fortunately we have full control over JavaScript on the main domain site and on the subdomain site too.

    How can we make this work? Do we need to change the tracking code both on the main domain and on the subdomain? Do we need to reconfigure Google Analytics? Please note again that we do not want to create a new view for the subdomain. Both the mini-site and the main site should be in the same account, property and view.

    Read the article

  • Using pkexec policy to run out of /opt/

    - by liberavia
    I'm still trying to make it possible to run my app with root privileges. Therefore I created two policies to run the application via pkexec (one for /usr/bin and one for /opt/extras...) and added them to setup.py:

        data_files=[('/usr/share/polkit-1/actions',
                     ['data/com.ubuntu.pkexec.armorforge.policy']),
                    ('/usr/share/polkit-1/actions',
                     ['data/com.ubuntu.extras.pkexec.armorforge.policy']),
                    ('/usr/bin/', ['data/armorforge-pkexec'])]
        )

    Additionally I added a start script which uses pkexec for starting the application. It distinguishes between the two locations and is used in the Exec statement of the desktop file:

        #!/bin/sh
        if [ -f /opt/extras.ubuntu.com/armorforge/bin/armorforge ]; then
            pkexec "/opt/extras.ubuntu.com/armorforge/bin/armorforge" "$@"
        else
            pkexec `which armorforge` "$@"
        fi

    If I simply do a quickly package, everything works right. But if I package with the extras option, quickly package --extras, the Exec statement will be exchanged. Even if I try to simulate the pkexec call via armorforge-pkexec, it asks for a password and then returns this:

        andre@andre-desktop:~/Entwicklung/Ubuntu/armorforge$ armorforge-pkexec
        (armorforge:10108): GLib-GIO-ERROR **: Settings schema 'org.gnome.desktop.interface' is not installed
        Trace/breakpoint trap (core dumped)

    So OK, I could not trick the opt thing. How can I make sure that my application will run with root privileges out of /opt? I copied the way of using pkexec from synaptic. My application is for communicating with AppArmor, which currently has no D-Bus interface, so I need to write into the /etc/apparmor.d folder. How should I deal with the opt build which, as far as I understand, is required to submit my application to the Ubuntu Software Center? Thanks for any hints and/or links :-)

    Read the article

  • PyGTK/Quickly Add string to ListStore

    - by AllRadioisDead
    I'm trying to build an application that will prompt the user for a string, and then add that string to a scrolling ListView object using Quickly and PyGTK. I've been following this tutorial: http://developer.ubuntu.com/resources/app-developer-cookbook/multimedia/creating-a-simple-media-player/

    When I hit the add button, the prompt comes up properly and I'm able to enter the string. The column appears correctly but the list ends up being blank. What am I doing wrong?

        import gettext
        from gettext import gettext as _
        gettext.textdomain('spiderweb')

        from gi.repository import Gtk # pylint: disable=E0611
        import logging
        logger = logging.getLogger('spiderweb')

        from spiderweb_lib import Window
        from spiderweb.AboutSpiderwebDialog import AboutSpiderwebDialog
        from spiderweb.PreferencesSpiderwebDialog import PreferencesSpiderwebDialog
        from quickly import prompts
        from quickly.widgets.dictionary_grid import DictionaryGrid
        import os

        # See spiderweb_lib.Window.py for more details about how this class works
        class SpiderwebWindow(Window):
            __gtype_name__ = "SpiderwebWindow"

            def finish_initializing(self, builder): # pylint: disable=E1002
                """Set up the main window"""
                super(SpiderwebWindow, self).finish_initializing(builder)
                self.AboutDialog = AboutSpiderwebDialog
                self.PreferencesDialog = PreferencesSpiderwebDialog

                # Code for other initialization actions should be added here.
                self.supported_web_formats = [".net", ".html", ".com"]

            def on_addbutton_clicked(self, widget, data=None):
                #let the user choose a path with the directory chooser
                response, string = prompts.string("Enter a string", "Please enter string:", "Sample Text")

                #make certain the user said ok before working
                if response == Gtk.ResponseType.OK:
                    #make a list of the supported media files
                    media_files = Gtk.ListStore(str)

                    #add a dictionary to the list of media files
                    media_files.append({"String": string})

                    #remove any children in scrolled window
                    for c in self.ui.scrolledwindow1.get_children():
                        self.ui.scrolledwindow1.remove(c)

                    #create the grid with list of dictionaries
                    #only show the File column
                    media_grid = DictionaryGrid(media_files, keys=["File"])

                    #show the grid, and add it to the scrolled window
                    media_grid.show_all()
                    self.ui.scrolledwindow1.add(media_grid)
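
    A hedged guess at the cause, based on the cookbook tutorial this code follows: DictionaryGrid appears to expect a plain Python list of dictionaries rather than a Gtk.ListStore, and it only displays the keys you pass it, so appending under the key "String" while showing the "File" column would leave the grid empty. A possible rewrite of the handler body (untested fragment, same surrounding class assumed):

        # Untested sketch: use a plain list of dicts and make the displayed key
        # match the key used when appending.
        if response == Gtk.ResponseType.OK:
            media_files = []                        # a list of dictionaries, not a ListStore
            media_files.append({"String": string})  # key must match the grid's keys below

            for c in self.ui.scrolledwindow1.get_children():
                self.ui.scrolledwindow1.remove(c)

            media_grid = DictionaryGrid(media_files, keys=["String"])  # show the "String" column
            media_grid.show_all()
            self.ui.scrolledwindow1.add(media_grid)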

    Read the article

  • Request Removal of naked domain from Google Index

    - by Pedr
    I have a site which was temporarily available at both example.com and www.example.com. All traffic to example.com is now redirected to www.example.com; however, during the brief period that the site was available at the naked domain, Google indexed it. So Google now has two versions of every page indexed:

        www.example.com
        www.example.com/about_us
        www.example.com/products/something
        ...

    and

        example.com
        example.com/about_us
        example.com/products/something
        ...

    For obvious reasons, this is a bad situation, so how can I best resolve it? Should I request removal of these pages from the index? There is still content at these URLs, but they now redirect to the www subdomain equivalent. The site has many hundreds of pages, but the only way I can see to request removal is via the Remove outdated content screen in Webmaster Tools, one URL at a time. How can I request removal of an entire domain (i.e. the naked domain) without it affecting the true site located at the www subdomain? Is this the correct strategy, given that all the naked domain URLs now redirect to their www equivalents?

    Read the article

  • 404s on password protected content

    - by tjb1982
    I'm new to WordPress and SEO generally, but we've been running into problems with our site that don't seem to make sense to me. The problem is that our editor likes to schedule posts and/or mark them private until she is ready to make them public, but somehow Google is crawling these posts and getting 404s (because they are password protected). How does Google know they exist in the first place? I checked the sitemap.xml file and don't see a record of the post. One of the offending posts was marked public, but is scheduled for a future date. Could that have something to do with it?

    I've tried to Google the answer, and I came up with a good amount of reassurance that this won't hurt the site, but I'm still wondering how it's happening in the first place. It's hard because I don't know exactly what the editor's workflow is. Is it possible she's posting publicly first and then revising it to be private only after it's too late? Does anyone know how Google finds WordPress URLs it shouldn't have access to?

    Read the article

  • Google Analytics: Do unique events report as unique visits when triggered on pages other than your own domain?

    - by Jesse Gardner
    We just recently attached a SWF to our Brightcove video player to report various events back to Google Analytics. We're also tracking page views with a standard GA snippet on the page where the player is embedded. As I understand it, because a unique visit has already been recorded for the page, any event triggered by the player gets associated with that unique visitor. However, we allow people to embed the video player on other websites. All of the event data started pouring into the Events section as expected, but we noticed a dramatic uptick in unique visitors on the site (nearly double) while the pageview count stayed relatively unchanged. Disabling event tracking brought the traffic back down to average levels. I should also add that in the Pages section of Event Tracking we're seeing URLs for other sites where the player has been embedded, but this data isn't showing up in the Content section. It seems counterintuitive, but does GA count a fired event as a unique visit even if it's triggered from some place other than your website? If so, is there any way to trigger an event in the Events section without it reporting to the unique visitor count?

    Read the article

  • Installing Django on Windows

    - by Pranav
    Ever needed to install Django in a Microsoft Windows environment? Here is a quick-start guide to make that happen:

    1. Read through the official Django installation documentation; it might just save you a world of hurt down the road.
    2. Download Python for your version of Windows.
    3. Install Python; my preference here is to put it into the Program Files folder under a folder named Python<Version>.
    4. Add your chosen Python installation path to your Windows path environment variable. This is an optional step, however it allows you to just type python in the command line and have it fire up the Python interpreter. An easy way of adding it is going into Control Panel, System, and into the Environment Variables section.
    5. Download Django. You can either download a compressed file or, if you're comfortable with using version control, check it out from the Django Subversion repository.
    6. Create a folder named django under your <Python installation directory>\Lib\site-packages\ folder. Using my example above that would have been C:\Program Files\Python25\Lib\site-packages\.
    7. If you chose to download the compressed file, open it and extract the contents of the django folder into your newly created folder. If you'd prefer to check it out from Subversion, the normal checkout points are http://code.djangoproject.com/svn/django/trunk/ for the latest development copy, or a named release which you'll find under http://code.djangoproject.com/svn/django/tags/releases/.

    Done, you now have a working Django installation on Windows. At this point, it'd be pertinent to confirm that everything is working properly, which you can do by following the first Django tutorial. The tutorial will make mention of django-admin.py, a utility which offers some basic functionality to get you off the ground. The file is located in the bin folder under your Django installation directory. When you need to use it, you can either type in the full path to it or simply add that file path to your environment variables as well. Hope this helps!
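
    A quick way to confirm the installation before starting the tutorial is to import Django from the Python interpreter and print its version (a tiny sketch; the exact version string will depend on the release you installed):

        # Run inside the Python interpreter to confirm Django is importable.
        import django
        print(django.get_version())  # prints the installed Django version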

    Read the article

  • Using .htaccess, can you hide the true URL?

    - by Richard Muthwill
    So I have a web hotel with one main website, http://www.myrootsite.com/, and a few websites in subdirectories, in a folder called projects. I have domain names pointing to the subdirectories, but when holding the mouse over a link in those websites the URLs are shown as:

        http://www.myrootsite.com/projects/mysubsite/contact.html

    When I'm on mysubsite.com I want them to be shown as:

        http://www.mysubsite.com/contact.html

    I spoke to support for the web hotel and the guy said to try using .htaccess, but I'm not sure exactly how to do this. Thank you very much for your time!

    Edit, for more information: My website is http://www.example1.com/ and I also own http://www.example2.com/. All of example2.com's files are in example1.com/projects/example2/. When you visit example2.com, you'll notice all of the URLs point towards example1.com/projects/example2/, but I want them to point towards example2.com/. Can this be done? I hope this is enough info for you to go on :).

    Edit, for w3d: I go to the URL mysubsite.com and the browser shows the URL mysubsite.com. The service I'm using creates an iframe around myrootsite.com and uses the URL mysubsite.com. I just hate that in Firefox and Internet Explorer, holding the mouse over a link shows that the destination URL is: myrootsite.com/projects/mysubsite/...

    Read the article

  • Non-dynamic CMS [closed]

    - by user20457
    Some of the web sites I visit every day (news, sports, etc.), although their content changes very often (several times per day), always have URLs with an .html extension, which makes me think that the content has been generated once and then published as a static page, rather than generated on every request or even cached in memory.

    For example, the fictitious site "mysports.com" has a "futbol.html" page; then yesterday Messi got injured and they had another item to put on that page, so I presume they posted the new item in their CMS system and a publishing action was automatically triggered afterwards that recreated "futbol.html" on a CDN with the new item, probably discarding the oldest one. Then the ETag changes and clients get the new page when they try to access it. (The site is fictitious, but this is what I believe happened yesterday on the sports site I read.)

    This would fit the CQRS approach, and I presume they get huge performance from it. I know lots of CMSes (WP, Drupal, BlogEngine.NET, DNN, etc.), but I have never seen any of them able to do this, or at least I was not aware of this feature. What are these publish-to-static CMSes called? Which are the most well known? Cheers.
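
    The model being described is usually called static site generation: every edit triggers a publish step that rewrites the affected HTML files, and the web server or CDN only ever serves static files. A minimal, hypothetical sketch of such a publish hook in Python (file names and data structures are invented purely for illustration):

        # Hypothetical publish step: rebuild a static page whenever an item is added.
        # Nothing here is tied to a real CMS; it only illustrates "generate once, serve static".
        import html

        def publish_section(path, title, items, max_items=20):
            items = items[:max_items]  # discard the oldest entries beyond the limit
            body = "\n".join("<li>%s</li>" % html.escape(i) for i in items)
            page = "<html><head><title>%s</title></head><body><ul>\n%s\n</ul></body></html>" % (
                html.escape(title), body)
            with open(path, "w", encoding="utf-8") as f:
                f.write(page)  # the CDN/web server now serves this file as-is

        # Editor adds a news item; the hook regenerates the page immediately.
        news = ["Messi injured, out for two weeks", "Match report: 3-1 away win"]
        publish_section("futbol.html", "Futbol", news)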

    Read the article

  • Why do Google search results include pages disallowed in robots.txt?

    - by Ilmari Karonen
    I have some pages on my site that I want to keep search engines away from, so I disallowed them in my robots.txt file like this:

        User-Agent: *
        Disallow: /email

    Yet I recently noticed that Google still sometimes returns links to those pages in their search results. Why does this happen, and how can I stop it?

    Background: Several years ago, I made a simple web site for a club a relative of mine was involved in. They wanted to have e-mail links on their pages, so, to try and keep those e-mail addresses from ending up on too many spam lists, instead of using direct mailto: links I made those links point to a simple redirector / address-harvester trap script running on my own site. This script would return either a 301 redirect to the actual mailto: URL or, if it detected a suspicious access pattern, a page containing lots of random fake e-mail addresses and links to more such pages. To keep legitimate search bots away from the trap, I set up the robots.txt rule shown above, disallowing the entire space of both legit redirector links and trap pages.

    Just recently, however, one of the people in the club searched Google for their own name and was quite surprised when one of the results on the first page was a link to the redirector script, with a title consisting of their e-mail address followed by my name. Of course, they immediately e-mailed me and wanted to know how to get their address out of Google's index. I was quite surprised too, since I had no idea that Google would index such URLs at all, seemingly in violation of my robots.txt rule. I did manage to submit a removal request to Google, and it seems to have worked, but I'd like to know why and how Google is circumventing my robots.txt like that and how to make sure that none of the disallowed pages will show up in their search results.

    P.S. I actually found out a possible explanation and solution, which I'll post below, while preparing this question, but I thought I'd ask it anyway in case someone else might have the same problem. Please do feel free to post your own answers. I'd also be interested in knowing if other search engines do this too, and whether the same solutions work for them as well.

    Read the article

  • How can I redirect all files in a directory that doesn't conform to a certain filename structure?

    - by user18842
    I have a website where a previous developer updated several webpages. The issue is that the developer made each new webpage with a new filename, and deleted the old filenames. I've worked with .htaccess redirects for a few months now and have some understanding of the usage; however, I am stumped with this task. The old pages were named like so:

        www.domain.tld/subdir/file.html

    The new pages are named:

        www.domain.tld/subdir/file-new-name.html

    The first word of each new filename is the exact name of the old file, and all new filenames end with the same two words:

        www.domain.tld/subdir/file1-new-name.html
        www.domain.tld/subdir/file2-new-name.html
        www.domain.tld/subdir/file3-new-name.html
        etc.

    We also need to be able to access the URL www.domain.tld/subdir/. The new files have been indexed by Google (the old URLs cause 404s and need to be redirected to the new ones so that Google will be friendly), and the client wants to keep the new filenames as they are more descriptive. I've attempted the redirect in many different ways without success, but I'll show the one that stumps me the most:

        RewriteBase /
        RewriteCond %{THE_REQUEST} !^subdir/.*\-new\-name\.html
        RewriteCond %{THE_REQUEST} !^subdir/$
        RewriteRule ^subdir/(.*)\.html$ http://www.domain.tld/subdir/$1\-new\-name\.html [R=301,NC]

    When visiting www.domain.tld/subdir/file1.html in the browser, this causes a 403 Forbidden error with a URL like so:

        www.domain.tld/subdir/file1-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name-new-name.html

    I'm certain it's probably something simple that I'm overlooking; can someone please help me get a proper redirect? Thanks so much in advance!

    EDIT: I've also got all the old filenames saved in a separate document in case I need them, set up like the following example:

        (file(1|2|3|4|5)|page(1|2|3|4|5)|a(l(l|lowed|ter)|ccept)

    Read the article

  • Authenticate with Django 1.5

    - by gorjuce
    I'm currently testing Django 1.5 and a custom User model, but I have some problems. I've created a User class in my account app, which looks like:

        class User(AbstractBaseUser):
            email = models.EmailField()
            activation_key = models.CharField(max_length=255)
            is_active = models.BooleanField(default=False)
            is_admin = models.BooleanField(default=False)

            USERNAME_FIELD = 'email'

    I can correctly register a user, who is stored in my account_user table. Now, how can I log in? I've tried with:

        def login(request):
            form = AuthenticationForm()
            if request.method == 'POST':
                form = AuthenticationForm(request.POST)
                email = request.POST['username']
                password = request.POST['password']
                user = authenticate(username=email, password=password)
                if user is not None:
                    if user.is_active:
                        login(user)
                    else:
                        message = 'disabled account, check validation email'
                        return render(
                            request,
                            'account-login-failed.html',
                            {'message': message}
                        )
            return render(request, 'account-login.html', {'form': form})

    I can correctly register a new User. My forms.py, which contains my register form:

        class RegisterForm(forms.ModelForm):
            """ a form to create user"""
            password = forms.CharField(
                label="Password",
                widget=forms.PasswordInput()
            )
            password_confirm = forms.CharField(
                label="Password Repeat",
                widget=forms.PasswordInput()
            )

            class Meta:
                model = User
                exclude = ('last_login', 'activation_key')

            def clean_password_confirm(self):
                password = self.cleaned_data.get("password")
                password_confirm = self.cleaned_data.get("password_confirm")
                if password and password_confirm and password != password_confirm:
                    raise forms.ValidationError("Password don't math")
                return password_confirm

            def clean_email(self):
                if User.objects.filter(email__iexact=self.cleaned_data.get("email")):
                    raise forms.ValidationError("email already exists")
                return self.cleaned_data['email']

            def save(self):
                user = super(RegisterForm, self).save(commit=False)
                user.password = self.cleaned_data['password']
                user.activation_key = generate_sha1(user.email)
                user.save()
                return user

    My question is: why does authenticate() give me None? I know I'm trying to authenticate() with an email as the username, but isn't that one of the reasons to use a custom User model?
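
    One likely culprit, offered as a guess rather than a confirmed diagnosis: the form's save() stores the raw password (user.password = ...), while the default ModelBackend compares against a hashed password, so authenticate() returns None. A sketch of the usual fix, using set_password():

        # Sketch of the usual fix: hash the password when saving the new user.
        def save(self):
            user = super(RegisterForm, self).save(commit=False)
            user.set_password(self.cleaned_data['password'])  # stores a salted hash
            user.activation_key = generate_sha1(user.email)
            user.save()
            return user

    Note also that django.contrib.auth.login takes (request, user), and since the view itself is named login, the inner login(user) call may not be reaching the auth function at all; importing it under another name (e.g. auth_login) avoids the clash.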

    Read the article

  • Optimising news fetching

    - by aceBox
    I have a web scraper for scraping news from different sources in WP7. My current approach for doing this is:

    1. Load newspaper information from an XML file.
    2. Go to the specified sections and fetch the URLs of the news items.
    3. Go to each URL and fetch the headline, image and publisher.
    4. Display using an MVVM architecture on Windows Phone.

    The whole thing takes place asynchronously: as soon as a URL from a section of a newspaper is fetched it is added to the queue, and the second stage, fetching the headline, image etc., starts; as soon as this is fetched even for one article, it is displayed. Later on, as more articles are fetched, they are added to the list. For the fetching I am using a SmartThreadPool (http://www.codeproject.com/Articles/7933/Smart-Thread-Pool) for Windows Phone. My problem is that even fetching around 80 items (in total) from 9 publications takes more than a minute. How can I speed up the procedure? Note: I have a two-stage approach because many times the images are not available with the headlines and are only found in the article.
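
    The original code is C# on WP7, but the shape of the speed-up is language-independent: keep many downloads in flight at once and feed stage two as soon as stage one produces a URL, rather than waiting for whole stages to finish. A rough Python illustration of that two-stage pipeline (URLs and parsing functions are placeholders):

        # Illustration only: a two-stage fetch pipeline with a bounded worker pool.
        # Section pages yield article URLs; articles are fetched as soon as URLs appear.
        from concurrent.futures import ThreadPoolExecutor, as_completed
        import urllib.request

        def fetch(url, timeout=10):
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()

        def article_urls(section_html):
            return []  # placeholder: parse the section page for article links

        def parse_article(article_html):
            return {}  # placeholder: extract headline, image, publisher

        sections = ["http://example.com/sports", "http://example.com/politics"]  # placeholders

        with ThreadPoolExecutor(max_workers=16) as pool:
            section_jobs = [pool.submit(fetch, s) for s in sections]
            article_jobs = []
            for job in as_completed(section_jobs):               # stage 1 completes out of order
                for url in article_urls(job.result()):
                    article_jobs.append(pool.submit(fetch, url)) # stage 2 starts immediately
            for job in as_completed(article_jobs):
                item = parse_article(job.result())
                # hand `item` to the UI/list as soon as it is ready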

    Read the article

  • Working with Google Webmaster Tools

    - by com
    My first question is about Crawl Errors in Google Webmaster Tools. Crawl Errors is divided into a few sections, one of which is HTTP. I assume that all the broken links under HTTP were somehow found by the crawler and are not links from the sitemap. If they were found by scanning all the sitemap pages for links, why doesn't it mention the source page, like the Linked From column in the Sitemap section? And what is the meaning of Linked From there? I thought that if the section is named Sitemap, all its URLs should be taken from the sitemap, so why is there a Linked From column at all?

    The second question: what is the best way to treat on-site search? How come the search result pages are getting indexed? Because all the search result pages are getting indexed, I have too many pages under Linked From. What's the right practice?

    Question three: in order to improve response time in WMT, can I redirect all crawler requests to a designated free web server? Is this good practice?

    Question four: how should I treat the Google Analytics code (with parameters PageView, PageLoadTime) when a user requests a non-existing page; should I render the Google code or not? Right now I use the Google Analytics code on the common template page, so that every page, including non-existing pages with an error message, contains the Google Analytics code. It seems like it has an influence on WMT.

    Read the article

  • Can't connect nonlocally after 12.10 upgrade

    - by user101815
    I've just upgraded one of my systems from 12.04 to 12.10. Now I can't connect from that system to anything beyond my local network. Connections within the local network seem to work fine, and I can make non-local connections from other machines (like the one I'm asking this question from). I suspect that some routing information has been messed up, but I don't know where to look for it. It's not a nameserver problem: pinging outside sites by their IP addresses doesn't work either.

    I have another laptop next to this one, also running Kubuntu 12.10. On the one that can't connect, arp produces no output. On the other one, it produces:

        192.168.0.1 ether 00:23:69:fa:ce:ae C wlan0

    On the working machine, the output of netstat starts with some tcp entries. On the non-working one, those entries are absent. I asked this question on the Ubuntu forum but haven't gotten any answers there. One further complication: since the troublesome machine has no outside connection, it's extremely difficult to download anything to it. For what it's worth, "ping 8.8.8.8" produces "connect: Network is unreachable".

    Update: after a lot of fiddling, I have my external world back. I don't know what the key action was, but the first indication of progress was that "ping 8.8.8.8" worked. At that point I still didn't have a working nameserver, so external URLs didn't work. But I did this (based on an online post, of course):

        sudo dpkg-reconfigure resolvconf

    and answered Yes to all prompts. That did the trick!! Apparently my problem was unique, or close to it, since I couldn't find any online references to it: local net working, remote net not working, including explicit IP addresses. So I suppose that if no one else has this problem, no one cares about the solution!!

    Read the article

  • Site experiencing low traffic volume between 8AM and 4PM BST

    - by BizNuge
    There may be no definitive answer to this question, but I thought peer review of the problem might stimulate some ideas on the topic. We have a boutique sales site that is experiencing low volumes of traffic (both UK and international) between 8AM and 4PM BST. This seems sort of strange since our target audience for the site is UK based, and this would seem to be when people are awake and online. We are in contact with another boutique site in the same sector who don't experience this issue, so it seems kinda strange. Later on in the day we are getting traffic from the UK, as well as a fair amount of international traffic, so I'm at a loss to figure this one out.

    The site is fairly well optimised, including:

    - sitemap.xml
    - proper caching policies across the board
    - Google Merchant
    - Dublin Core microdata
    - HTML5
    - pretty URLs
    - meta and content are reviewed as an ongoing concern
    - decent sitelinks for direct queries through Google on the site name
    - a decent amount of inbound links
    - FB, Twitter, Google +1
    - Google Maps listing [verified]

    The site has been selling for ~4 months and is getting ~250 users per day. So I'm not entirely sure how to explain the mid-day dip in our figures... Any ideas at all would be useful. Cheers all!

    Read the article

  • How to build a good service layer in ASP.NET?

    - by Swippen
    I have looked through some questions and technologies for building a good service layer, but I have some questions that I need help with. First, some information about our requirements. We currently have a number of web applications that talk to each other in a spiderweb-looking way (all talking to each other in a confusing way via web services and database data). We want to change this so that all applications go through a service layer, where we can work more with caching, encapsulate common functionality, and more. We also want this layer to have a Web API so that 3rd-party clients can consume information from the service.

    The problem I see is that if we build the service layer with, say, MVC4 Web API, don't we need to communicate between the applications using the Web API, meaning we have to construct URLs and consume JSON/XML? That does not sound very efficient. I assume a better method would be working with entities and WCF to communicate between the applications, but then we might lose the Web API magic?

    So the question is whether there is a way to consume a service layer both as a Web API (JSON/XML) and as a more backend service layer with entities. If we are forced to use two different service layers we might have to duplicate some functionality, among other bad things. Hope the question is clear enough; please ask if you need any more information.

    Read the article

  • Role based access to resources for a RESTful service

    - by mutex
    I'm still wrapping my head around REST, but I wonder if someone can help with any suggestions or approaches to role-based access control for a RESTful service, particularly from the point of view of securing the data and how the URLs might look. It's probably best to consider an example. Say I have a REST service for Customers, and want to split the users of this REST service into Admin, Editor and Reader roles:

    - Admins can change all attributes of a Customer resource
    - Editors can change only some
    - Readers can only view them

    Access control rights are assigned to the Customer entities individually. So, for example, a user of the service might have Admin rights to Customers 1, 2 and 3, Editor access to 4 and 5, and Reader access to 7, 8 and 9.

    Now consider the user calling the service. What is a good way to separate the list of Customers for the current user? GET /Customer might return a list of all customers that the current user has Admin/Editor/Reader access to, but then for each Customer the consumer would need an indication of what role they have. Or would it be "better" to have something like GET /Customer/Admin, returning all customers the current user has Admin access to? I'm just looking for some high-level pointers or reading on a decent way to secure and filter the resources based on the roles of the current user.
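
    One way to picture the first option (a single GET /Customer that carries the caller's role per resource) is a small filtering step on the server. A language-agnostic idea, sketched here in Python with an in-memory role map purely for illustration (the data and function names are invented):

        # Illustration of "GET /Customer returns each resource tagged with the caller's role".
        # The role map would normally live in a database; here it is hard-coded.
        ROLES = {  # user -> {customer_id: role}
            "alice": {1: "admin", 2: "admin", 3: "admin", 4: "editor", 5: "editor",
                      7: "reader", 8: "reader", 9: "reader"},
        }

        CUSTOMERS = {i: {"id": i, "name": "Customer %d" % i} for i in range(1, 10)}

        def list_customers(current_user):
            """Return only the customers the user can see, each tagged with the user's role."""
            grants = ROLES.get(current_user, {})
            return [dict(CUSTOMERS[cid], role=role)
                    for cid, role in sorted(grants.items()) if cid in CUSTOMERS]

        def list_customers_by_role(current_user, role):
            """The GET /Customer/Admin style: filter down to a single role."""
            return [c for c in list_customers(current_user) if c["role"] == role]

        print(list_customers("alice")[:2])
        print(list_customers_by_role("alice", "admin"))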

    Read the article

  • How should I structure a site with content dependent on visitor type (not user)?

    - by Pedr
    I have a website that displays different content depending on two selections made by a visitor: whether they are a teacher or a student, and their learning level (from 4 options). Everything is public and they don't need to authenticate to access the content. Depending on their selection, different content is displayed across the whole site, other than a contact and an about page. The tone of the language changes depending on whether the visitor is a student or a teacher, and the materials available on each page also change depending on the learning level; however, in all cases the structure of the site is identical.

    Currently I'm using a cookie to store the visitor's selections and render different content appropriately, so I have a single set of URLs which display different content depending on the cookie, with one of the permutations as the default. I appreciate this is far from ideal, but what is the better option? Would I be better off using a distinguishing segment for each selection, for example:

        http://example.com/teacher/lv3/resources/activities
        http://example.com/teacher/lv4/resources/activities
        http://example.com/student/lv4/resources/activities
        etc.

    What is the most sensible way to handle this situation?

    Read the article

  • Website migration from WordPress to a static site and doing 301 redirects without access to existing site?

    - by user3114468
    I'm currently working on a project that is hosted on WordPress and is being migrated to a static site. However, I presently do not have access to the existing site, as it's managed by another developer. The concern is not the lack of access to content, since the site owner has generated very little content (the reason for the migration) and we were able to copy it manually. Rather, the concern is doing the 301 redirects. The site will not change domains, but URLs will, for example from example.com/?page_id=3 to example.com/services. To add to this, the site is migrating to a new server using the same domain name.

    I thought maybe this could be handled by editing permalinks prior to migration, and WordPress would update them automatically if configured to write to the server. But if it is not configured that way (which is not always the case), I do not have the .htaccess to fix it if there are suddenly 404 errors for every page. I could really use some help on the best procedure to follow in this case. This is the first migration project I've worked on.

    Read the article
