Search Results

Search found 4147 results on 166 pages for 'meta keywords'.

  • tastypie posting and full example

    - by Justin M
    Is there a full tastypie django example site and setup available for download? I have been wrestling with wrapping my head around it all day. I have the following code. Basically, I have a POST form that is handled with ajax. When I click "submit" on my form and the ajax request runs, the call returns "POST http://192.168.1.110:8000/api/private/client_basic_info/ 404 (NOT FOUND)" I have the URL configured alright, I think. I can access http://192.168.1.110:8000/api/private/client_basic_info/?format=json just fine. Am I missing some settings or making some fundamental errors in my methods? My intent is that each user can fill out/modify one and only one "client basic information" form/model. a page: {% extends "layout-column-100.html" %} {% load uni_form_tags sekizai_tags %} {% block title %}Basic Information{% endblock %} {% block main_content %} {% addtoblock "js" %} <script language="JavaScript"> $(document).ready( function() { $('#client_basic_info_form').submit(function (e) { form = $(this) form.find('span.error-message, span.success-message').remove() form.find('.invalid').removeClass('invalid') form.find('input[type="submit"]').attr('disabled', 'disabled') e.preventDefault(); var values = {} $.each($(this).serializeArray(), function(i, field) { values[field.name] = field.value; }) $.ajax({ type: 'POST', contentType: 'application/json', data: JSON.stringify(values), dataType: 'json', processData: false, url: '/api/private/client_basic_info/', success: function(data, status, jqXHR) { form.find('input[type="submit"]') .after('<span class="success-message">Saved successfully!</span>') .removeAttr('disabled') }, error: function(jqXHR, textStatus, errorThrown) { console.log(jqXHR) console.log(textStatus) console.log(errorThrown) var errors = JSON.parse(jqXHR.responseText) for (field in errors) { var field_error = errors[field][0] $('#id_' + field).addClass('invalid') .after('<span class="error-message">'+ field_error +'</span>') } form.find('input[type="submit"]').removeAttr('disabled') } }) // end $.ajax() }) // end $('#client_basic_info_form').submit() }) // end $(document).ready() </script> {% endaddtoblock %} {% uni_form form form.helper %} {% endblock %} resources from residence.models import ClientBasicInfo from residence.forms.profiler import ClientBasicInfoForm from tastypie import fields from tastypie.resources import ModelResource from tastypie.authentication import BasicAuthentication from tastypie.authorization import DjangoAuthorization, Authorization from tastypie.validation import FormValidation from tastypie.resources import ModelResource, ALL, ALL_WITH_RELATIONS from django.core.urlresolvers import reverse from django.contrib.auth.models import User class UserResource(ModelResource): class Meta: queryset = User.objects.all() resource_name = 'user' fields = ['username'] filtering = { 'username': ALL, } include_resource_uri = False authentication = BasicAuthentication() authorization = DjangoAuthorization() def dehydrate(self, bundle): forms_incomplete = [] if ClientBasicInfo.objects.filter(user=bundle.request.user).count() < 1: forms_incomplete.append({'name': 'Basic Information', 'url': reverse('client_basic_info')}) bundle.data['forms_incomplete'] = forms_incomplete return bundle class ClientBasicInfoResource(ModelResource): user = fields.ForeignKey(UserResource, 'user') class Meta: authentication = BasicAuthentication() authorization = DjangoAuthorization() include_resource_uri = False queryset = ClientBasicInfo.objects.all() resource_name = 'client_basic_info' 
validation = FormValidation(form_class=ClientBasicInfoForm) list_allowed_methods = ['get', 'post', ] detail_allowed_methods = ['get', 'post', 'put', 'delete'] Edit: My resources file is now: from residence.models import ClientBasicInfo from residence.forms.profiler import ClientBasicInfoForm from tastypie import fields from tastypie.resources import ModelResource from tastypie.authentication import BasicAuthentication from tastypie.authorization import DjangoAuthorization, Authorization from tastypie.validation import FormValidation from tastypie.resources import ModelResource, ALL, ALL_WITH_RELATIONS from django.core.urlresolvers import reverse from django.contrib.auth.models import User class UserResource(ModelResource): class Meta: queryset = User.objects.all() resource_name = 'user' fields = ['username'] filtering = { 'username': ALL, } include_resource_uri = False authentication = BasicAuthentication() authorization = DjangoAuthorization() #def apply_authorization_limits(self, request, object_list): # return object_list.filter(username=request.user) def dehydrate(self, bundle): forms_incomplete = [] if ClientBasicInfo.objects.filter(user=bundle.request.user).count() < 1: forms_incomplete.append({'name': 'Basic Information', 'url': reverse('client_basic_info')}) bundle.data['forms_incomplete'] = forms_incomplete return bundle class ClientBasicInfoResource(ModelResource): # user = fields.ForeignKey(UserResource, 'user') class Meta: authentication = BasicAuthentication() authorization = DjangoAuthorization() include_resource_uri = False queryset = ClientBasicInfo.objects.all() resource_name = 'client_basic_info' validation = FormValidation(form_class=ClientBasicInfoForm) #list_allowed_methods = ['get', 'post', ] #detail_allowed_methods = ['get', 'post', 'put', 'delete'] def apply_authorization_limits(self, request, object_list): return object_list.filter(user=request.user) I made the user field of the ClientBasicInfo nullable and the POST seems to work. I want to try updating the entry now. Would that just be appending the pk to the ajax url? For example /api/private/client_basic_info/21/? When I submit that form I get a 501 NOT IMPLEMENTED message. What exactly haven't I implemented? I am subclassing ModelResource, which should have all the ORM-related functions implemented according to the docs.
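
    One possible reading of the 501 (offered here as an assumption, not a confirmed diagnosis): Tastypie implements POST for the list endpoint but not for a detail URI such as /client_basic_info/21/, so an update is normally sent as a PUT to the detail endpoint instead. A minimal sketch of that request, with made-up field names and credentials:

        import json
        import requests  # any HTTP client works; requests is used here for brevity

        API_ROOT = "http://192.168.1.110:8000/api/private/"
        payload = {"first_name": "Justin", "phone": "555-0100"}  # hypothetical fields

        response = requests.put(
            API_ROOT + "client_basic_info/21/",            # detail URI includes the pk
            data=json.dumps(payload),
            headers={"Content-Type": "application/json"},
            auth=("username", "password"),                 # BasicAuthentication credentials
        )
        print(response.status_code)                        # expect a 2xx on success

    In the jQuery handler above, this would mean switching type: 'POST' to type: 'PUT' and pointing url at the detail URI whenever an existing record is being edited.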

    Read the article

  • Amazon Product API: "Your request is missing a required parameter combination" on Blended ItemSearch

    - by Daniel Schaffer
    I'm having some problems trying to do an ItemSearch on the Blended index using the Amazon Product API. According to the documentation, Blended requests cannot specify the MerchantId parameter - and indeed, if I try to include it I get an error telling me so. However, when I don't include it, I get an error telling me that my request is missing a required parameter combination and that a valid combination includes MerchantId... what the hell? Here's the XML response: <Items xmlns="http://webservices.amazon.com/AWSECommerceService/2005-10-05"> <Request> <IsValid>False</IsValid> <ItemSearchRequest> <Availability>Available</Availability> <Condition>All</Condition> <Keywords> home theater pc and other geekery</Keywords> <ResponseGroup>Similarities</ResponseGroup> <ResponseGroup>SalesRank</ResponseGroup> <ResponseGroup>OfferSummary</ResponseGroup> <ResponseGroup>Small</ResponseGroup> <ResponseGroup>Images</ResponseGroup> <SearchIndex>Blended</SearchIndex> </ItemSearchRequest> <Errors> <Error> <Code>AWS.MissingParameterCombination</Code> <Message>Your request is missing a required parameter combination. Required parameter combinations include MerchantId, Availability.</Message> </Error> </Errors> </Request> </Items> The failing requests are being sent as part of batches with other requests that are succeeding. I'm using REST to send my requests, so here's an example of a request: http://ecs.amazonaws.com/onca/xml?AWSAccessKeyId=-------------& ItemSearch.1.Keywords=Mates%20of%20State& ItemSearch.1.MerchantId=Amazon& ItemSearch.1.SearchIndex=DVD& ItemSearch.2.Keywords=teaching%20Lily%20various%20computer%20related%20skills& ItemSearch.2.SearchIndex=Blended& ItemSearch.Shared.Availability=Available& ItemSearch.Shared.Condition=All& ItemSearch.Shared.ResponseGroup=Small%2CSalesRank%2CImages%2COfferSummary%2CSimilarities& Operation=ItemSearch%2CSimilarityLookup& Service=AWSECommerceService& SimilarityLookup.1.ItemId=B000FNNHZ2& SimilarityLookup.2.ItemId=B000EQ5UPU& SimilarityLookup.Shared.Availability=Available& SimilarityLookup.Shared.Condition=All& SimilarityLookup.Shared.MerchantId=Amazon& SimilarityLookup.Shared.ResponseGroup=Small%2CSalesRank%2CImages%2COfferSummary& Timestamp=2010-04-02T17%3A18%3A05Z& Signature=---------------- Any ideas as to what I'm doing wrong?
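
    One hedged reading of the error (an assumption, not confirmed Amazon behaviour): the ItemSearch.Shared.Availability and ItemSearch.Shared.Condition values propagate into the Blended request, and Blended rejects them just as it rejects MerchantId, so moving them onto the DVD request alone may clear the error. A sketch of rebuilding the query string that way (unsigned, placeholder key):

        from urllib.parse import urlencode

        params = {
            "Service": "AWSECommerceService",
            "Operation": "ItemSearch,SimilarityLookup",
            "AWSAccessKeyId": "-------------",
            "Timestamp": "2010-04-02T17:18:05Z",
            # parameters that only the DVD search should carry
            "ItemSearch.1.Keywords": "Mates of State",
            "ItemSearch.1.SearchIndex": "DVD",
            "ItemSearch.1.MerchantId": "Amazon",
            "ItemSearch.1.Availability": "Available",
            "ItemSearch.1.Condition": "All",
            # the Blended search carries no MerchantId/Availability/Condition at all
            "ItemSearch.2.Keywords": "teaching Lily various computer related skills",
            "ItemSearch.2.SearchIndex": "Blended",
            "ItemSearch.Shared.ResponseGroup": "Small,SalesRank,Images,OfferSummary,Similarities",
        }

        print("http://ecs.amazonaws.com/onca/xml?" + urlencode(params))
        # a real request still needs the SimilarityLookup.* parameters and a Signature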

    Read the article

  • A Look at the GridView's New Sorting Styles in ASP.NET 4.0

    Like every Web control in the ASP.NET toolbox, the GridView includes a variety of style-related properties, including CssClass, Font, ForeColor, BackColor, Width, Height, and so on. The GridView also includes style properties that apply to certain classes of rows in the grid, such as RowStyle, AlternatingRowStyle, HeaderStyle, and PagerStyle. Each of these meta-style properties offers the standard style properties (CssClass, Font, etc.) as subproperties. In ASP.NET 4.0, Microsoft added four new style properties to the GridView control: SortedAscendingHeaderStyle, SortedAscendingCellStyle, SortedDescendingHeaderStyle, and SortedDescendingCellStyle. These four properties are meta-style properties like RowStyle and HeaderStyle, but apply to a column of cells rather than a row. These properties only apply when the GridView is sorted - if the grid's data is sorted in ascending order then the SortedAscendingHeaderStyle and SortedAscendingCellStyle properties define the styles for the column the data is sorted by. The SortedDescendingHeaderStyle and SortedDescendingCellStyle properties apply to the sorted column when the results are sorted in descending order. These four new properties make it easier to customize the appearance of the column by which the data is sorted. Using these properties along with a touch of Cascading Style Sheets (CSS) it is possible to add up and down arrows to the sorted column's header to indicate whether the data is sorted in ascending or descending order. Likewise, these properties can be used to shade the sorted column or make its text bold. This article shows how to use these four new properties to style the sorted column. Read on to learn more!

    Read the article

  • 'rman' cheat-sheet and rlwrap completion

    - by katsumii
    I started using 'rlwrap' some months ago, as one of my colleagues does: bash-like features in sqlplus, rman and other Oracle command line tools (Oracle Luxembourg Core Tech' Blog by Gilles Haro). One can find a specific Oracle extension for databases 9i, 10g and 11g (keyword textfile) over here. This saves you the need to create the .oracle_keywords file yourself. There is an 'rman' keyword file in the link above. I experimented a little and found some missing keywords, which are: MAXCORRUPTION PRIMARY NOCFAU VIRTUAL COMPRESSION FOREIGN. With these words added, 'rman' works like this: $ rlwrap -f ~/rman $ORACLE_HOME/bin/rman Recovery Manager: Release 11.2.0.3.0 - Production on Mon Dec 3 02:56:04 2012 Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved. RMAN> <-- Hit TAB Display all 211 possibilities? (y or n) As you can guess, this completion is not context-aware. I found these missing words by creating a kind of 'cheat sheet' for rman with a script like the one below. This sheet contains a list of verbs and first operands. I uploaded it here so one can create a coffee cup with a lot of esoteric words printed on it :) validWords() { sed -n 's/^RMAN-01009: syntax error: found "identifier": expecting one of: //p' \ | sed -r 's/double-quoted-string, single-quoted-string/Some String/;s/, /" "/g;s/""//' } echo "Bogus" | rman | validWords > /tmp/rman.$$ for i in $(cat /tmp/rman.$$) do i=$(echo $i | tr -d '"') echo "#### $i ####" echo "$i Bogus" | rman | validWords done One can find more keywords in the document here.
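
    The same trick can be scripted outside the shell; the sketch below drives rman from Python the way the script above does, feeding it a bogus command and harvesting the keywords that the RMAN-01009 message says it was expecting. The error-message format is taken from the post and not re-verified.

        import re
        import subprocess

        def expected_words(command):
            """Run one bogus command through rman and return the suggested keywords."""
            out = subprocess.run(["rman"], input=command + "\n",
                                 capture_output=True, text=True).stdout
            match = re.search(r"expecting one of: (.*)", out)
            if not match:
                return []
            return [w.strip(' "') for w in match.group(1).split(",") if w.strip(' "')]

        for verb in expected_words("Bogus"):
            print("####", verb, "####")
            print(" ".join(expected_words(verb + " Bogus")))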

    Read the article

  • flickr, other account types not appearing in online-accounts

    - by Fen
    Using Shotwell, I discovered that to publish to Flickr I need to set up an online account. But the online-accounts system settings only has support for Google, Facebook, Windows Live, Microsoft Exchange and Enterprise Login (Kerberos). How do I add account types? These appear to be properly installed (dpkg-reconfigure returns silently): gnome-control-center-signon is already the newest version. account-plugin-yahoo is already the newest version. account-plugin-flickr is already the newest version. Here's the config file (I think): > cat /usr/share/applications/gnome-online-accounts-panel.desktop [Desktop Entry] Name=Online Accounts Comment=Manage online accounts Exec=gnome-control-center online-accounts Icon=goa-panel Terminal=false Type=Application StartupNotify=true Categories=GNOME;GTK;Settings;DesktopSettings;X-GNOME-Settings-Panel;X-GNOME-PersonalSettings; OnlyShowIn=GNOME;XFCE X-GNOME-Bugzilla-Bugzilla=GNOME X-GNOME-Bugzilla-Product=gnome-control-center X-GNOME-Bugzilla-Component=Online Accounts X-GNOME-Bugzilla-Version=3.4.2 X-GNOME-Settings-Panel=online-accounts # Translators: those are keywords for the online-accounts control-center panel Keywords=Google;Facebook;Flickr;Twitter;Yahoo;Web;Online;Chat;Calendar;Mail;Contact; X-Ubuntu-Gettext-Domain=gnome-control-center-2.0 History: Started out with Ubuntu (64-bit), then in 12.04 installed xubuntu-desktop and have been using that. Upgraded to 12.10.

    Read the article

  • shared hosting: ssl domain receives all server's ssl requests, google gets it wrong

    - by pixeline
    This website is hosted on a shared server. I've recently had my hosting provider secure our website using SSL (https://domain.com instead of http://domain.com). Ever since then, all https requests sent to the server are redirected to my website: https://otherdomain.com leads to a warning, then, if you continue, to my website. OK, my fault, I should have known SSL effectively means one site per IP, otherwise this sort of thing can happen. But... Google Search results for my website's target keywords are now displaying these websites above my own, even though they have nothing even remotely related to the target keywords! Already done: provided a canonical URL in the HTML page, and told the server manager about the problem; he tells me it's normal but will look for a solution. That was one week ago, no answer since. I have no idea why Google is indexing these https URLs: I thought someone would have to submit them, or have them inside an HTML page, for Google to actually index dummy https domains, but I see no reason why someone would do that. Any suggestions on how to resolve this situation? Go-live is in one week and SEO looks really bad because of this.

    Read the article

  • Does JAXP natively parse HTML?

    - by ikmac
    So, I whip up a quick test case in Java 7 to grab a couple of elements from random URIs, and see if the built-in parsing stuff will do what I need. Here's the basic setup (with exception handling etc omitted): DocumentBuilderFactory dbfac = DocumentBuilderFactory.newInstance(); DocumentBuilder dbuild = dbfac.newDocumentBuilder(); Document doc = dbuild.parse("uri-goes-here"); With no error handler installed, the parse method throws exceptions on fatal parse errors. When getting the standard Apache 2.2 directory index page from a local server: a SAXParseException with the message White spaces are required between publicId and systemId. The doctype looks ok to me, whitespace and all. When getting a page off a Drupal 7 generated site, it never finishes. The parse method seems to hang. No exceptions thrown, never returns. When getting http://www.oracle.com, a SAXParseException with the message The element type "meta" must be terminated by the matching end-tag "</meta>". So it would appear that the default setup I've used here doesn't handle HTML, only strictly written XML. My question is: can JAXP be used out-of-the-box from openJDK 7 to parse HTML from the wild (without insane gesticulations), or am I better off looking for an HTML 5 parser? PS this is for something I may not open-source, so licensing is also an issue :(

    Read the article

  • Building a common syntax and scoping framework.

    - by Ben DeMott
    Hello fellow programmers, I was discussing a project the other day with a colleague of mine and I was curious to see what others had to say or if such a thing already existed. Background There are many programming languages. There are many IDEs and source editors that highlight and edit source code. Following perfectly and exactly the rules of a language to present auto-complete options and understand scopes in the code is rather complex. This task is complex enough that most IDEs implement different source-editors as plugins that often re-implement the same features over and over but in a different way (NetBeans). From what I can tell, most IDEs and source editors re-implement parsers that use regular expressions, or some meta-syntax (e.g. Backus-Naur Form) to describe the language's grammar generically. These parsers are implemented over and over and over again. Question Has anyone attempted to unify or describe a set of features through an API and have a consistent interface to parsing various programming languages and dialects? I'm not describing an IDE - but a consistent API for any program to use to parse and obtain meta-information from the source code. I realize various programming languages offer many different features which are difficult to 'abstract' into a set of features, but I feel this would be a worthwhile venture. It seems to me that this could possibly allow the authors of interpreters to help maintain a central grammar interpreter for their language. The Python Foundation could maintain the Python grammar API, ANSI the C grammar API, Oracle the Java grammar API, etc. Example usage If this API existed, code documentation generators could theoretically work across all dialects and languages to some level. It wouldn't matter if your project used 5 different languages; a single application could document all of them and the comments and doc-tags within. Has anyone attempted this comprehensively?
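
    To make the idea concrete, here is a purely illustrative sketch of what such a language-neutral API might look like; none of these names exist in a real library, they only show the shape of the "central grammar interpreter" being described.

        from dataclasses import dataclass, field
        from typing import List, Optional, Protocol

        @dataclass
        class Symbol:
            name: str
            kind: str                      # "function", "class", "variable", ...
            line: int
            doc: Optional[str] = None

        @dataclass
        class Scope:
            symbols: List[Symbol] = field(default_factory=list)
            children: List["Scope"] = field(default_factory=list)

        class GrammarProvider(Protocol):
            """What e.g. the Python or Java maintainers would publish for their language."""
            language: str
            def parse(self, source: str) -> Scope: ...
            def complete(self, source: str, offset: int) -> List[Symbol]: ...

        def document(provider: GrammarProvider, source: str) -> str:
            """A doc generator written once, usable with any registered language."""
            scope = provider.parse(source)
            return "\n".join("%s %s: %s" % (s.kind, s.name, s.doc or "(undocumented)")
                             for s in scope.symbols)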

    Read the article

  • Designing a Content-Based ETL Process with .NET and SFDC

    - by Patrick
    As my firm makes the transition to using SFDC as our main operational system, we've spun together a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've had the need for pseudo-ETL applications to be implemented that are able to extract metadata from the documents our analysts generate internally (most are industry-standard PDFs, XML, or MS Office formats) and place them in networked "queue" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata. I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC Org data). I've been able to create [console] applications in C# that work pretty well, and they're usually structured something like this: static void Main() { // Load parameters from app.config. // Get documents from queue. var files = someInterface.GetFiles(someFilterOrRegexPattern); foreach (var file in files) { // Extract metadata from the file. // Validate some attributes of the file; add any validation errors to an in-memory // structure (e.g. List<ValidationErrors>). if (isValid) { var fileData = File.ReadAllBytes(file); // Upload using some wrapper for an ORM or DAL someInterface.Upload(fileData, meta.Param1, meta.Param2, ...); } else { // Bounce the file } } // Report any validation errors (via message bus or SMTP or some such). } And that's pretty much it. Most of the time I wrap all these operations in a "Worker" class that takes the needed interfaces as constructor parameters. This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# Console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!

    Read the article

  • How do I compile and install a Wikipedia lens?

    - by user49523
    I read a tutorial about how to compile and install a Wikipedia lens, but it didn't work. The tutorial sounds easy - I just copied and pasted into the file that I was supposed to edit. I have tried a few times and here are 2 edits. Edit 1: import logging import optparse import gettext from gettext import gettext as _ gettext.textdomain('wikipedia') from singlet.lens import SingleScopeLens, IconViewCategory, ListViewCategory from wikipedia import wikipediaconfig import urllib2 import simplejson class WikipediaLens(SingleScopeLens): wiki = "http://en.wikipedia.org" def wikipedia_query(self,search): try: search = search.replace(" ", "|") url = ("%s/w/api.php?action=opensearch&limit=25&format=json&search=%s" % (self.wiki, search)) results = simplejson.loads(urllib2.urlopen(url).read()) print "Searching Wikipedia" return results[1] except (IOError, KeyError, urllib2.URLError, urllib2.HTTPError, simplejson.JSONDecodeError): print "Error : Unable to search Wikipedia" return [] class Meta: name = 'Wikipedia' description = 'Wikipedia Lens' search_hint = 'Search Wikipedia' icon = 'wikipedia.svg' search_on_blank=True # TODO: Add your categories articles_category = ListViewCategory("Articles", "dialog-information-symbolic") def search(self, search, results): for article in self.wikipedia_query(search): results.append("%s/wiki/%s" % (self.wiki, article), "http://upload.wikimedia.org/wikipedia/commons/6/63/Wikipedia-logo.png", self.articles_category, "text/html", article, "Wikipedia Article", "%s/wiki/%s" % (self.wiki, article)) pass Edit 2: import urllib2 import simplejson import logging import optparse import gettext from gettext import gettext as _ gettext.textdomain('wikipediaa') from singlet.lens import SingleScopeLens, IconViewCategory, ListViewCategory from wikipediaa import wikipediaaconfig class WikipediaaLens(SingleScopeLens): wiki = "http://en.wikipedia.org" def wikipedia_query(self,search): try: search = search.replace(" ", "|") url = ("%s/w/api.php?action=opensearch&limit=25&format=json&search=%s" % (self.wiki, search)) results = simplejson.loads(urllib2.urlopen(url).read()) print "Searching Wikipedia" return results[1] except (IOError, KeyError, urllib2.URLError, urllib2.HTTPError, simplejson.JSONDecodeError): print "Error : Unable to search Wikipedia" return [] def search(self, search, results): for article in self.wikipedia_query(search): results.append("%s/wiki/%s" % (self.wiki, article), "http://upload.wikimedia.org/wikipedia/commons/6/63/Wikipedia-logo.png", self.articles_category, "text/html", article, "Wikipedia Article", "%s/wiki/%s" % (self.wiki, article)) pass class Meta: name = 'Wikipedia' description = 'Wikipedia Lens' search_hint = 'Search Wikipedia' icon = 'wikipedia.svg' search_on_blank=True # TODO: Add your categories articles_category = ListViewCategory("Articles", "dialog-information-symbolic") def search(self, search, results): # TODO: Add your search results results.append('https://wiki.ubuntu.com/Unity/Lenses/Singlet', 'ubuntu-logo', self.example_category, "text/html", 'Learn More', 'Find out how to write your Unity Lens', 'https://wiki.ubuntu.com/Unity/Lenses/Singlet') pass So... what can I change in the edit? (If anybody can give me the entire edited file, I will appreciate it.)
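
    For reference, a hedged sketch (assuming the singlet API exactly as used in the edits above, and not tested against the lens tooling) of how the same pieces are usually arranged: the Meta class, the category, and a single search() method all live inside the one lens class.

        import urllib2
        import simplejson
        from singlet.lens import SingleScopeLens, ListViewCategory

        class WikipediaLens(SingleScopeLens):

            class Meta:
                name = 'Wikipedia'
                description = 'Wikipedia Lens'
                search_hint = 'Search Wikipedia'
                icon = 'wikipedia.svg'
                search_on_blank = True

            # category defined once, at class level, so search() can refer to it
            articles_category = ListViewCategory("Articles", "dialog-information-symbolic")
            wiki = "http://en.wikipedia.org"

            def wikipedia_query(self, search):
                try:
                    url = ("%s/w/api.php?action=opensearch&limit=25&format=json&search=%s"
                           % (self.wiki, search.replace(" ", "|")))
                    return simplejson.loads(urllib2.urlopen(url).read())[1]
                except (IOError, KeyError, urllib2.URLError, urllib2.HTTPError):
                    return []

            # exactly one search() method; in "Edit 2" above the second definition
            # silently replaces the first, which is why no articles come back
            def search(self, search, results):
                for article in self.wikipedia_query(search):
                    results.append("%s/wiki/%s" % (self.wiki, article),
                                   "http://upload.wikimedia.org/wikipedia/commons/6/63/Wikipedia-logo.png",
                                   self.articles_category,
                                   "text/html",
                                   article,
                                   "Wikipedia Article",
                                   "%s/wiki/%s" % (self.wiki, article))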

    Read the article

  • Highly SEO optimised forum posts

    - by Tom Gullen
    Given the following forum post: Basics of how internals of Construct work I've used GameMaker in the past. And I know some C++ and have used a few 3d engines with it. I have also looked at Unity, though I didn't get too much into it. So I know my way around programming etc... My question is, how does construct work internally? I know it allows python scripting, which itself is "technically" interpreted, though python is pretty fast as far as being interpreted goes. But what about the rest? Is the executable that gets cre... The forum software will take the first 150 chars of the first post as the page meta description, and the title will be the thread title. All OK. So in Google it will appear as: Basics of how internals of Construct work I've used GameMaker in the past. And I know some C++ and have used a few 3d engines with it. I have also looked at Unity, though I didn't get too much... http://www.domain.com/forum/basics-of-how-internals-of-construct-work.html Now the problem (not so much with this thread, but with other ones) is that the first 150 chars don't always make the best meta description. Is it worth my time to cherry-pick threads and manually set their description/title tags so they read like: Internal workings of Construct 2 Events aren't converted to any other language. The runtime is a standalone compiled EXE application, which is optimised and actually very fast. Your events... http://www.domain.com/forum/basics-of-how-internals-of-construct-work.html The H1 on the page is still the original title, but we have overridden the title and description to look friendlier in search results. Is this advantageous, setting aside the obvious time cost?
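
    As a toy illustration of the trade-off, the snippet below builds a description the way the forum software is described as doing it, but trims back to a word boundary so the automatic text reads a little less abruptly; it is not the forum's actual logic, just a sketch.

        def meta_description(post_body, limit=150):
            """First ~`limit` characters of the post, cut at a word boundary."""
            text = " ".join(post_body.split())      # collapse whitespace and newlines
            if len(text) <= limit:
                return text
            cut = text[:limit]
            space = cut.rfind(" ")
            return (cut if space == -1 else cut[:space]) + "..."

        print(meta_description("I've used GameMaker in the past. And I know some C++ "
                               "and have used a few 3d engines with it."))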

    Read the article

  • Developing Ext JS Charts in NetBeans IDE

    - by Geertjan
    I took my first tentative steps into the world of Ext JS charts today, in NetBeans IDE 7.4. Click to enlarge the image. I will make a screencast soon showing how charts such as the above can be created with NetBeans IDE and Ext JS. Setting up Ext JS is easy in NetBeans IDE because there's a JavaScript library browser, by means of which I can browse for the Ext JS libraries that I need and then NetBeans IDE sets up the project for me. The JavaScript code shown above comes directly from here: http://www.quizzpot.com/courses/learning-ext-js-3/articles/chart-series The index.html is as follows: <html> <head> <title>TODO supply a title</title> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width"> <link rel="stylesheet" href="js/libs/extjs/resources/css/ext-all.css"/> <script src="js/libs/ext-core/ext-core.js"></script> <script src="js/libs/extjs/adapter/ext/ext-base-debug.js"></script> <script src="js/libs/extjs/ext-all-debug.js"></script> <script src="app.js"></script> </head> <body> </body> </html> More info on Ext JS: http://docs.sencha.com/extjs/4.1.3/ By the way, quite a few other articles are out there on Ext JS and NetBeans IDE, such as these, which I will be learning from during the coming days: http://netbeans.dzone.com/extjs-rest-netbeans http://netbeans.dzone.com/articles/create-your-first-extjs-4 http://netbeans.dzone.com/articles/mixing-extjs-json-p-and-java

    Read the article

  • Rankings dropping after small URL-change WITH 301-redirect

    - by David
    Two weeks ago, we attempted to make the URLs of ca. 12 pages more search-engine friendly. We changed three things. 1. Make URLs more SEF, from: /????-????/brandname.html (meaning: /aircon-price/daikin.html) to: /????-brandnameinenglish-brandnameinthai.html. We set up 301-redirects from the old to the new URLs. You can find an example and the link to our page here: http://bit.ly/XRoTOK There are no direct external links to the old URLs. 2. Added text to img-links from homepage to brand-pages. Before those changes, we only linked to those brands with a picture, so we added some text under the picture. You can see that here, in the left submenu: http://bit.ly/XRpfoF 3. Minor changes to Title, h1-Tags, Meta Description, etc. Only minor changes, to better match the on-site optimization with targeted keywords. For example, before we used full brand names, after we used what was really searched for: from: Mitsubishi Electric Mr. Slim to: ???? Mitsubishi (means: Aircon Mitsubishi) Three days after these changes, we noticed a heavy drop (80% loss in non-paid search traffic) in rankings and traffic for those pages, and also for all pages which are sub-categorized. Rankings for all keywords not affected by the changes stayed the same. Any ideas what happened, and how we can regain our old rankings? What we have already done is submit a new sitemap. Help much appreciated. Best regards, David
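
    One quick sanity check worth running (a sketch with placeholder URLs, not a diagnosis): confirm that each old URL answers with a single 301 hop straight to its new URL, since chains of redirects or accidental 302s are a common reason rankings take longer to transfer.

        import requests

        old_urls = ["http://www.example.com/aircon-price/daikin.html"]  # placeholders

        for old in old_urls:
            resp = requests.get(old, allow_redirects=False)
            print(old, resp.status_code, resp.headers.get("Location"))
            # expect: 301 and the new, search-friendly URL in a single hop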

    Read the article

  • Designing Content-Based ETL Process with .NET and SFDC

    - by Patrick
    As my firm makes the transition to using SFDC as our main operational system, we've spun together a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've had the need for pseudo-ETL applications to be implemented that are able to extract metadata from the documents our analysts generate internally (most are industry-standard PDFs, XML, or MS Office formats) and place them in networked "queue" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata. I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC Org data). I've been able to create [console] applications in C# that work pretty well, and they're usually structured something like this: static void Main() { // Load parameters from app.config. // Get documents from queue. var files = someInterface.GetFiles(someFilterOrRegexPattern); foreach (var file in files) { // Extract metadata from the file. // Validate some attributes of the file; add any validation errors to an in-memory // structure (e.g. List<ValidationErrors>). if (isValid) { // Upload using some wrapper for an ORM an someInterface.Upload(meta.Param1, meta.Param2, ...); } else { // Bounce the file } } // Report any validation errors (via message bus or SMTP or some such). } And that's pretty much it. Most of the time I wrap all these operations in a "Worker" class that takes the needed interfaces as constructor parameters. This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# Console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!

    Read the article

  • the OpenJDK group at Oracle is growing

    - by john.rose
    The OpenJDK software development team at Oracle is hiring. To get an idea of what we’re looking for, go to the Oracle recruitment portal and enter the Keywords “Java Platform Group” and the Location Keywords “Santa Clara”. (We are a global engineering group based in Santa Clara.) It’s pretty obvious what we are working on; just dive into a public OpenJDK repository or OpenJDK mailing list. Here is a typical job description from the current crop of requisitions: The Java Platform group is looking for an experienced, passionate and highly-motivated Software Engineer to join our world class development effort. Our team is responsible for delivering the Java Virtual Machine that is used by millions of developers. We are looking for a development engineer with a strong technical background and thorough understanding of the Java Virtual Machine, Java execution runtime, classloading, garbage collection, JIT compiler, serviceability and a desire to drive innovations. As a member of the software engineering division, you will take an active role in the definition and evolution of standard practices and procedures. You will be responsible for defining and developing software for tasks associated with the developing, designing and debugging of software applications or operating systems. Work is non-routine and very complex, involving the application of advanced technical/business skills in area of specialization. Leading contributor individually and as a team member, providing direction and mentoring to others. BS or MS degree or equivalent experience relevant to functional area. 7 years of software engineering or related experience.

    Read the article

  • Removing 301 redirect from site root

    - by Jon Clements
    I'm having a look at a friend's website (a fairly old PHP-based one) which they've been advised needs re-structuring. The key points being: URLs should be lower case and more "friendly". The root of the domain should not be redirected. The first point I'm happy with (and the URLs needed tidying up anyway) and I have a draft plan of action; however, the second is baffling me as to not only the best way to do it, but also whether it should be done. Currently http://www.example.com/ is redirected to http://www.example.com/some-link-with-keywords/ using the following index.php in the root of the Apache2 instance. <?php $nextpage = "some-link-with-keywords/"; header( "HTTP/1.1 301 Moved Permanently" ); header( "Status: 301 Moved Permanently" ); header("Location: $nextpage"); exit(0); // This is Optional but suggested, to avoid any accidental output ?> As far as I'm aware, this has been the case for around three years -- and I'm sorely tempted to advise not to worry about it. It would appear taking off the 301 could: Potentially affect page ranking (as the 'homepage' would disappear - although it couldn't disappear because of the next point...) Introduce maintenance issues as existing users would still have the redirected page in their cache Following the above, introduce duplicate content Confuse Google/other SEs as to what the homepage actually is now I may be over-analysing this but I have a feeling it's not as simple as removing the 301 from the root, and 301'ing the previous target to the root... Any suggestions (including that it's not worth it) are sincerely appreciated.

    Read the article

  • 250k 404 & 410 errors in Webmaster Tools. Bad backlinks?

    - by Natália
    Our Webmaster Tools account is showing 250,000 errors related to weird links from other sites. These URLs are coming mostly from non-existent sites or are being generated directly by our website. Here are some examples of these URLs: oursite.com/&q=videos+caseros+sexo+pornos+gratis&sa=X&ei=R638T8eTO8WphAfF2vG8Bg&ved=0CCAQFjAC%2F%2Fpage%2F2%2Fpage%2F3%2Fpage%2F4%2Fpage%2F3%2Fpage%2F4%2Fpage%2F3%2Fpage%2F4%2Fpage%2F5%2Fpage%2F4/page/3 Our site is a popular Spanish adult site, yet we don't have keywords which are being mentioned in this URL. Apparently this link comes from our site. Some more examples: oursite.com/&q=losmejoresvideosporno&sa=X&ei=U__8T-BnqK7RBdjmhYsH&ved=0CBUQFjAA%2F%2Fpage%2F2%2Fpage%2F3%2Fpage%2F2%2Fpage%2F3%2Fpage%2F2%2Fpage%2F3%2Fpage%2F4%2Fpage%2F3%2Fpage%2F2%2Fpage%2F3/page/4 Once again: not our queries, not our URLs. oursite/tag/tetonas We think that it might be another site which has a policy of extremely bad SEO based on other sites' branding and keyword usage: thirdsite/buscador/tetonas-oursite The question is: if other sites are generating these URLs, how can we prevent this? Why is the tag being generated if no link was added to the other site? What should we do with these errors? 301? 410 Gone? I have read all the similar Q&As here but none of them seems to solve our problem. It is not likely to be a bad ad (I inspected them all). Maybe some old content which Google decided to recrawl suddenly? Maybe third parties' bad SEO policy? Maybe all of them? Any help will be highly appreciated.
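
    If the decision ends up being 410, one way to answer these junk URLs explicitly is a tiny filter in front of the application; the sketch below is framework-agnostic WSGI with illustrative patterns copied from the examples above, and it only shows the mechanics, not whether 410 is the right choice.

        import re

        # patterns seen in the example URLs; adjust to taste
        JUNK = re.compile(r"&q=|//page/|%2F%2Fpage%2F")

        class GoneMiddleware(object):
            def __init__(self, app):
                self.app = app

            def __call__(self, environ, start_response):
                path_qs = environ.get("PATH_INFO", "") + "?" + environ.get("QUERY_STRING", "")
                if JUNK.search(path_qs):
                    start_response("410 Gone", [("Content-Type", "text/plain")])
                    return [b"Gone"]
                return self.app(environ, start_response)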

    Read the article

  • What are the most commonly used and basic Apache htaccess redirects?

    - by bybe
    This question is here so we can offer help to users who are looking for information on how to make one or more common or basic redirects in Apache using the htaccess file. All future questions pertaining to finding information that is covered by this question should be closed as a duplicate of it, as per this Meta question. What's the point of this question? The idea, while not perfect, is to catch the most commonly asked questions regarding redirects using htaccess on the Apache platform, either on some type of LAMP setup or a live server. The answers should generally be those that you could imagine are used by hundreds of thousands of sites world-wide and are asked here at Pro Webmasters over and over in various forms. A few examples of the type of answers we are looking for: How can I redirect non-www to www? How can I redirect a subdomain to the main domain? How can I redirect a subfolder of a domain to the root or to a subdomain? How can I redirect an old URL to a new URL? A few examples of the types of answers that we are not looking for: Answers that do not involve a redirect. Any answers relating to Nginx, IIS or any other non-Apache platform. Answers that involve custom and complex string or query removals. Resources for advanced to complex mod_rewrite rules: Everything you ever wanted to know about mod rewrite rules but were afraid to ask. Please note that this question is still under construction and may need some refining, either by myself or a moderator of Pro Webmasters; if you have any concerns or questions please use the meta page I made a few days back here.

    Read the article

  • Google affecting my SERP Rank?

    - by Asad Moeen
    The following are some of my website's details. Home-page: [thebluewaffles].[com] Keywords: Blue Waffles; the rest of the keywords are post/subject-specific. Site Description: Health Articles Blog. Site Age: 1.5 years. A short history: When I started my website, the few things in my mind when posting content were at least 500 words on each page and writing all the articles with to-the-point information. I didn't go really fast with it, which is why I only have about 15 articles in 1.5 years. The SEO strategy was fairly simple. I shared links through social marketing websites and some article sharing websites, after which I could see my website's rankings in the top 5 SERP results. I ranked well enough for about 8 months continuously, but didn't keep updating content, so there were roughly 3 rough months when no content was posted due to some personal work. The SERPs dropped to the 2nd page in April and almost started disappearing in May. I asked a lot of people about it and most came up with "no updates to the site" as the reason, so I have been updating my site again since then; November has almost started and I still see no sign of my website's rankings recovering. Another important point is that when I post a new article and do a title search in Google, I see it ranks well enough for the first 10 hours and then disappears. What could be wrong here?

    Read the article

  • Are bad backlinks causing thousands of 404 and 410 errors in webmaster tools?

    - by Natália
    Our Webmaster Tools account is showing 250,000 errors related to weird links from other sites. These URLs are coming mostly from non-existent sites or are being generated directly by our website. Here are some examples of these URLs: oursite.com/&q=videos+caseros+sexo+pornos+gratis&sa=X&ei=R638T8eTO8WphAfF2vG8Bg&ved=0CCAQFjAC%2F%2Fpage%2F2%2Fpage%2F3%2Fpage%2F4%2Fpage%2F3%2Fpage%2F4%2Fpage%2F3%2Fpage%2F4%2Fpage%2F5%2Fpage%2F4/page/3 Our site is a popular Spanish adult site, yet we don't have keywords which are being mentioned in this URL. Apparently this link comes from our site. Some more examples: oursite.com/&q=losmejoresvideosporno&sa=X&ei=U__8T-BnqK7RBdjmhYsH&ved=0CBUQFjAA%2F%2Fpage%2F2%2Fpage%2F3%2Fpage%2F2%2Fpage%2F3%2Fpage%2F2%2Fpage%2F3%2Fpage%2F4%2Fpage%2F3%2Fpage%2F2%2Fpage%2F3/page/4 Once again: not our queries, not our URLs. oursite/tag/tetonas We think that it might be another site which has a policy of extremely bad SEO based on other sites' branding and keyword usage: thirdsite/buscador/tetonas-oursite The question is: if other sites are generating these URLs, how can we prevent this? Why is the tag being generated if no link was added to the other site? What should we do with these errors? 301? 410 Gone? I have read all the similar Q&As here but none of them seems to solve our problem. It is not likely to be a bad ad (I inspected them all). Maybe some old content which Google decided to recrawl suddenly? Maybe third parties' bad SEO policy? Maybe all of them?

    Read the article

  • What is the maximum time for a user to return to Google for the visit to be flagged up as a bounce in GA?

    - by Anonymous
    I know that Google measures bounce rates by how fast a user returns to the results page after clicking through to a website. Roughly what is the maximum duration of the visit for the user to then return for it to be considered a bounce? I.e. <5 seconds, <30 seconds? I'm mainly interested as it appears a lot of users clicking through my PPC adverts (AdWords) are bouncing, despite my ads having a high quality score and the pages being entirely related to the advert copy and tied as best I can to what I think users may be searching for from the key phrases I've selected, so the high bounce rate (100% on some keywords) seems a bit strange. If a bounce isn't determined by time, but simply by whether or not a user returns to the SERP after visiting my site, after any amount of time, that would make more sense; but the average duration of visit for my keywords with a 100% bounce rate in GA is 00:00:00, which suggests users immediately returned to the SERPs, which again is odd. Is my GA data being skewed by https or anything like that? Scratching my head here.

    Read the article

  • How to overcome politics of the net (Google translate code refuses to work from a specific region) [closed]

    - by Jawad
    Possible Duplicate: How to overcome politics of the net (Google translate code refuses to work from a specific region) I have this Web Site. It uses the Google Translate API (Can't post the link, does not open from this region) with the following code. <meta name="google-translate-customization" content="9f841e7780177523-3214ceb76f765f38-gc38c6fe6f9d06436-c"></meta> <script type="text/javascript"> function googleTranslateElementInit() { new google.translate.TranslateElement({pageLanguage: 'en'}, 'google_translate_element'); } </script> <script type="text/javascript" src="http://translate.google.com/translate_a/element.js?cb=googleTranslateElementInit"></script> The problem is since this, it just stopped working. On the site you can see that I had to actually remove the above from here, here, and here while left it here, here, here and here. This is so because the the web site "refuses" to load at all with the pages that have the code (i.e., from this region.) If I use Firefox Stealthy Plugin and open the site in Firefox, It works like a charm without any problems. But with Google Chrome, Apple Safari and Opera Web browser, the site does not load/open at all because of the Google translate. (I know this because If I remove the Google Translate Code, the site works/loads fine) It was one thing to program for "cross browser compatability" and alltogether another to program for "cross region compatability". What can I do to make sure that the site works from anywhere? Do I completely remove the Google Translate code and just have to do without the additional functionality or Do I look for alternatives like this or according to this?

    Read the article

  • Your Most Popular Sites Screen in IE 10 - Icons not appearing

    - by MJWadmin
    We use the following code to add icons for the favicon, tablets, smartphones, Windows 8 tiles and the like: <link rel="apple-touch-icon" href="apple-touch-icon.png"> <link rel="shortcut icon" type="image/x-icon" href="/favicon.ico"/> <link rel="apple-touch-icon-precomposed" sizes="144x144" href="apple-touch-icon-144x144-precomposed.png"> <link rel="apple-touch-icon-precomposed" sizes="114x114" href="apple-touch-icon-114x114-precomposed.png"> <link rel="apple-touch-icon-precomposed" sizes="72x72" href="apple-touch-icon-72x72-precomposed.png"> <link rel="apple-touch-icon-precomposed" href="apple-touch-icon-precomposed.png"> <meta name="msapplication-TileImage" content="apple-touch-icon-144x144-precomposed.png"/> <meta name="msapplication-TileColor" content="#17151a"/> Unfortunately this doesn't seem to work for IE9 and IE10's 'your most popular sites' screen; Google searches have been unenlightening. Stack uses <link rel="apple-touch-icon" href="apple-touch-icon.png"> which seems to work for it, but not for us. Any clues to a solution appreciated.

    Read the article

  • Multilingual sites and Google search results, using sub-folders for language

    - by AWinter
    About three months ago we added an English version of our previously Japanese-only site under the subfolder /en/. We've tried to follow the sometimes incomplete best practices laid out by Google by adding alternate tags to all pages that are currently translated. The top page, for instance, has the following meta tags for language: <link rel="canonical" href="/"> <link rel="alternate" hreflang="ja" href="/"> <link rel="alternate" hreflang="en" href="/en/"> While the English main page under /en/ has <link rel="canonical" href="/en/"> <link rel="alternate" hreflang="ja" href="/"> <link rel="alternate" hreflang="en" href="/en/"> Alternate languages are set up in the sitemap (as per Google's recommendations). It seems, however, that Google absolutely refuses to show the English top page in results when the user is using English at google.com; if you search you'll, as of this post, get the Japanese description and a title that Google has apparently invented, instead of the title and description in the meta tags for the /en/ index page. Does anyone have any experience with subfolders actually working to affect search results? What are the best practices for ensuring that the correct language version of my website is displayed through Google and other search engines? And how long will it take before the new language version becomes prominent in search engine results?
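
    For what it's worth, one way to keep the tags consistent across every translated page is to generate the whole block from one place; the sketch below simply reproduces the pattern shown above for an arbitrary path (the URLs and helper name are made up for illustration).

        def hreflang_block(path_ja, path_en, current):
            """Emit the canonical/alternate tags for one page, in the pattern above."""
            lines = [
                '<link rel="canonical" href="%s">' % current,
                '<link rel="alternate" hreflang="ja" href="%s">' % path_ja,
                '<link rel="alternate" hreflang="en" href="%s">' % path_en,
            ]
            return "\n".join(lines)

        print(hreflang_block("/", "/en/", current="/en/"))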

    Read the article
