Search Results

Search found 8161 results on 327 pages for 'django queries'.

Page 121/327 | < Previous Page | 117 118 119 120 121 122 123 124 125 126 127 128  | Next Page >

  • How to localize an app on Google App Engine?

    - by Petri Pennanen
    What options are there for localizing an app on Google App Engine? How do you do it using Webapp, Django, web2py or [insert framework here]? 1. Readable URLs and entity key names Readable URLs are good for usability and search engine optimization (Stack Overflow is a good example of how to do it). On Google App Engine, key-based queries are recommended for performance reasons. It follows that it is good practice to use the entity key name in the URL, so that the entity can be fetched from the datastore as quickly as possible. Currently I use the function below to create key names: import re import unicodedata def urlify(unicode_string): """Translates latin1 unicode strings to url friendly ASCII. Converts accented latin1 characters to their non-accented ASCII counterparts, converts to lowercase, converts spaces to hyphens and removes all characters that are not alphanumeric ASCII. Arguments unicode_string: Unicode encoded string. Returns String consisting of alphanumeric (ASCII) characters and hyphens. """ str = unicodedata.normalize('NFKD', unicode_string).encode('ASCII', 'ignore') str = re.sub('[^\w\s-]', '', str).strip().lower() return re.sub('[-\s]+', '-', str) This works fine for English and Swedish; however, it will fail for non-western scripts and remove letters from some western ones (like Norwegian and Danish with their œ and ø). Can anyone suggest a method that works with more languages? 2. Translating templates Does Django internationalization and localization work on Google App Engine? Are there any extra steps that must be performed? Is it possible to use Django i18n and l10n for Django templates while using Webapp? The Jinja2 template language provides integration with Babel. How well does this work, in your experience? What options are available for your chosen template language? 3. Translated datastore content When serving content from (or storing it to) the datastore: Is there a better way than getting the accept_language parameter from the HTTP request and matching this with a language property that you have set with each entity?
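
    The ASCII-only approach above will always drop scripts that have no ASCII decomposition. One alternative (a rough sketch, not from the question and not tested on App Engine) is to keep Unicode word characters and only strip punctuation, much like what later versions of Django's slugify(allow_unicode=True) do; the function name and regexes below are my own assumptions:

    ```python
    # -*- coding: utf-8 -*-
    import re
    import unicodedata

    # Compiled with re.UNICODE so \w keeps letters outside ASCII as well.
    _punct_re = re.compile(r'[^\w\s-]', re.UNICODE)
    _space_re = re.compile(r'[-\s]+', re.UNICODE)

    def urlify_unicode(value):
        """Sketch of a slug helper that keeps non-ASCII word characters,
        so Norwegian/Danish letters (or non-Latin scripts) survive."""
        value = unicodedata.normalize('NFKC', value)
        value = _punct_re.sub(u'', value).strip().lower()
        return _space_re.sub(u'-', value)

    # urlify_unicode(u'Blåbærsyltetøy på brød') -> u'blåbærsyltetøy-på-brød'
    ```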

    Read the article

  • general learning methodology

    - by momo
    just wanted to hear on the different general learning paths people embark on when learning a new language/framework. the one i currently use, which is how i learned bash and am currently learning python, is: instant hacking tutorial (very short tutorial introducing the basic syntax, variable declaration, loops, data types, etc. and how they are generally used) in depth tutorial with good programming style and slightly topic-specific (e.g. Mark Pilgrim's Dive into Python), important topics for me personally are regex methods, file IO, and ways the different data types are utilized best (i wrote a very primitive bayesian spam filter using python's dictionaries to keep track of word occurrences) spaced-repetition of syntax or short recipes (i use anki, with questions like 'create dictionary with filename and filesize metadata, human-readable' or simpler ones like 'match 0 - 3 occurrences of the letter M in a string', or 'return/create an iterator from two sequences') the use of spaced-repetition has been invaluable, and i credit it with the ease that i can recall/create python algorithms. however, i've recently started looking into django, and i've found that spaced-repetition, at least in my case, doesn't work very well for learning a framework, it works best with short code recipes (either that or i should start looking into more basic django framework tutorials). the problem i'm encountering is that framework programming is not only algorithms, but actually learning the API, which can be quite complex since you have to learn all the methods, modules, the places where they are stored, and the sequence of which things have to be done. for ex. in django to start a project that deals with polls (from the django tutorial), one has to create the project, edit the settings.py file, create the polls app, edit the models.py file (which requires knowing the classes that are present in the module models), edit the urls.py file, etc. i found that my spaced-repetition method didn't work very well for this type of learning, so i wanted to ask you guys what method(s) you use for learning the different frameworks/APIs.

    Read the article

  • How can I use a command-line tool (i.e. sox) via subprocess.Popen with mod_wsgi?

    - by marue
    I have a custom django filefield that makes use of sox, a command-line audio tool. This works pretty well as long as I use the django development server. But as soon as I switch to the production server, using apache2 and mod_wsgi, mod_wsgi catches every output to stdout. This makes it impossible to use the command-line tool to evaluate the file, for example use it to check if the uploaded file really is an audio file like this: filetype=subprocess.Popen([sox,'--i','-t','%s'%self.path], shell=False,\ stdout=subprocess.PIPE, stderr=subprocess.PIPE) (filetype,error)=filetype.communicate() if error: raise EnvironmentError((1,'AudioFile error while determining audioformat: %s'%error)) Is there a workaround for this? edit: the error I get is "missing filename". I am using mod_wsgi 2.5, standard with ubuntu 8.04. edit2: What exactly happens when I call subprocess.Popen from within django in mod_wsgi? Shouldn't subprocess stdin/stdout be independent from django stdin/stdout? In that case mod_wsgi should not affect programs called via subprocess... I'm really confused right now, because the file I am trying to access is a temporary file, created via a filename variable that I pass to the file creation and the subprocess command. That file is being written to /tmp, where the rights are 777, so it can't be a rights issue. And the error message is not "file does not exist", but "missing filename", which suggests I am not passing a filename as a parameter to the command-line tool.
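
    For reference, a standalone sketch of the same sox call (the '--i -t' invocation and self.path come from the snippet above; the wrapper function, the absolute path and the return-code check are my own assumptions) that captures both streams and raises with sox's own stderr text, which helps distinguish "sox never ran" from "sox rejected the file":

    ```python
    import subprocess

    SOX_BIN = '/usr/bin/sox'  # assumed location; adjust for the server

    def probe_audio_type(path):
        """Ask sox for the detected file type of `path`."""
        proc = subprocess.Popen(
            [SOX_BIN, '--i', '-t', path],
            shell=False,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
        out, err = proc.communicate()
        if proc.returncode != 0:
            # Surface sox's real complaint (e.g. "missing filename") in the log.
            raise EnvironmentError(1, 'sox failed for %r: %s' % (path, err.strip()))
        return out.strip()
    ```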

    Read the article

  • Python Django sites on Apache+mod_wsgi with nginx proxy: highly fluctuating performance

    - by Halfgaar
    I have an Ubuntu 10.04 box running several dozen Python Django sites using mod_wsgi (embedded mode; the faster mode, if properly configured). Performance highly fluctuates. Sometimes fast, sometimes several seconds delay. The smokeping graphs are all over the place. Recently, I also added an nginx proxy for the static content, in the hopes it would cure the highly fluctuating performance. But, even though it reduced the number of requests Apache has to process significantly, it didn't help with the main problem. When clicking around on websites while running htop, it can be seen that sometimes requests are almost instant, whereas sometimes it causes Apache to consume 100% CPU for a few seconds. I really don't understand where this fluctuation comes from. I have configured the mpm_worker for Apache like this: StartServers 1 MinSpareThreads 50 MaxSpareThreads 50 ThreadLimit 64 ThreadsPerChild 50 MaxClients 50 ServerLimit 1 MaxRequestsPerChild 0 MaxMemFree 2048 1 server with 50 threads, max 50 clients. Munin and apache2ctl -t both show a consistent presence of workers; they are not destroyed and created all the time. Yet, it behaves as such. This tells me that once a sub interpreter is created, it should remain in memory, yet it seems sites have to reload all the time. I also have a nginx+gunicorn box, which performs quite well. I would really like to know why Apache is so random. This is a virtual host config: <VirtualHost *:81> ServerAdmin [email protected] ServerName example.com DocumentRoot /srv/http/site/bla Alias /static/ /srv/http/site/static Alias /media/ /srv/http/site/media WSGIScriptAlias / /srv/http/site/passenger_wsgi.py <Directory /> AllowOverride None </Directory> <Directory /srv/http/site> Options -Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> Ubuntu 10.04 Apache 2.2.14 mod_wsgi 2.8 nginx 0.7.65 Edit: I've put some code in the settings.py file of a site that writes the date to a tmp file whenever it's loaded. I can now see that the site is not randomly reloaded all the time, so Apache must be keeping it in memory. So, that's good, except it doesn't bring me closer to an answer... Edit: I just found an error that might also be related to this: File "/usr/lib/python2.6/subprocess.py", line 633, in __init__ errread, errwrite) File "/usr/lib/python2.6/subprocess.py", line 1049, in _execute_child self.pid = os.fork() OSError: [Errno 12] Cannot allocate memory The server has 600 of 2000 MB free, which should be plenty. Is there a limit that is set on Apache or WSGI somewhere?
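
    For what it's worth, the load-marker mentioned in the first edit can be as small as this (a sketch; the log path and format are arbitrary choices of mine). Each line in the file marks one (re)creation of the WSGI sub interpreter for that site, so the timestamps show whether sites really are being reloaded:

    ```python
    # settings.py (excerpt): append a line every time this module is imported,
    # i.e. every time the interpreter serving this site is (re)created.
    import datetime

    try:
        with open('/tmp/wsgi-load-marker.log', 'a') as marker:
            marker.write('settings loaded at %s\n' % datetime.datetime.now())
    except IOError:
        pass  # never let the diagnostic itself break the site
    ```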

    Read the article

  • One to many too much data returned - MySQL

    - by Evan McPeters
    I have 2 related MySQL tables in a one-to-many relationship. Customers: cust_id, cust_name, cust_notes Orders: order_id, cust_id, order_comments So, if I do a standard join to get all customers and their orders via PHP, I return something like: Jack Black, jack's notes, comments about jack's 1st order Jack Black, jack's notes, comments about jack's 2nd order Simon Smith, simon's notes, comments about simon's 1st order Simon Smith, simon's notes, comments about simon's 2nd order The problem is that cust_notes is a text field and can be quite large (a couple of thousand words). So, it seems like returning that field for every order is inefficient. I could use GROUP_CONCAT and JOINs to return all order_comments on a single row BUT order_comments is a large text field too, so it seems like that could create a problem. Should I just use two separate queries, one for the customers table and one for the orders table? Is there a better way?

    Read the article

  • SQL Server - Query Short-Circuiting?

    - by Sam Schutte
    Do T-SQL queries in SQL Server support short-circuiting? For instance, I have a situation where I have two databases and I'm comparing data between the two tables to match and copy some info across. In one table, the "ID" field will always have leading zeros (such as "000000001234"), and in the other table, the ID field may or may not have leading zeros (might be "000000001234" or "1234"). So my query to match the two is something like: select * from table1 where table1.ID LIKE '%1234' To speed things up, I'm thinking of adding an OR before the like that just says: table1.ID = table2.ID to handle the case where both IDs have the padded zeros and are equal. Will doing so speed up the query by matching items on the "=" and not evaluating the LIKE for every single row (will it short-circuit and skip the LIKE)?

    Read the article

  • Need help with SQL query on SQL Server 2005

    - by Avinash
    We're seeing strange behavior when running two versions of a query on SQL Server 2005: version A: SELECT otherattributes.* FROM listcontacts JOIN otherattributes ON listcontacts.contactId = otherattributes.contactId WHERE listcontacts.listid = 1234 ORDER BY name ASC version B: DECLARE @Id AS INT; SET @Id = 1234; SELECT otherattributes.* FROM listcontacts JOIN otherattributes ON listcontacts.contactId = otherattributes.contactId WHERE listcontacts.listid = @Id ORDER BY name ASC Both queries return 1000 rows; version A takes on average 15s; version B on average takes 4s. Could anyone help us understand the difference in execution times of these two versions of SQL? If we invoke this query via named parameters using NHibernate, we see the following query via SQL Server profiler: EXEC sp_executesql N'SELECT otherattributes.* FROM listcontacts JOIN otherattributes ON listcontacts.contactId = otherattributes.contactId WHERE listcontacts.listid = @id ORDER BY name ASC', N'@id INT', @id=1234; ...and this tends to perform as badly as version A.

    Read the article

  • Microsoft Access vs Native SQL

    - by ktm5124
    Hypothetical: Let's say you are writing complex queries to a database and it is very important that the data you extracted is the correct result set (e.g., that you didn't mess up a JOIN by not using all the correct keys, and all the other things that can go wrong, et cetera). What would you rather use to do this? Would you write the query using Microsoft Access and its Design View, or would you write it in native SQL using a SQL IDE? What is the better professional choice? Thanks in advance for your feedback!

    Read the article

  • SQLite issues, escaping certain characters...

    - by CODe
    I'm working on my first database application. It is a WinForms application written in C# using a SQLite database. I've come across a problem: when an apostrophe is used, my SQLite query fails. Here is the structure of my queries: string SQL = "UPDATE SUBCONTRACTOR SET JobSite = NULL WHERE JobSite = '" + jobSite + "'"; For instance, if an apostrophe is used in the jobSite var, it throws off the other apostrophes in the command, and the query fails. So my questions are: 1. How do I escape characters like the apostrophe and semicolon in the above query example? 2. What characters do I need to escape? I know I should escape the apostrophe; what else is dangerous? Thanks for your help!
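
    The usual way to sidestep escaping entirely is to let the database driver bind the value as a parameter instead of splicing it into the SQL string. The question's code is C#, but the idea is the same in any language; here is a minimal Python sqlite3 sketch of it (table and column names taken from the query above, everything else assumed):

    ```python
    import sqlite3

    conn = sqlite3.connect('example.db')   # assumed database file
    job_site = "Bob's Warehouse"           # value containing an apostrophe

    # The ? placeholder makes the driver quote the value itself, so
    # apostrophes (or semicolons) in job_site cannot break the statement.
    conn.execute(
        "UPDATE SUBCONTRACTOR SET JobSite = NULL WHERE JobSite = ?",
        (job_site,),
    )
    conn.commit()
    ```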

    Read the article

  • How to import classes into other classes within the same file in Python

    - by Chris
    I have the file below and it is part of a django project called projectmanager; this file is projectmanager/projects/models.py. Whenever I use the python interpreter to import a Project just to test the functionality, I get a NameError for line 8 saying that FileRepo cannot be found. How can I import these classes correctly? Ideally what I am looking for is each Project to contain multiple FileRepos which each contain an unknown number of files. Thanks for any assistance in advance. #imports from django.db import models from django.contrib import admin #Project is responsible for ensuring that each project contains all of the folders and file storage #mechanisms a project needs, as well as a unique CCL# class Project(models.Model): ccl = models.CharField(max_length=30) Techpacks = FileRepo() COAS = FileRepo() Shippingdocs = FileRepo() POchemspecs = FileRepo() Internalpos = FileRepo() Finalreports = FileRepo() Batchrecords = FileRepo() RFPS = FileRepo() Businessdev = FileRepo() QA = FileRepo() Updates = FileRepo() def __unicode__(self): return self.ccl #ProjectFile is the file object used by each FileRepo component class ProjectFile(models.Model): file = models.FileField(uploadto='ProjectFiles') def __unicode__(self): return self.file #FileRepo is the model for the "folders" to be used in a Project class FileRepo(models.Model): typeOf = models.CharField(max_length=30) files = models.ManyToManyField(ProjectFile) def __unicode__(self): return self.typeOf
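
    As an aside to the import question: assigning FileRepo() instances as class attributes will not create database relationships even once the name resolves. A hedged sketch of how the same structure is usually expressed with relations (the field names, related_name and upload_to value are my own choices, not from the project):

    ```python
    from django.db import models

    class Project(models.Model):
        ccl = models.CharField(max_length=30)

        def __unicode__(self):
            return self.ccl

    class ProjectFile(models.Model):
        file = models.FileField(upload_to='ProjectFiles')

        def __unicode__(self):
            return self.file.name

    class FileRepo(models.Model):
        # Each repo ("folder") belongs to one project; a project can have
        # many, one per type ('Techpacks', 'COAS', ...).
        project = models.ForeignKey(Project, related_name='repos')
        type_of = models.CharField(max_length=30)
        files = models.ManyToManyField(ProjectFile)

        def __unicode__(self):
            return self.type_of
    ```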

    Read the article

  • FCGI htaccess handler

    - by sharvey
    I'm trying to set up django on a shared hosting provider. I followed the instructions on http://helpdesk.bluehost.com/index.php/kb/article/000531 and almost have it working. The problem I'm facing now is that the traffic is properly routed through the fcgi file, but the file itself shows up as plain text in the browser. If I run ./mysite.fcgi in the ssh shell, I do get the default django welcome page. My .htaccess is: AddHandler fastcgi-script .fcgi RewriteEngine On RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^(.*)$ mysite.fcgi/$1 [QSA,L] and mysite.fcgi: #!/usr/bin/python2.6 import sys, os os.environ['DJANGO_SETTINGS_MODULE'] = "icm.settings" from django.core.servers.fastcgi import runfastcgi runfastcgi(method="threaded", daemonize="false") Thanks.

    Read the article

  • Is it bad practice to extend the MongoEngine User document?

    - by Soviut
    I'm integrating MongoDB using MongoEngine. It provides auth and session support that a standard pymongo setup would lack. In regular django auth, it's considered bad practice to extend the User model since there's no guarantee it will be used correctly everywhere. Is this the case with mongoengine.django.auth? If it is considered bad practice, what is the best way to attach a separate user profile? Django has mechanisms for specifying an AUTH_PROFILE_MODULE. Is this supported in MongoEngine as well, or should I be manually doing the lookup?
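
    One common pattern, rather than subclassing the User document, is a separate profile document that just references it. A rough sketch (the mongoengine.django.auth import path matches MongoEngine of that era, but treat the field names and the helper as assumptions):

    ```python
    from mongoengine import Document, ReferenceField, StringField, URLField
    from mongoengine.django.auth import User

    class UserProfile(Document):
        # One profile per auth user; looked up manually, since an
        # AUTH_PROFILE_MODULE-style mechanism is not assumed here.
        user = ReferenceField(User, unique=True)
        display_name = StringField(max_length=100)
        website = URLField(required=False)

    def get_profile(user):
        """Manual stand-in for Django's user.get_profile()."""
        return UserProfile.objects(user=user).first()
    ```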

    Read the article

  • Decorator for determining HTTP response from a view

    - by polera
    I want to create a decorator that will allow me to return a raw or "string" representation of a view if a GET parameter "raw" equals "1". The concept works, but I'm stuck on how to pass context to my renderer. Here's what I have so far: from django.shortcuts import render_to_response from django.http import HttpResponse from django.template.loader import render_to_string def raw_response(template): def wrap(view): def response(request,*args,**kwargs): if request.method == "GET": try: if request.GET['raw'] == "1": render = HttpResponse(render_to_string(template,{}),content_type="text/plain") return render except Exception: render = render_to_response(template,{}) return render return response return wrap Currently, the {} is there just as a place holder. Ultimately, I'd like to be able to pass a dict like this: @raw_response('my_template_name.html') def view_name(request): render({"x":42}) Any assistance is appreciated.
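
    One way to get the context through (a sketch building on the code above, not an established Django API): let the wrapped view return its context dict and have the decorator decide how to render it. The convention that the view returns a dict rather than an HttpResponse is my own assumption:

    ```python
    from django.shortcuts import render_to_response
    from django.http import HttpResponse
    from django.template.loader import render_to_string

    def raw_response(template):
        def wrap(view):
            def response(request, *args, **kwargs):
                context = view(request, *args, **kwargs)  # view returns a dict
                if request.method == "GET" and request.GET.get('raw') == "1":
                    return HttpResponse(render_to_string(template, context),
                                        content_type="text/plain")
                return render_to_response(template, context)
            return response
        return wrap

    @raw_response('my_template_name.html')
    def view_name(request):
        return {"x": 42}
    ```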

    Read the article

  • Encoding gives "'ascii' codec can't encode character … ordinal not in range(128)"

    - by user140314
    I am working through the Django RSS reader project here. The RSS feed will read something like "OKLAHOMA CITY (AP) — James Harden let". The RSS feed's encoding reads encoding="UTF-8" so I believe I am passing utf-8 to markdown in the code snippet below. The em dash is where it chokes. I get the Django error of "'ascii' codec can't encode character u'\u2014' in position 109: ordinal not in range(128)" which is an UnicodeEncodeError. In the variables being passed I see "OKLAHOMA CITY (AP) \u2014 James Harden". The code line that is not working is: content = content.encode(parsed_feed.encoding, "xmlcharrefreplace") I am using markdown 2.0, django 1.1, and python 2.4. What is the magic sequence of encoding and decoding that I need to do to make this work? Thanks.
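
    The error usually means encode() is being called on something that is already a byte string, so Python 2 first decodes it with the default ascii codec and trips on the em dash. A rough sketch of the usual discipline (assuming the goal is a UTF-8 byte string for markdown; the helper name is mine):

    ```python
    def to_utf8(text, source_encoding='utf-8'):
        """Return a UTF-8 byte string, decoding first if handed bytes.
        Calling .encode() directly on a Python 2 byte string triggers the
        implicit ascii decode that raises UnicodeEncodeError."""
        if isinstance(text, str):                      # byte string
            text = text.decode(source_encoding, 'replace')
        return text.encode('utf-8', 'xmlcharrefreplace')

    # to_utf8(u'OKLAHOMA CITY (AP) \u2014 James Harden let') works, and so
    # does the same text arriving as UTF-8 encoded bytes.
    ```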

    Read the article

  • How can I tell Phusion Passenger which python to use?

    - by Mike
    I'm using Phusion Passenger with a ruby app and I'd also like to set it up to work with a Django App Engine app I'm working on. Googling for "passenger_wsgi.py" I was able to get the following very simple non-django app working on passenger: passenger_wsgi.py: def application(environ, start_response): response_headers = [('Content-type','text/plain')] start_response('200 OK', response_headers) return ['Hello World!\n'] However, if I add the line import django.core.handlers.wsgi into the mix, I get 'An error occurred importing your passenger_wsgi.py'. By printing out the sys.path I've discovered that at least part of the reason is that Passenger is using the wrong python installation on my machine. How can I configure Passenger (on apache) to use /opt/local/bin/python2.5 instead of the system default python?
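
    One widely used workaround (a sketch; the interpreter path comes from the question, the settings module name and the rest are assumptions) is to have passenger_wsgi.py re-execute itself under the desired interpreter before importing Django:

    ```python
    import sys
    import os

    INTERP = "/opt/local/bin/python2.5"
    # If Passenger launched us with some other python, restart this
    # script under the interpreter we actually want.
    if sys.executable != INTERP:
        os.execl(INTERP, INTERP, *sys.argv)

    os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'  # assumed project name

    import django.core.handlers.wsgi
    application = django.core.handlers.wsgi.WSGIHandler()
    ```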

    Read the article

  • How to pass json via a form element

    - by becomingGuru
    I have this swf (flash) file that provides the json that needs to be sent to the server. I wrote a very simple jQuery: function submitForm(swf_json) { $('#swfjson').val(swf_json); #swfjson is an input of type hidden $('#titleForm').submit(); } and the swf will call the submitForm above and I receive the request.POST in django as usual. But, django is interpreting the swf_json as a string "Object object" >>>type(request.POST['swfjson']) <type 'unicode'> Of course I can pass the json as a string to the view function. Doesn't seem good to me. Any other way of passing the json object to the django view?
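
    If the SWF (or the jQuery shim) serializes the object first, for example with JSON.stringify(swf_json) before putting it in the hidden field, the Django side only has to parse the string back. A minimal sketch of that view (django.utils.simplejson was the usual choice in the Django 1.x era; the field name comes from the question, the view name is mine):

    ```python
    from django.http import HttpResponse
    from django.utils import simplejson

    def receive_swf_json(request):
        raw = request.POST.get('swfjson', '')
        try:
            data = simplejson.loads(raw)   # a dict/list, not "[object Object]"
        except ValueError:
            return HttpResponse('bad json', status=400)
        # ... use data here ...
        return HttpResponse('ok')
    ```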

    Read the article

  • How to make Dajax callback into scoped object

    - by BozoJoe
    I can't seem to find a way to make django-dajaxice have its callback inside the same scoped object that made the initial call. MyViewport = Ext.extend(MyViewportUi, { initComponent: function() { MyViewport.superclass.initComponent.call(this); }, LoadRecordsCallback: function(data){ if(data!='DAJAXICE_EXCEPTION') { alert(data); } else { alert('DAJAXICE_EXCEPTION'); } }, LoadRecords: function(){ Dajaxice.Console.GetUserRecords(this.LoadRecordsCallback); } }); var blah = new MyViewport(); blah.LoadRecords(); I'm on django, and I like the calling syntax of django-dajaxice. I'm using Extjs 3.2 and tried passing an Ext.createCallback but Dajax's returning eval seems to only want a string for the callback.

    Read the article

  • Error 1005 when adding a foreign key constraint on mysql table

    - by luc
    Hello, I have a problem when upgrading a django and mysql app with south. I've tried to make an SQL-based upgrade with the code generated by the django sqlall command and I have a similar problem. Here is the sql code: CREATE TABLE `programmations_basissupport` ( `id` integer AUTO_INCREMENT NOT NULL PRIMARY KEY, `value` numeric(6, 0) NOT NULL ) ALTER TABLE `programmations_concert` ADD `basis_support_id` integer AFTER program_status_id; ALTER TABLE `programmations_concert` ADD CONSTRAINT `basis_support_id_refs_id_1e4ed8d7` FOREIGN KEY (`basis_support_id`) REFERENCES `programmations_basissupport` (`id`); An error is raised when adding the FK constraint: ERROR 1005 (HY000): Can't create table 'apidev_mnl.#sql-106e_632b00a' (errno: 150) Does anybody have an idea? Update: DEFAULT values were missing, but even if I add the default='' in the django model, the creation of foreign keys fails. Thanks for your help

    Read the article

  • Aptana Studio is opening but not ever closing a python.exe process

    - by SC Ghost
    I am developing a small testing website using Django 1.2 in Aptana Studio build 2.0.4.1268158907. I have a Django project that I test by running the command "runserver 8001" on my project. This command runs the project on a small server that comes with Django. However the problem arises that every time I run this command Aptana opens two instances of the process "python.exe". Upon terminating the command only one of these instances is ended. The other process continues to run and use memory. My server is not online, and the process doesn't seem to do anything that I can find. This happens every time i run the runserver command on my project and therefore more and more python.exe instances will open up through my development period. Any help discovering either the purpose of this extra python.exe or a way to prevent it from opening would be much appreciated.

    Read the article

  • My server is slower than the average user's computer, should I still offload Access queries to SQL Server? [closed]

    - by andrewb
    Possible Duplicate: How do you do Load Testing and Capacity Planning for Databases I have a database set up with MS Access 2007 front ends and an SQL Server 2005 back end. At the moment, all the queries are saved in the front end as I've only recently moved to an SQL Server backend. I'm wondering how much of those queries I should save as stored procedures/views on SQL Server. About the system The number of concurrent users is only a handful, though it could be as high as 25 at one time (very unlikely). The average computer has an Intel i3-2120 CPU running at 3.3 GHz, which gets a PassMark score of 3,987, whilst the server has an Intel Xeon E5335 running at 2.0 GHz, which gets a PassMark score of 2,637. Always an awkward situation when an i3 outperforms a Xeon... though the i3 is from Q1 2011 and the Xeon is Q2 2009. There is potential for a server upgrade in the future, though it wouldn't come easy. I'm inclined to move the queries to the back end, as they are beginning to take noticeable time and I figure that is a better way of doing things. I like the idea of throwing everything at the server, then pushing for a server upgrade. It makes more sense in my mind to be upgrading one server rather than 30 PCs. Or am I being overzealous? Why my question isn't a duplicate It seems that my question has been misinterpreted and labelled a duplicate of quite a different question, one about testing and capacity planning. I'll try explain how my question is very different from the linked question. The crux of my question is something like "Even though my server is technically slower, is it better to have it doing more of the queries?" There's two ways that people could have answered this: I agree the server is going to be slower, but the extra benefits of such and such (like the less Access the better) means you should move most to the server anyway. (OR no it doesn't outweigh the benefit, keep them in Access) Actually the server will be faster because of such and such. I'm hoping that people out there could provide some answers like this, and the question in the dupe link doesn't really provide either of these answers. Ok sure, I suppose I could do extensive performance testing to compare Access queries running on a local machine to SQL Server queries running on the server, but that sounds like a very hard task (particularly performance testing of access) compared to someone giving some quick general guidance, and again, my question is looking for a lot more than immediate performance benefit.

    Read the article

  • NT4 server generates too many weird DNS queries. How can I see the source PID?

    - by Hanan N.
    I have an NT4 server that in the last two weeks started to generate too many weird DNS queries to the DNS server it is set to use. I have got warnings from the IPS system that it has blocked the responses from the DNS server back to the NT4 server. The queries it generates don't relate to any computer in the network; they look like 120624100088.xxxxxxx.net, where xxx is the internal network and the numbers are just random in each query. I have done some research on how to get the PID that is generating the queries, and I found that only Process Monitor could give me that information, but since it is an NT4 system Process Monitor doesn't work on it. It is a production server and I just can't stop services as I want. I would like your advice: how can I get the PID that is generating these queries? Thanks.

    Read the article

  • How to combine the index.php RewriteRule with query rewrites and avoid Server Error 404?

    - by Binyamin
    Both RewriteRules work fine, except when used together. 1. Remove all queries except query ?callback=.*: # /api?callback=foo has no rewrite # /whatever?whatever=foo has 301 redirect /whatever RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /([^?#\ ]*)\?[^\ ]*\ HTTP/ [NC] RewriteCond %{REQUEST_URI}?%{QUERY_STRING} !/api(/.*)?\?callback=.* RewriteRule .*$ %{REQUEST_URI}? [R=301,L] 2. Rewrite index.php queries api and url=$1: # /api returns data index.php?api&url= # /api/whatever returns data index.php?api&url=whatever RewriteRule ^api(?:/([^/]*))?$ index.php?api&url=$1 [QSA,L] RewriteRule ^([^.]*)$ index.php?url=$1 [QSA,L] Is there a valid way to combine these RewriteRules while keeping their functionality? This combination will return Server Error 404 to /api/?callback=foo: # Remove all queries except query "callback" RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /([^?#\ ]*)\?[^\ ]*\ HTTP/ [NC] RewriteCond %{REQUEST_URI}?%{QUERY_STRING} !/api(/.*)?\?callback=.* RewriteRule .*$ %{REQUEST_URI}? [R=301,L] # Rewrite index.php queries RewriteCond %{REQUEST_URI}?%{QUERY_STRING} !/api(/.*)?\?callback=.* # Server Error 404 on /api/?callback=foo and /api/whatever?callback=foo RewriteRule ^api(?:/([^/]*))?$ index.php?api&url=$1 [QSA,L] RewriteCond %{REQUEST_URI}?%{QUERY_STRING} !/api(/.*)?\?callback=.* RewriteRule ^([^.]*)$ index.php?url=$1 [QSA,L]

    Read the article

  • PostgreSQL: Full Text Search - How to search partial words?

    - by Anthoni Gardner
    Hello, Following a question posted here about how I can increase the speed on one of my SQL Search methods, I was advised to update my table to make use of Full Text Search. This is what I have now done, using GiST indexes to make searching faster. On some of the "plain" queries I have noticed a marked increase in speed, which I am very happy about. However, I am having difficulty in searching for partial words. For example I have several records that contain the word Squire (454) and I have several records that contain Squirrel (173). Now if I search for Squire it only returns the 454 records but I also want it to return the Squirrel records as well. My query looks like this: SELECT title FROM movies WHERE vectors @@ to_tsquery('squire'); I thought I could do to_tsquery('squire%') but that does not work. How do I get it to search for partial matches? Also, in my database I have records that are movies and others that are just TV Shows. These are differentiated by the quotation marks around the name, so "Munsters" is a TV Show, whereas The Munsters is the film of the show. What I want to be able to do is search for just the TV Show AND just the movies. Any idea on how I can achieve this? Regards Anthoni

    Read the article

  • Semantic Grid System, Media Query issue

    - by Andy
    I'm using the Semantic Grid System to build a responsive site. However, something isn't quite right with the media queries that should obviously kick in once it hits a particular screen size. I'll reference what I mean with their example on the website: if I view this on my iPhone for example, given that it is supposed to adjust to a single-column structure on a mobile device, it still throws out the web version of the page. That is true for both Safari and Chrome on my iPhone. However, if I use the RWD bookmarklet to check its appearance at different resolutions it appears as expected for the mobile resolution. Also, ironically, if I resize the page in Safari on my desktop it also adjusts accordingly once I get down to the appropriate screen size, but not in Firefox. The media query that it uses once it hits 720px is @media screen and (max-width: 720px) { #maincolumn, #sidebar { .column(12); margin-bottom: 1em; } } and I might be wide of the mark here but I think that must be the issue. But given that this is directly from the semantic.gs website I'm not inclined to question their own code. Any idea what the problem might be?

    Read the article

< Previous Page | 117 118 119 120 121 122 123 124 125 126 127 128  | Next Page >