Search Results

Search found 22358 results on 895 pages for 'django raw query'.

Page 65/895 | < Previous Page | 61 62 63 64 65 66 67 68 69 70 71 72  | Next Page >

  • MySQL limit from descending order

    - by faya
    Hello. Is it possible to write a query that uses the same "LIMIT (from), (count)" syntax but returns the results starting from the end? For example, if I have 8 rows in the table and I want to fetch 5 rows at a time in two steps, I would normally do: first step query: SELECT * FROM table LIMIT 0, 5 - first step result: the first 5 rows; second step query: SELECT * FROM table LIMIT 5, 5 - second step result: the last 3 rows. But I want it the other way around: from the first step I want the last 3 rows, and from the second step I want the first 5 rows. Thank you for your answer. (A hedged sketch of one approach follows below.)
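    (A hedged illustration, not from the original question: assuming the table has an auto-increment id column to order by, the usual trick is to flip the sort direction, apply LIMIT, and re-sort in the application if the original order matters. Sketch in Python with MySQLdb; table name and connection details are placeholders.)

    import MySQLdb

    conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")
    cur = conn.cursor()

    # Step 1: the "last" 3 rows -- sort descending, then take them from the top.
    cur.execute("SELECT * FROM mytable ORDER BY id DESC LIMIT 0, 3")
    last_three = list(cur.fetchall())[::-1]   # reverse in Python to restore ascending order

    # Step 2: the first 5 rows, in their normal order.
    cur.execute("SELECT * FROM mytable ORDER BY id ASC LIMIT 0, 5")
    first_five = cur.fetchall()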

    Read the article

  • mysql inserts & updates optimized

    - by user271619
    This is an optimization question, mostly. I have many forms on my sites that do simple INSERTs and UPDATEs (nothing complicated). But several of the forms' input fields are not required and may be left empty (again, nothing complicated). However, my SQL query currently lists all columns in the statement. My question: is it best to tailor the INSERT/UPDATE queries and include only the columns that actually changed? We all hear that we shouldn't use "SELECT *" unless it's absolutely needed for displaying all columns. But what about INSERTs & UPDATEs? Hope this makes sense. I'm sure any amount of optimization is acceptable, but I never really hear about this, specifically, from anyone. (A sketch of the idea follows below.)
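    (A hedged sketch of what "only the changed columns" could look like in practice; the table, columns and MySQLdb connection are illustrative assumptions, not from the original question. Note that column names cannot be parameterized, so they must come from a trusted whitelist, never from user input.)

    import MySQLdb

    def update_row(conn, table, row_id, changed_fields):
        """changed_fields: dict of column -> new value, only for inputs the user filled in."""
        if not changed_fields:
            return  # nothing to update
        cols, vals = zip(*changed_fields.items())
        assignments = ", ".join("%s = %%s" % c for c in cols)   # e.g. "email = %s, phone = %s"
        sql = "UPDATE %s SET %s WHERE id = %%s" % (table, assignments)
        cur = conn.cursor()
        cur.execute(sql, list(vals) + [row_id])
        conn.commit()

    # usage (hypothetical): update_row(conn, "customers", 42, {"email": "a@b.com"})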

    Read the article

  • Should I have a separate method for Update(), Insert(), etc., or have a generic Query() that would be able to handle all of these?

    - by Prayos
    I'm currently trying to write a class library for a connection to a database. Looking over it, there are several different types of queries: Select From, Update, Insert, etc. My question is, what is the best practice for writing these queries in a C# application? Should I have a separate method for each of them(i.e. Update(), Insert()), or have a generic Query() that would be able to handle all of these? Thanks for any and all help!

    Read the article

  • Django rewrites URL as IP address in browser - why?

    - by Mitch
    I am using Django, nginx and Apache. When I access my site with a URL (e.g., http://www.foo.com/), what appears in my browser address bar is the IP address with admin appended (e.g., http://123.45.67.890/admin/). When I access the site by IP, it is redirected as expected by Django's urls.py (e.g., http://123.45.67.890/ - http://123.45.67.890/accounts/login/?next=/). I would like the named URL to behave the same way as the IP: if the URL goes to a new view, the host in the browser address bar should remain the same and not change to the IP address. Where should I be looking to fix this? My files:

    ; cpa.com (apache)
    NameVirtualHost *:8080
    <VirtualHost *:8080>
        AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css text/javascript application/javascript application/x-javascript
        BrowserMatch ^Mozilla/4 gzip-only-text/html
        BrowserMatch ^Mozilla/4\.0[678] no-gzip
        BrowserMatch \bMSIE !no-gzip !gzip-only-text/htm
        DocumentRoot /path/to/root
        ServerName www.foo.com
        <IfModule mod_rpaf.c>
            RPAFenable On
            RPAFsethostname On
            RPAFproxy_ips 127.0.0.1
        </IfModule>
        <Directory /public/static>
            AllowOverride None
            AddHandler mod_python .py
            PythonHandler mod_python.publisher
        </Directory>
        Alias / /dj
        <Location />
            SetHandler python-program
            PythonPath "['/usr/lib/python2.5/site-packages/django', '/usr/lib/python2.5/site-packages/django/forms'] + sys.path"
            PythonHandler django.core.handlers.modpython
            SetEnv DJANGO_SETTINGS_MODULE dj.settings
            PythonDebug On
        </Location>
    </VirtualHost>

    ; ports.conf (apache)
    Listen 127.0.0.1:8080

    ; cpa.conf (nginx)
    server {
        listen 80;
        server_name www.foo.com;
        location /static {
            root /var/public;
            index index.html;
        }
        location /cpa/js {
            root /var/public/js;
        }
        location /cpa/css {
            root /var/public/css;
        }
        location /djmedia {
            alias "/usr/lib/python2.5/site-packages/django/contrib/admin/media/";
        }
        location / {
            include /etc/nginx/proxy.conf;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://127.0.0.1:8080;
        }
    }

    ; proxy.conf (nginx)
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    client_max_body_size 10m;
    client_body_buffer_size 128k;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 500;
    proxy_buffers 32 4k;

    Read the article

  • File uploads and client_max_body_size in nginx + gunicorn + django

    - by carlosescri
    I need to configure nginx + gunicorn to accept uploads larger than the default maximum size in both servers. My nginx .conf file looks like this:

    server {
        # ...
        location / {
            proxy_pass_header Server;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Scheme $scheme;
            proxy_connect_timeout 60;
            proxy_pass http://localhost:8000/;
        }
    }

    The idea is to allow requests of 20M for two locations:

    /admin/path/to/upload?param=value
    /installer/other/path/to/upload?param=value

    I've tried to add location directives at the same level as the one I've pasted here (getting 404 errors) and also tried to add them inside the location / directive (getting 413 Entity Too Large errors). My location directives look like this in their simplest form:

    location /admin/path/to/upload/ {
        client_max_body_size 20M;
    }
    location /installer/other/path/to/upload/ {
        client_max_body_size 20M;
    }

    But they don't work (actually I tested lots of combinations and I'm getting desperate thinking about this). Please help if you can: what settings do I need to set to make this work? Thank you so much!

    Read the article

  • permission errors with python/django

    - by tipu
    The error can be seen here: http://djaffry.selfip.com:8080/ If I go to the folder /srv/twingle/search and do ls -l, I get:

    -rwxrwxrwx 1 root root 65142784 May 26 20:28 words.db

    I gave it 777 access (absolutely unsafe, I know, but I thought it would at least work). Any idea what the permissions problem could be? Edit: A very strange thing is that the code doesn't crash once every few refreshes, then goes back to crashing.

    Read the article

  • Can I make a "TCP packet modifier" using tun/tap and raw sockets?

    - by benhoyt
    I have a Linux application that talks TCP, and to help with analysis and statistics, I'd like to modify the data in some of the TCP packets that it sends out. I'd prefer to do this without hacking the Linux TCP stack. The idea I have so far is to make a bridge which acts as a "TCP packet modifier". My idea is to connect to the application via a tun/tap device on one side of the bridge, and to the network card via raw sockets on the other side of the bridge. My concern is that when you open a raw socket it still sends packets up to Linux's TCP stack, and so I couldn't modify them and send them on even if I wanted to. Is this correct? A pseudo-C-code sketch of the bridge looks like:

    tap_fd = open_tap_device("/dev/net/tun");
    raw_fd = open_raw_socket();
    for (;;) {
        select(fds = [tap_fd, raw_fd]);
        if (FD_ISSET(tap_fd, &fds)) {
            read_packet(tap_fd);
            modify_packet_if_needed();
            write_packet(raw_fd);
        }
        if (FD_ISSET(raw_fd, &fds)) {
            read_packet(raw_fd);
            modify_packet_if_needed();
            write_packet(tap_fd);
        }
    }

    Does this look possible, or are there other better ways of achieving the same thing? (TCP packet bridging and modification.)

    Read the article

  • SQL SERVER – Reducing CXPACKET Wait Stats for High Transactional Database

    - by pinaldave
    While engaging in a performance tuning consultation for a client, a situation occurred where they were facing a lot of CXPACKET wait stats. The client asked me if I could help them reduce this huge number of wait stats. I usually receive this kind of request from other clients as well, but the important thing to understand is whether this request has any merit or benefit, or not. Before we continue with the resolution, let us understand what CXPACKET wait stats are. The official definition suggests that CXPACKET waits occur when trying to synchronize the query processor exchange iterator; you may consider lowering the degree of parallelism if a conflict concerning this wait type develops into a problem (from BOL). In simpler words, when a parallel operation is created for a SQL query, there are multiple threads for a single query. Each thread deals with a different set of the data (or rows). For some reason, one or more of the threads lag behind, creating the CXPACKET wait stat. Threads which finished first have to wait for the slower threads to finish, and the wait registered by a completed thread is the CXPACKET wait. Note that the CXPACKET wait is recorded by the completed threads, not by the ones which are unfinished. "Note that not all CXPACKET wait types are bad. You might experience a case when it totally makes sense. There might also be cases when this is unavoidable. If you remove this particular wait type for any query, then that query may run slower because the parallel operations are disabled for the query." Now let us see what the best practices to reduce CXPACKET wait stats are. Most of the suggestions you will find when you search online will tell you to set 'maximum degree of parallelism' to 1. I do agree with these suggestions, too; however, I do not think this is the final resolution. As soon as you force every query to run on a single CPU, you will get very bad performance from the queries which actually perform well when using parallelism. The better suggestion is to set 'maximum degree of parallelism' to a lower number or 1 (be very careful with this - it can create more problems), but tune the queries which can benefit from multiple CPUs. You can use the query hint OPTION (MAXDOP 0) to allow a query to use parallelism. Here are two quick scripts which help resolve these issues.

    Change MAXDOP at the server level:

    EXEC sys.sp_configure N'max degree of parallelism', N'1'
    GO
    RECONFIGURE WITH OVERRIDE
    GO

    Run a query with all the CPUs (using parallelism):

    USE AdventureWorks
    GO
    SELECT *
    FROM Sales.SalesOrderDetail
    ORDER BY ProductID
    OPTION (MAXDOP 0)
    GO

    Below is the blog post which will help you to find all the parallel queries in your server: SQL SERVER – Find Queries using Parallelism from Cached Plan. Please note that running queries on a single CPU may worsen your performance, and it is not recommended at all. In fact, this can be very bad advice. I strongly suggest that you identify the queries which are offending and tune them instead of following any other suggestions. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology

    Read the article

  • SQL SERVER – Simple Demo of New Cardinality Estimation Features of SQL Server 2014

    - by Pinal Dave
    SQL Server 2014 has new cardinality estimation logic. The cardinality estimation logic is responsible for the quality of query plans and is largely responsible for the performance of any query. This logic had not been updated for quite a while, but in the latest version, SQL Server 2014, it has been re-designed. The new logic now incorporates various assumptions and algorithms for OLTP and warehousing workloads. "Cardinality estimates are a prediction of the number of rows in the query result. The query optimizer uses these estimates to choose a plan for executing the query. The quality of the query plan has a direct impact on improving query performance." ~ Source: MSDN. Let us see a quick example of how cardinality estimation improves performance for a query. I will be using the AdventureWorks database for my example. Before we start with this demonstration, remember that even though you have SQL Server 2014, to see the effect of the new cardinality estimates you will need your database compatibility mode set to 120, which is for SQL Server 2014. If your server instance is SQL Server 2014 but you have set your database compatibility mode to 110 or any earlier version, your queries will perform like they would on an older version of SQL Server. Now we will execute the following query in two different compatibility modes and compare the performance. (Note that my SQL Server instance is version 2014.)

    USE AdventureWorks2014
    GO
    -- NEW Cardinality Estimation
    ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 120
    GO
    EXEC [dbo].[uspGetManagerEmployees] 44
    GO
    -- Old Cardinality Estimation
    ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 110
    GO
    EXEC [dbo].[uspGetManagerEmployees] 44
    GO

    Result of STATISTICS IO

    Compatibility level 120:
    Table 'Person'. Scan count 0, logical reads 6, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Employee'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Compatibility level 110:
    Table 'Worktable'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Person'. Scan count 0, logical reads 137, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Employee'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    You will notice that in the case of compatibility level 110 there are 137 logical reads from the Person table, whereas in the case of compatibility level 120 there are only 6 logical reads from the Person table. This drastically improves the performance of the query. If we enable the execution plan, we can see the same thing as well. I hope you will find this quick example helpful. You can read more about this in my latest Pluralsight course.
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Converting raw data type to enumerated type

    - by Jim Lahman
    There are times when an enumerated type is preferred over using the raw data type. An example is when we need to check the health of x-ray gauges in use on a production line. Rather than using a raw scheme like 0, 1 and 2, we can use an enumerated type:

    /// <summary>
    /// POR Healthy status indicator
    /// </summary>
    /// <remarks>The healthy status is for each POR x-ray gauge; each has its own status.</remarks>
    [Flags]
    public enum POR_HEALTH : short
    {
        /// <summary>
        /// POR1 healthy status indicator
        /// </summary>
        POR1 = 0,
        /// <summary>
        /// POR2 healthy status indicator
        /// </summary>
        POR2 = 1,
        /// <summary>
        /// Both POR1 and POR2 healthy status indicator
        /// </summary>
        BOTH = 2
    }

    By using the [Flags] attribute, we are treating the enumerated type as a bit mask. We can then use bitwise operations such as AND, OR, NOT, etc. Now, when we want to check the health of a specific gauge, we would rather use the name of the gauge than the numeric identity; it makes for better reading and programming practice. To translate the numeric identity to the enumerated value, we use the Parse method of the Enum class:

    POR_HEALTH GaugeHealth = (POR_HEALTH) Enum.Parse(typeof(POR_HEALTH), XrayMsg.Gauge_ID.ToString());

    The Parse method creates an instance of the enumerated type. Now, we can use the name of the gauge rather than the numeric identity:

    if (GaugeHealth == POR_HEALTH.POR1 || GaugeHealth == POR_HEALTH.BOTH)
    {
        XrayHealthyTag.Name = Properties.Settings.Default.POR1XRayHealthyTag;
    }
    else if (GaugeHealth == POR_HEALTH.POR2)
    {
        XrayHealthyTag.Name = Properties.Settings.Default.POR2XRayHealthyTag;
    }

    Read the article

  • Is windows a "second class citizen" in the django community?

    - by Daniel Upton
    I'm currently doing R&D for a web application which we plan to host ourselves initially and then allow customers to self-host. My task has been evaluating web frameworks to see which would give us the biggest productivity initially and ease of maintenance, while also allowing us to easily support deployment to customer-controlled environments. Our team has experience with ASP.NET (MVC and WebForms) and Ruby on Rails. Our experience with Rails is that Windows deployment is a very taboo subject, and any questions on IRC or SO are met with knee-jerk "why not Linux?" responses. However, in this case our target market may be running Windows or Linux servers. Is this also the case in Django land? Is it possible, but with rubbish performance? Is it possible, but with lots of pain? Or is it seen as reasonable and not treated as a completely stupid idea to not want to run Linux?

    Read the article

  • Django templates crashes with no sense

    - by user233323
    Hello, I'm trying to use the Google Visualization API along with the Django template system. I got an error that I don't know how to fix. The error is the following:

    invalid_block_tag raise self.error(token, "Invalid block tag: '%s'" % command)
    django.template.TemplateSyntaxError: Invalid block tag: 'endfor'

    The code is:

    function drawChart() {
        var data = new google.visualization.DataTable();
        data.addColumn('date', 'time');
        data.addColumn('number', 'x');
        data.addColumn('number', 'y');
        data.addColumn('number', 'z');
        data.addRows([
            {% for d in datos &}
            [new Date({{d.instante|date:"Y, m, d, H, i, s"}}), {{d.x}}, {{d.y}}, {{d.z}}]
            {% if not forloop.last %},{% endif %}
        ]);
        {% endfor %}
        var chart = new google.visualization.AnnotatedTimeLine(document.getElementById('chart_div'));
        chart.draw(data, {displayAnnotations: true});
    }

    Thank you all!

    Read the article

  • Best practice for Python & Django constants

    - by Dylan Klomparens
    I have a Django model that relies on a tuple. I'm wondering what the best practice is for referring to constants within that tuple in my Django program. Here, for example, I'd like to specify "default=0" as something that is more readable and does not require commenting. Any suggestions?

    Status = (
        (-1, 'Cancelled'),
        (0, 'Requires attention'),
        (1, 'Work in progress'),
        (2, 'Complete'),
    )

    class Task(models.Model):
        status = models.IntegerField(choices=Status, default=0)  # Status is 'Requires attention' (0) by default.

    EDIT: If possible I'd like to avoid using a number altogether. Somehow using the string 'Requires attention' instead would be more readable.
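    (One hedged possibility, with purely illustrative names: hoist the integers into module-level constants and build the choices tuple from them, so default= and any comparisons read as words rather than magic numbers. The EDIT about avoiding numbers entirely would need a different, CharField-based approach.)

    from django.db import models

    # Named status constants -- the names are assumptions for illustration.
    CANCELLED = -1
    REQUIRES_ATTENTION = 0
    WORK_IN_PROGRESS = 1
    COMPLETE = 2

    STATUS_CHOICES = (
        (CANCELLED, 'Cancelled'),
        (REQUIRES_ATTENTION, 'Requires attention'),
        (WORK_IN_PROGRESS, 'Work in progress'),
        (COMPLETE, 'Complete'),
    )

    class Task(models.Model):
        status = models.IntegerField(choices=STATUS_CHOICES, default=REQUIRES_ATTENTION)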

    Read the article

  • Django deployment: PIL and virtualenv problem

    - by AndriJan
    Hey guys, I'm deploying a Django site on my Vserver (Debian Lenny) and I'm having a problem with PIL. I'm using virtualenv as well. When I'm in the virtualenv and type pip install -U PIL, everything installs fine and I get this:

    *** TKINTER support not available
    --- JPEG support available
    --- ZLIB (PNG/ZIP) support available
    *** FREETYPE2 support not available
    *** LITTLECMS support not available

    And when I go into the shell (python manage.py shell) and type from PIL import Image I get no error. But when I use it in the Django project (uploading an image in the admin, for example) I just get No module named PIL. I don't think it's a problem with the model because it works fine on the development machine, but here is part of the class:

    class Category(models.Model):
        name = models.CharField(max_length=255, verbose_name="Name")
        logo = models.ImageField(upload_to='images/category/', blank=True, null=True, verbose_name="Logo")

    I'm going out of my mind about this. I feel like this is a very common issue but I've been trying to google this all day with no luck. Thanks in advance, AndriJan
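    (A hedged sketch, assuming the site is served by mod_wsgi under Apache: the server process does not activate the virtualenv, so the WSGI entry point has to add the virtualenv's site-packages - where PIL was installed - to sys.path before Django loads. All paths are placeholders, not taken from the question; a similar adjustment applies to other deployment setups.)

    import os
    import site
    import sys

    # Hypothetical locations of the virtualenv and the project.
    site.addsitedir('/srv/myenv/lib/python2.5/site-packages')
    sys.path.insert(0, '/srv/myproject')

    os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'

    import django.core.handlers.wsgi
    application = django.core.handlers.wsgi.WSGIHandler()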

    Read the article

  • Django auth without "auth_*" tables

    - by Travis Jensen
    We would like to use our own tables for user management instead of the Django "auth" tables. We already have database tables that include all of the relevant information our application needs but it isn't in the Django format. We would prefer not to have the information duplicated in two tables. We would like to utilize the auth package, though, as there is some very nice functionality that we don't want to replicate. I realize we could build our own auth backend, but that doesn't, as far as I can tell, remove the need for two sets of tables in this case. Am I correct in assuming that we cannot do this? I have found no docs that discuss how to modify the underlying model that the auth package is using. The backend simply pre-populates the user object that would eventually be saved in the auth tables. Thanks!
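    (For reference, a hedged sketch of what such a backend could look like; LegacyUser and its check_password helper are hypothetical stand-ins for the existing tables. As the question suspects, the backend still hands a django.contrib.auth User back to the framework, so some mirroring into the auth tables remains.)

    from django.contrib.auth.models import User
    from myapp.models import LegacyUser  # hypothetical model mapped onto the existing table

    class LegacyBackend(object):
        def authenticate(self, username=None, password=None):
            try:
                legacy = LegacyUser.objects.get(username=username)
            except LegacyUser.DoesNotExist:
                return None
            if not legacy.check_password(password):  # hypothetical helper on the legacy model
                return None
            # A minimal auth User is still created so sessions and permissions keep working.
            user, created = User.objects.get_or_create(username=username)
            return user

        def get_user(self, user_id):
            try:
                return User.objects.get(pk=user_id)
            except User.DoesNotExist:
                return None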

    Read the article

  • async handler deleted by the wrong thread in django

    - by user3480706
    I run this algorithm in my Django application. When I run it several times from my GUI, the local Django server stops and I get this error:

    Exception RuntimeError: RuntimeError('main thread is not in main loop',) in ignored
    Tcl_AsyncDelete: async handler deleted by the wrong thread
    Aborted (core dumped)

    Code:

    print "Learning the sin function"
    network = MLP.MLP(2,10,1)
    samples = np.zeros(2000, dtype=[('x', float, 1), ('y', float, 1)])
    samples['x'] = np.linspace(-5,5,2000)
    samples['y'] = np.sin(samples['x'])
    #samples['y'] = np.linspace(-4,4,2500)
    for i in range(100000):
        n = np.random.randint(samples.size)
        network.propagate_forward(samples['x'][n])
        network.propagate_backward(samples['y'][n])
    plt.figure(figsize=(10,5))
    # Draw real function
    x = samples['x']
    y = samples['y']
    #x=np.linspace(-6.0,7.0,50)
    plt.plot(x,y,color='b',lw=1)
    samples1 = np.zeros(2000, dtype=[('x1', float, 1), ('y1', float, 1)])
    samples1['x1'] = np.linspace(-4,4,2000)
    samples1['y1'] = np.sin(samples1['x1'])
    # Draw network approximated function
    for i in range(samples1.size):
        samples1['y1'][i] = network.propagate_forward(samples1['x1'][i])
    plt.plot(samples1['x1'],samples1['y1'],color='r',lw=3)
    plt.axis([-2,2,-2,2])
    plt.show()
    plt.close()
    return HttpResponseRedirect('/charts/charts')

    How can I fix this error? I need quick help.
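    (A hedged guess at a workaround rather than a confirmed fix: the Tcl/Tk messages usually mean matplotlib is using its interactive Tk backend inside a web-server thread. Below is a sketch of the same kind of plot using the non-interactive Agg backend and writing the image into the response instead of calling plt.show(); the view name and URL are illustrative.)

    import matplotlib
    matplotlib.use('Agg')            # must be set before pyplot is imported
    import matplotlib.pyplot as plt
    import numpy as np
    from django.http import HttpResponse

    def sine_chart(request):
        x = np.linspace(-5, 5, 2000)
        fig = plt.figure(figsize=(10, 5))
        plt.plot(x, np.sin(x), color='b', lw=1)
        response = HttpResponse(content_type='image/png')
        fig.savefig(response, format='png')   # write the PNG straight into the response
        plt.close(fig)
        return response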

    Read the article

  • Apache/Django subdomains problem

    - by Thomas
    Now I have an Apache configuration which works only with the localhost domain (http://localhost/):

    Alias /media/ "/sciezka/do/instalacji/django/contrib/admin/media/"
    Alias /site_media/ "/sciezka/do/plikow/site_media/"

    <Location "/">
        SetHandler python-program
        PythonHandler django.core.handlers.modpython
        SetEnv DJANGO_SETTINGS_MODULE settings
        PythonPath "['/thomas/django_projects/project'] + sys.path"
        PythonDebug On
    </Location>

    <Location "/site_media">
        SetHandler none
    </Location>

    How can I make it work for subdomains like pl.localhost or uk.localhost? These subdomains should display the same page as the main domain (localhost). Second question: is it possible to change the default localhost address (http://localhost/) to (http://localhost.com/), (http://www.localhost.com/) or something else?

    Read the article

  • Django Auth Model Issue - AUTH_USER_MODEL Not Installed

    - by Ian Warner
    Trying to debug this error while getting a Django project running:

    ImproperlyConfigured: AUTH_USER_MODEL refers to model 'accounts.User' that has not been installed

    It happens when running python manage.py migrate. I must reiterate that I am in no way a Python or Django expert - I have simply inherited someone else's project that I am trying to get running for the team here. I have followed the steps to install the required Postgres modules (including South) and created the database for Postgres. Any help appreciated on how to debug this. settings/base.py contains:

    INSTALLED_APPS = DJANGO_APPS + THIRD_PARTY_APPS + LOCAL_APPS

    LOCAL_APPS = (
        'apps.core',
        'apps.accounts',
        'apps.project_tool',
        'apps.internal',
        'apps.external',
    )

    So apps.accounts exists - but the setting asks for AUTH_USER_MODEL = 'accounts.User' - should it be AUTH_USER_MODEL = 'apps.accounts.User'?

    Read the article

  • Problems with Snow Leopard, Django & PIL

    - by Cato Johnston
    Hi, I am having some trouble getting Django & PIL to work properly since upgrading to Snow Leopard. I have installed freetype, libjpeg and then PIL, which tells me:

    --- TKINTER support ok
    --- JPEG support ok
    --- ZLIB (PNG/ZIP) support ok
    --- FREETYPE2 support ok

    but when I try to upload a JPEG through the Django admin interface I get: "Upload a valid image. The file you uploaded was either not an image or a corrupted image." It works fine with PNG files. Any ideas?

    Read the article

  • Rate limiting Django admin login with Nginx to prevent dictionary attack

    - by shreddies
    I'm looking into the various methods of rate limiting the Django admin login to prevent dictionary attacks. One solution is explained here: simonwillison.net/2009/Jan/7/ratelimitcache/ However, I would prefer to do the rate limiting at the web server side, using Nginx. Nginx's limit_req module does just that - allowing you to specify the maximum number of requests per minute, and sending a 503 if the user goes over: http://wiki.nginx.org/NginxHttpLimitReqModule Perfect! I thought I'd cracked it until I realised that Django admin's login page is not in a consistent place, eg /admin/blah/ gives you a login page at that URL, rather than bouncing to a standard login page. So I can't match on the URL. Can anyone think of another way to know that the admin page was being displayed (regexp the response HTML?)
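    (If matching the URL in nginx proves too fiddly, a hedged alternative is to do the limiting in Django middleware keyed on the client IP; the thresholds, cache key and admin prefix below are illustrative assumptions, not a recommendation over the nginx approach.)

    from django.core.cache import cache
    from django.http import HttpResponse

    class AdminLoginRateLimitMiddleware(object):
        LIMIT = 10    # attempts
        WINDOW = 60   # seconds

        def process_request(self, request):
            # The admin login form posts back to whichever admin URL was requested,
            # which is why a single fixed URL cannot be matched -- so match the prefix.
            if request.method != 'POST' or not request.path.startswith('/admin/'):
                return None
            ip = request.META.get('REMOTE_ADDR', 'unknown')
            key = 'admin-login-%s' % ip
            attempts = cache.get(key, 0)
            if attempts >= self.LIMIT:
                response = HttpResponse('Too many login attempts, try again later.')
                response.status_code = 503
                return response
            cache.set(key, attempts + 1, self.WINDOW)
            return None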

    Read the article

  • Django ImageField validation & PIL

    - by Zayatzz
    Hello. On Sunday I had problems with Python modules when I installed Stackless Python. Now I have compiled and installed setuptools and python-mysqldb, and I got my Django project up and running again (I also reinstalled Django 1.1). Then I compiled and installed jpeg, freetype2 and PIL. I also started using mod_wsgi instead of mod_python. But when uploading an ImageField in a form I get the validation error: "Upload a valid image. The file you uploaded was either not an image or a corrupted image." Searchmonkey shows that it comes from the ImageField validation in field.py. Before raising this error it imports Image from PIL, opens the file and verifies it. I tried importing PIL from the Python prompt manually - it worked just fine. Same with Image.open and Image.verify. So what could be causing this problem? Alan

    Read the article

  • Django BigInteger auto-increment field as primary key?

    - by Alex Letoosh
    Hi all, I'm currently building a project which involves a lot of collective intelligence. Every user visiting the web site gets a unique profile created, and their data is later used to calculate the best matches for themselves and other users. By default, Django creates an INT(11) id field to handle model primary keys. I'm concerned with this overflowing very quickly (i.e. ~2.4b devices visiting the page without a prior cookie set up). How can I change it to be represented as BIGINT in MySQL and long() inside Django itself? I've found I could do the following (http://docs.djangoproject.com/en/dev/ref/models/fields/#bigintegerfield):

    class MyProfile(models.Model):
        id = BigIntegerField(primary_key=True)

    But is there a way to make it auto-increment, like usual id fields? Additionally, can I make it unsigned so that I get more space to fill in? Thanks!
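    (A hedged sketch of the usual workaround from that era: subclass AutoField and override db_type so MySQL gets a BIGINT AUTO_INCREMENT column. The engine check and class names here are assumptions; an unsigned column would additionally mean editing the type string and keeping any foreign keys pointing at it in the same type.)

    from django.db import models

    class BigAutoField(models.AutoField):
        def db_type(self, connection):
            # Only swap the column type on MySQL; fall back to the default elsewhere.
            if 'mysql' in connection.settings_dict['ENGINE']:
                return 'bigint AUTO_INCREMENT'
            return super(BigAutoField, self).db_type(connection)

    class MyProfile(models.Model):
        id = BigAutoField(primary_key=True)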

    Read the article

< Previous Page | 61 62 63 64 65 66 67 68 69 70 71 72  | Next Page >