Search Results

Search found 33257 results on 1331 pages for 'django database'.


  • Django admin interface upload failing on request data read error

    - by Jake
    Hi all, this is an updated version of an old question I asked. I've now done a lot more testing, plus the old question got hijacked. I'm getting a request data read error when trying to upload files to the Django admin interface. Files under about 150k work, but bigger files always fail, almost always at around 192k (that's 3 chunks) completed, sometimes at around 160k. The exception I get is below:

        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 405, in read
            return self._file.read(num_bytes)
        IOError: request data read error

    I've tried Chrome and Firefox on Windows and Firefox on Mac; same results. I can upload to other sites, so I don't think it's my connection. I'm running Python 2.4, Django 1.1 and mod_wsgi on CentOS (a Media Temple DV server). Locally, with the Django development server, it's fine. Everything I've found on this issue says it's a mod_python issue and that changing to mod_wsgi will fix it, but I am already running mod_wsgi. Can anyone help?


  • Writing good tests for Django applications

    - by Ludwik Trammer
    I've never written any tests in my life, but I'd like to start writing tests for my Django projects. I've read some articles about tests and decided to try to write some tests for an extremely simple Django app, for a start. The app has two views (a list view and a detail view) and a model with four fields:

        class News(models.Model):
            title = models.CharField(max_length=250)
            content = models.TextField()
            pub_date = models.DateTimeField(default=datetime.datetime.now)
            slug = models.SlugField(unique=True)

    I would like to show you my tests.py file and ask: Does it make sense? Am I even testing for the right things? Are there best practices I'm not following that you could point me to? My tests.py (it contains 11 tests):

        # -*- coding: utf-8 -*-
        from django.test import TestCase
        from django.test.client import Client
        from django.core.urlresolvers import reverse
        import datetime
        from someproject.myapp.models import News

        class viewTest(TestCase):
            def setUp(self):
                self.test_title = u'Test title: bareksc'
                self.test_content = u'This is a content 156'
                self.test_slug = u'test-title-bareksc'
                self.test_pub_date = datetime.datetime.today()
                self.test_item = News.objects.create(
                    title=self.test_title,
                    content=self.test_content,
                    slug=self.test_slug,
                    pub_date=self.test_pub_date,
                )
                client = Client()
                self.response_detail = client.get(self.test_item.get_absolute_url())
                self.response_index = client.get(reverse('the-list-view'))

            def test_detail_status_code(self):
                """HTTP status code for the detail view"""
                self.failUnlessEqual(self.response_detail.status_code, 200)

            def test_list_status_code(self):
                """HTTP status code for the list view"""
                self.failUnlessEqual(self.response_index.status_code, 200)

            def test_list_numer_of_items(self):
                self.failUnlessEqual(len(self.response_index.context['object_list']), 1)

            def test_detail_title(self):
                self.failUnlessEqual(self.response_detail.context['object'].title, self.test_title)

            def test_list_title(self):
                self.failUnlessEqual(self.response_index.context['object_list'][0].title, self.test_title)

            def test_detail_content(self):
                self.failUnlessEqual(self.response_detail.context['object'].content, self.test_content)

            def test_list_content(self):
                self.failUnlessEqual(self.response_index.context['object_list'][0].content, self.test_content)

            def test_detail_slug(self):
                self.failUnlessEqual(self.response_detail.context['object'].slug, self.test_slug)

            def test_list_slug(self):
                self.failUnlessEqual(self.response_index.context['object_list'][0].slug, self.test_slug)

            def test_detail_template(self):
                self.assertContains(self.response_detail, self.test_title)
                self.assertContains(self.response_detail, self.test_content)

            def test_list_template(self):
                self.assertContains(self.response_index, self.test_title)


  • AddThis Social SignIn and Django

    - by piokuc
    I am developing a Django website. I've been using django-registration for user registration so far, but I would really like to allow users to log in to my site using their Facebook, Twitter, Google, etc. accounts. I am using AddThis sharing buttons, and I just noticed they introduced a social sign-in solution. The idea seems great: you integrate your authentication system with their service once, and your users can log in via all of the popular social networking sites. Has anybody integrated the AddThis social sign-in plugin with a Django website? How can you use it alongside django-registration? Are there any similar, alternative solutions?


  • Django admin: output extra HTML in a ModelAdmin

    - by VoteyDisciple
    Ultimately, I want to add an <iframe> to the display of a particular model on Django's admin page. Django is already rendering the form for this model correctly, but I want to add this <iframe> in addition to Django's form. The src attribute needs to involve the primary key for the currently-displayed record. I've learned how to properly override the change_form.html template through Django's documentation, and I can add markup to the right block, but I can't figure out how to access the primary key value. (No amount of determined Googling has helped at all.) Alternatively, is there a direct way to specify that I want to produce extra output in my ModelAdmin definition?
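    A minimal sketch of one way to get at the primary key (model name and preview URL are made up): override render_change_form on the ModelAdmin to drop the value into the template context, then reference it from the overridden change_form.html. The stock change form context also exposes the object itself as original, so {{ original.pk }} is often all the template needs.

        from django.contrib import admin
        from myapp.models import Report   # made-up model

        class ReportAdmin(admin.ModelAdmin):
            def render_change_form(self, request, context, add=False, change=False,
                                   form_url='', obj=None):
                # obj is the record being edited; it is None on the "add" page
                if obj is not None:
                    context['preview_url'] = '/reports/%d/preview/' % obj.pk  # made-up URL
                return super(ReportAdmin, self).render_change_form(
                    request, context, add=add, change=change, form_url=form_url, obj=obj)

        admin.site.register(Report, ReportAdmin)

    The change_form.html override can then use {{ preview_url }} (or {{ original.pk }} directly) inside the iframe's src attribute.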


  • Problem with anchor tags in Django after using lighttpd + fastcgi

    - by Drew A
    I just started using lighttpd and FastCGI for my Django site, but I've noticed my anchor links are no longer working. I use the anchor links for sorting the links on the page; for example, an anchor sorts links by the number of points (or votes) they have received. The code in the HTML template:

        ...
        {% load sorting_tags %}
        ...
        {% ifequal sort_order "points" %}
            {% trans "total points" %} {% trans "or" %} {% anchor "date" "date posted" %}
            {% order_by_votes links request.direction %}
        {% else %}
            {% anchor "points" "total points" %} {% trans "or" %} {% trans "date posted" %}
        ...

    The anchor link on "www.mysite.com/my_app/" for total points gets directed to "my_app/?sort=points", but the correct URL should be "www.mysite.com/my_app/?sort=points". All my other links work; the problem is specific to anchor links. The {% anchor %} tag is taken from django-sorting (the code can be found at http://github.com/directeur/django-sorting, specifically in django-sorting/templatetags/sorting_tags.py). Thanks in advance.


  • csrf error in django

    - by niklasfi
    Hello, I want to set up a login for my site. I basically copied and pasted the following bits together from the Django Book. However, I still get an error (CSRF verification failed. Request aborted.) when submitting my registration form. Can somebody tell me what raises this error and how to fix it? Here is my code:

    views.py:

        # Create your views here.
        from django import forms
        from django.contrib.auth.forms import UserCreationForm
        from django.http import HttpResponseRedirect
        from django.shortcuts import render_to_response

        def register(request):
            if request.method == 'POST':
                form = UserCreationForm(request.POST)
                if form.is_valid():
                    new_user = form.save()
                    return HttpResponseRedirect("/books/")
            else:
                form = UserCreationForm()
            return render_to_response("registration/register.html", {
                'form': form,
            })

    register.html:

        <html>
        <body>
        {% block title %}Create an account{% endblock %}
        {% block content %}
        <h1>Create an account</h1>
        <form action="" method="post">{% csrf_token %}
            {{ form.as_p }}
            <input type="submit" value="Create the account">
        </form>
        {% endblock %}
        </body>
        </html>
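    For what it is worth, the usual cause of this error on Django 1.2+ is that {% csrf_token %} only renders a token when the template is rendered with a RequestContext; render_to_response with a plain dictionary does not supply one. A minimal sketch of the adjusted view, assuming the rest of the setup stays the same:

        from django.contrib.auth.forms import UserCreationForm
        from django.http import HttpResponseRedirect
        from django.shortcuts import render_to_response
        from django.template import RequestContext

        def register(request):
            if request.method == 'POST':
                form = UserCreationForm(request.POST)
                if form.is_valid():
                    form.save()
                    return HttpResponseRedirect("/books/")
            else:
                form = UserCreationForm()
            # RequestContext runs the context processors, including the one
            # that makes {% csrf_token %} emit the hidden input.
            return render_to_response("registration/register.html",
                                      {'form': form},
                                      context_instance=RequestContext(request))

    On Django 1.3 and later, django.shortcuts.render(request, template, context) does the same thing in a single call.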


  • Django URL Conf Returns Incorrect "Current URL"

    - by natnit
    I have a django app that is mostly done, and the URLs work perfectly when I run it with the manage.py runserver command. However, I've recently tried to get it running via lighttpd, and many links have stopped working. For example: http://mysite.com/races/32 should work, but instead throws this error message.

        Page not found (404)
        Request Method: GET
        Request URL: http://mysite.com/races/32
        Using the URLconf defined in racetrack.urls, Django tried these URL patterns, in this order:
            ^admin/
            ^create/$
            ^races/$
            ^races/(?P<race_id>\d+)/$
            ^races/(?P<race_id>\d+)/manage/$
            ^races/(?P<text>\w+)/$
            ^user/(?P<kol_id>\d+)/$
            ^$
            ^login/$
            ^logout/$
        The current URL, 32, didn't match any of these.

    The request URL is accurate, but the last line (which displays the current URL) is giving 32 instead of races/32 as expected. Here is my urlconf:

        from django.conf.urls.defaults import *
        from django.contrib import admin
        admin.autodiscover()

        urlpatterns = patterns('racetrack.races.views',
            (r'^admin/', include(admin.site.urls)),
            (r'^create/$', 'create'),
            (r'^races/$', 'index'),
            (r'^races/(?P<race_id>\d+)/$', 'detail'),
            (r'^races/(?P<race_id>\d+)/manage/$', 'manage'),
            (r'^races/(?P<text>\w+)/$', 'index'),
            (r'^user/(?P<kol_id>\d+)/$', 'user'),
            # temporary for index page replace with welcome page
            (r'^$', 'index'),
        )

        urlpatterns += patterns('django.contrib.auth.views',
            (r'^login/$', 'login', {'template_name': 'races/login.html'}),
            (r'^logout/$', 'logout', {'next_page': '/'}),
        )

    Thank you.


  • Doing a count over a filter query efficiently in django

    - by apple_pie
    Hello, Django newbie here. I need to do a count over a certain filter in a Django model. If I do it like so:

        my_model.objects.filter(...).count()

    I'm guessing it does the SQL query that retrieves all the rows and only afterwards does the count. To my knowledge it's much more efficient to do the count without retrieving those rows, like "SELECT COUNT(*) FROM ...". Is there a way to do so in Django?
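    For what it is worth, QuerySet.count() already behaves the way the question hopes: it asks the database for COUNT(*) and never fetches the rows; it is len() on a queryset (or iterating it) that pulls everything back. A small sketch, with placeholder model and field names:

        from myapp.models import MyModel   # placeholder model for illustration

        # Issues roughly: SELECT COUNT(*) FROM myapp_mymodel WHERE status = 'active'
        n = MyModel.objects.filter(status='active').count()

        # By contrast, len() evaluates the queryset and loads every matching row first:
        # n = len(MyModel.objects.filter(status='active'))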


  • customizing Django look and feel in Python

    - by user248237
    I am learning Django and got it to work with WSGI. I'm following the tutorial here: http://docs.djangoproject.com/en/1.1/intro/tutorial01/ My question is: how can I customize the look and feel of Django? Is there a repository of templates that "look good", kind of like there is for WordPress, that I can start from? I find the tutorial counterintuitive in that it goes immediately toward customizing the admin page of Django, rather than the main pages visible to users of the site. Is there an example of a "typical" Django site, with a decent template, that I can look at and build on/modify? The polls application is again not very representative since it's so specialized. Any references on this would be greatly appreciated. Thanks.


  • How to migrate Django models from mysql to sqlite (or between any two database systems)?

    - by Daphna Shezaf
    I have a Django deployment in production that uses MySQL. I would like to do further development with SQLite, so I would like to import my existing data into an SQLite database. There is a shell script here to convert a general MySQL dump to SQLite, but it didn't work for me (apparently the general problem isn't easy). I figured doing this using the Django models must be much easier. How would you do this? Does anyone have a script to do this?
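    One ORM-level route is to serialize the data out of MySQL and load it into SQLite, which sidesteps converting SQL dialects entirely; in practice this is usually done with manage.py dumpdata, syncdb and loaddata, switching the database settings between the steps. A rough sketch of the same idea in code (the model import is a placeholder, and each app's models need the same treatment):

        from django.core import serializers
        from myapp.models import News            # placeholder; repeat per model

        # Step 1, with the MySQL settings active: write the data out as JSON.
        out = open('news.json', 'w')
        serializers.serialize('json', News.objects.all(), stream=out)
        out.close()

        # Step 2, after pointing settings at the SQLite file and running syncdb
        # to create the empty tables: load the objects back in.
        src = open('news.json')
        for deserialized in serializers.deserialize('json', src):
            deserialized.save()
        src.close()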


  • Oracle Database Security Protecting the Oracle IRM Schema

    - by Simon Thorpe
    Acquiring the Information Rights Management technology in 2006 was part of Oracle's strategic security vision, and IRM complements the overall Oracle security set of solutions nicely. A year ago I spoke about how Oracle has solutions that can help companies protect information throughout its entire life cycle. With our acquisition of Sun this set of solutions has solidified and has even extended down to the operating system and hardware level. Oracle can now offer customers technology that protects their data from the disk, through the database, to documents on the desktop!

    With the recent release of Oracle IRM 11g I was tasked to configure demonstration and evaluation environments, and I thought it would make a nice story to leverage some of the security features in the latest release of the Oracle Database. After building these environments I thought I would put together a simple video demonstrating how Database Advanced Security and Information Rights Management combined can provide a very secure platform for protecting your information. Have a look at the following, which highlights these database security options.

        - Transparent Data Encryption protecting the communication from the Oracle IRM server to the database server. Encryption techniques provide confidentiality and integrity of the data passing to and from the IRM service on the back end.
        - Transparent Data Encryption protecting the Oracle IRM database schema. Encryption is used to provide confidentiality of the IRM data whilst it resides at rest in the database table space.
        - Database Vault is used to ensure only the Oracle IRM service has access to query and update the information that resides in the database. This is an excellent method of ensuring that database administrators cannot look at or make changes to the Oracle IRM database whilst retaining their ability to administrate the database. The last thing you want after deploying an IRM solution is for a curious or unhappy DBA to run a query that grants them rights to your company financial data or documents pertaining to a merger or acquisition.


  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table.  Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult.  What these users really need is a point and click approach that minimizes the learning curve for the data integration tool.  This is what CSVexpress (www.CSVexpress.com) is all about!  It is based on expressor Studio, a data integration tool I’ve been reviewing over the last several months.

    With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators.  There’s no need to learn the intricacies of the data integration tool or to write code.  Let’s look at an example.

    Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary.

        EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY
        102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000"
        101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000"
        103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000"
        304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000"
        333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000"
        100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000"
        334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000"
        400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000"

    Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping.  In moving this data to a database table I want to express the dates using a format that includes the century since it’s obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character.  Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio.  Directives for these modifications are included in the description of the incoming data.

    Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored.  For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information.  With expressor Studio, I use wizards to create these artifacts.

    First click New Connection > File Connection in the Home tab of expressor Studio’s ribbon bar, which starts the File Connection wizard.  In the first window, I enter the path to the directory that contains the input file.  Note that the file connection artifact only specifies the file system location, not the name of the file.  Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact.
    To create the Database Connection artifact, I must know the location, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table.  To use expressor Studio’s features to the fullest, this account should also have the authority to create a table.

    I click New Connection > Database Connection in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard.  expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the “Supplied database drivers” drop down control.  If my desired RDBMS isn’t listed, I can optionally use an existing ODBC DSN by selecting the “Existing DSN” radio button.  In the following window, I enter the connection details.  With Microsoft SQL Server, I may choose to use Windows Authentication rather than account credentials.  After clicking Next, I enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact.

    Now I create a schema artifact, which describes the structure of the file data.  When expressor reads a file, all data fields are typed as strings.  In some use cases this may be exactly what is needed and there is no need to edit the schema artifact.  But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and department IDs to integers, and convert the salary to a decimal value.

    Again a wizard is used to create the schema artifact.  I click New Schema > Delimited Schema in the Home tab of expressor Studio’s ribbon bar, which starts the Delimited Schema wizard.  In the first window, I click Get Data from File, which then displays a listing of the file connections in the project.  When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard.  I now view the file’s content and confirm that the appropriate delimiter characters are selected in the “Field Delimiter” and “Record Delimiter” drop down controls; then I click Next.

    Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking “Set All Names from Selected Row.”  Alternatively, I could enter a different identifier into the Field Details > Name text box.  I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact.

    Now I open the schema artifact in the schema editor.  When I first view the schema’s content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file.  To change an attribute’s name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Attribute window; I can change the attribute name and select the desired type from the “Data type” drop down control.  In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).
    Then for the EmployeeID and DepartmentID attributes, I select Integer as the data type; for the StartingDate and TerminationDate attributes, I select Datetime as the data type; and for the FinalSalary attribute, I select the Decimal type.

    But I can do much more in the schema editor.  For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications; a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999.  I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 11:59 as the termination date).

    As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file.  I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Mapping window in which I either enter, or select, a format that describes how the datetime values are represented in the file.  Note the use of Y01 as the syntax for the year.  This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century.  As each datetime value is read from the file, the year values are transformed into century and year values.  For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes.

    And now to the Salary attribute.  I open its mapping and in the Edit Mapping window select the Currency tab and the “Use currency” check box.  This indicates that the file data will include the dollar sign (or in Europe the Pound or Euro sign), which should be removed.  And on the Grouping tab, I select the “Use grouping” checkbox and enter 3 into the “Group size” text box, a comma into the “Grouping character” text box, and a decimal point into the “Decimal separator” character text box.  These entries allow the string to be properly converted into a decimal value.

    By making these entries into the schema that describes my input file, I’ve specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself.

    Assembling the data integration application is simple.  Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator.

    Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio.  For each property, I can select an appropriate entry from the corresponding drop down control.  Clicking on the button to the right of the “File name” text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file.  I indicate also that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped.

    I then select the Write Table operator and in its Properties panel specify the database connection, normal for the “Mode,” and the “Truncate” and “Create Missing Table” options.
    If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator.

    The last task needed to complete the application is to create the schema artifact used by the Write Table operator.  This is extremely easy as another wizard is capable of using the schema artifact assigned to the Read File operator to create a schema artifact for the Write Table operator.  In the Write Table Properties panel, I click the drop down control to the right of the “Schema” property and select “New Table Schema from Upstream Output…” from the drop down menu.

    The wizard first displays the table description and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist.  The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection.  The fourth screen gives me the opportunity to fine tune the table’s description.  In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the FinalSalary column.  I also provide the name for the table.

    This completes development of the application.  The entire application was created through the use of wizards and the required data transformations specified through simple constraints and specifications rather than through coding.  To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials.  expressor Studio is as close to a point and click data integration tool as one could want and I urge you to try this product if you have a need to move data between files or from files to database tables.

    Check out CSVexpress in more detail.  It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)
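    As an aside, for readers who want to see the transformations themselves rather than the tool's dialogs, here is a rough plain-Python equivalent of what the schema mappings above configure (century-aware date parsing, dropping the currency sign and digit grouping). It is only an illustration, not part of expressor Studio, and the file name and column mapping are assumptions.

        import csv
        from datetime import datetime
        from decimal import Decimal

        def parse_salary(text):
            # "$85,000" -> Decimal("85000"): strip the currency sign and grouping commas
            return Decimal(text.replace('$', '').replace(',', ''))

        def parse_date(text, fmt):
            # %y turns the 2-digit year into a 4-digit year using a century cutoff
            return datetime.strptime(text, fmt)

        reader = csv.DictReader(open('terminated_employees.csv', 'rb'))
        for row in reader:
            record = {
                'EmployeeID': int(row['EMP_ID']),
                'StartingDate': parse_date(row['STRT_DATE'], '%d-%b-%y'),
                'TerminationDate': parse_date(row['END_DATE'], '%d-%b-%y %H:%M'),
                'JobDescription': row['JOB_ID'],
                'DepartmentID': int(row['DEPT_ID']),
                'FinalSalary': parse_salary(row['SALARY']),
            }
            # record now holds typed values ready to INSERT into the target table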


  • Database with "Open Schema" - Good or Bad Idea?

    - by Claudiu
    The co-founder of Reddit gave a presentation on issues they had while scaling to millions of users. A summary is available here. What surprised me is point 3: Instead, they keep a Thing Table and a Data Table. Everything in Reddit is a Thing: users, links, comments, subreddits, awards, etc. Things keep common attributes like up/down votes, a type, and creation date. The Data table has three columns: thing id, key, value. There is a row for every attribute: a row for title, url, author, spam votes, etc. When they add new features they don't have to worry about the database anymore; they don't have to add new tables for new things or worry about upgrades.

    This seems like a terrible idea to me, but it seems to have worked out for Reddit. Is it a good idea in general, though? Or is it a peculiarity of Reddit that happened to work out for them?
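    To make the trade-off concrete, this is roughly what that "thing/data" layout looks like expressed as Django models (names are illustrative, not Reddit's actual schema). Reassembling a single object means an extra query or join over the key/value rows, and the database can no longer type-check or constrain individual attributes, which is the price paid for never having to alter tables:

        from django.db import models

        class Thing(models.Model):
            kind = models.CharField(max_length=30)      # 'user', 'link', 'comment', ...
            ups = models.IntegerField(default=0)
            downs = models.IntegerField(default=0)
            created = models.DateTimeField(auto_now_add=True)

        class Data(models.Model):
            thing = models.ForeignKey(Thing)
            key = models.CharField(max_length=50)        # 'title', 'url', 'author', ...
            value = models.TextField()

        def attributes(thing):
            # every attribute lookup goes through the key/value table
            return dict(thing.data_set.values_list('key', 'value'))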


  • should this database table be normalized?

    - by oo
    I have taken over a database that stores fitness information, and we were having a debate about a certain table and whether it should stay as one table or get broken up into three tables. Today, there is one table called workouts that has the following fields: id, exercise_id, reps, weight, date, person_id. So if I did 3 sets each of 2 different exercises on one day, I would have 6 records in that table for that day. For example:

        id, exercise_id, reps, weight, date, person_id
        1, 1, 10, 100, 1/1/2010, 10
        2, 1, 10, 100, 1/1/2010, 10
        3, 1, 10, 100, 1/1/2010, 10
        4, 2, 10, 100, 1/1/2010, 10
        5, 2, 10, 100, 1/1/2010, 10
        6, 2, 10, 100, 1/1/2010, 10

    So the question is, given that there is some redundant data (date, person_id, exercise_id) in multiple records, should this be normalized to three tables?

        WorkoutSummary:
            - id
            - date
            - person_id
        WorkoutExercise:
            - id
            - workout_id (foreign key into WorkoutSummary)
            - exercise_id
        WorkoutSets:
            - id
            - workout_exercise_id (foreign key into WorkoutExercise)
            - reps
            - weight

    I would guess the downside is that queries would be slower after this refactoring, as we would now need to join three tables to do the same query that had no joins before. The benefit of the refactoring is that it allows us, in the future, to add new fields at the workout summary level or the exercise level without adding more duplication. Any feedback on this debate?
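    For reference, this is what the proposed three-table split looks like as Django models, together with the double join that the normalized design requires for a query the flat table answers directly (field names follow the tables sketched above):

        from django.db import models

        class WorkoutSummary(models.Model):
            date = models.DateField()
            person_id = models.IntegerField()

        class WorkoutExercise(models.Model):
            workout = models.ForeignKey(WorkoutSummary)
            exercise_id = models.IntegerField()

        class WorkoutSet(models.Model):
            workout_exercise = models.ForeignKey(WorkoutExercise)
            reps = models.IntegerField()
            weight = models.IntegerField()

        # "All sets person 10 did on 2010-01-01" now spans two joins instead of one table:
        sets = WorkoutSet.objects.filter(
            workout_exercise__workout__person_id=10,
            workout_exercise__workout__date='2010-01-01',
        )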


  • Database design grouping contacts by lists and companies

    - by Serge
    Hi, I'm wondering what would be the best way to group contacts by their company. Right now a user can group their contacts by custom-created lists, but I'd like to be able to group contacts by their company as well as store the contact's position (i.e. Project Manager of XYZ company). Database-wise, this is what I have for grouping contacts into lists:

        contact
            [id_contact] [int] PK NOT NULL,
            [lastName] [varchar] (128) NULL,
            [firstName] [varchar] (128) NULL,
            ......
        contact_list
            [id_contact] [int] FK,
            [id_list] [int] FK,
        list
            [id_list] [int] PK
            [id_user] [int] FK
            [list_name] [varchar] (128) NOT NULL,
            [description] [TEXT] NULL

    Should I implement something similar for grouping contacts by company? If so, how would I store the contact's position in that company, and how can I prevent data corruption if a user modifies a contact's company name? For instance, John Doe changed companies but the other co-workers are still in the old company. I doubt that will happen often (might not even happen at all), but better safe than sorry. I'm also keeping an audit trail, so in a way the contact would still need to be linked to the old company as well as the new one, but without confusing which company he's actually working at at the moment. I hope that made sense... Has anyone encountered such a problem?

    UPDATE: Would something like this make sense?

        contact_company
            [id_contact_company] [int] PK
            [id_contact] [int] FK
            [id_company] [int] FK
            [contact_title] [varchar] (128)
        company
            [id_company] [int] PK NOT NULL,
            [company_name] [varchar] (128) NULL,
            [company_description] [varchar] (300) NULL,
            [created_date] [datetime] NOT NULL

    This way a contact can work for more than one company, and contacts can be grouped by companies.


  • Apache cannot find mysql database modules

    - by user809857
    I've created a simple Django project and set up a MySQL database. My simple project just creates an entry in the database. The project works fine when I use the built-in development server provided by Django (runserver). But when I deployed the project on Apache and mod_wsgi (Ubuntu server), Django could not find 'books', which is in this case my table in the database. The MySQL database I use under runserver and Apache is exactly the same. I also rebuilt the database using Django's sqlall, validate and syncdb, but I still get the error. What could be wrong with what I'm doing? Thanks


  • Is backing up a MySQL database in Git a good idea?

    - by wobbily_col
    I am trying to improve the backup situation for my application. I have a Django application and a MySQL database. I read an article suggesting backing up the database in Git. On the one hand I like it, as it will keep a copy of the data and the code in sync. But Git is designed for code, not for data. As such it will be doing a lot of extra work diffing the MySQL dump on every commit, which is not really necessary. If I compress the file before storing it, will Git diff the files? (The dump file is currently 100 MB uncompressed, 5.7 MB when bzipped.) Edit: the code and database schema definitions are already in Git; it is really the data I am concerned about backing up now.


  • Can django's auth_user.username be varchar(75)? How could that be done?

    - by perrierism
    Is there anything wrong with running ALTER TABLE on auth_user to make username a varchar(75) so it can fit an email address? What does that break, if anything? If you were to change auth_user.username to be varchar(75), where would you need to modify Django? Is it simply a matter of changing 30 to 75 in the source code?

        username = models.CharField(_('username'), max_length=30, unique=True,
            help_text=_("Required. 30 characters or fewer. Letters, numbers and @/./+/-/_ characters"))

    Or is there other validation on this field that would have to be changed, or any other repercussions to doing so? See comment discussion with bartek below regarding the reason for doing it.
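    Widening the column with ALTER TABLE mostly works at the storage level, but the 30-character limit is also baked into the model field shown above and into django.contrib.auth's forms, so each of those would need patching as well, and the patches would have to be redone on every Django upgrade. A common workaround that leaves auth_user untouched is to keep usernames short and let people authenticate with their e-mail address through a custom backend; a rough sketch (module path and settings wiring are assumptions):

        from django.contrib.auth.backends import ModelBackend
        from django.contrib.auth.models import User

        class EmailBackend(ModelBackend):
            """Authenticate against the e-mail address stored on auth_user."""
            def authenticate(self, username=None, password=None):
                try:
                    user = User.objects.get(email__iexact=username)
                except User.DoesNotExist:
                    return None
                if user.check_password(password):
                    return user
                return None

        # settings.py (assumed module path):
        # AUTHENTICATION_BACKENDS = (
        #     'myproject.auth_backends.EmailBackend',
        #     'django.contrib.auth.backends.ModelBackend',
        # )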


  • Adding links to full change forms for inline items in django admin?

    - by David Eyk
    I have a standard admin change form for an object, with the usual StackedInline forms for a ForeignKey relationship. I would like to be able to link each inline item to its corresponding full-sized change form, as the inline item has inlined items of its own, and I can't nest them. I've tried everything from custom widgets to custom templates, and can't make anything work. So far, the "solutions" I've seen in the form of snippets just plain don't seem to work for inlines. I'm getting ready to try some DOM hacking with jQuery just to get it working and move on. I hope I'm just missing something very simple, as this seems like such a simple task! Using Django 1.2.


  • How do I flag only one of the formsets in django admin?

    - by azuer88
    I have these (simplified) models:

        class Question(models.Model):
            question = models.CharField(max_length=60)

        class Choices(models.Model):
            question = models.ForeignKey(Question)
            text = models.CharField(max_length=60)
            is_correct = models.BooleanField(default=False)

    I've made Choices an inline of Question (in admin). Is there a way to make sure that only one Choice will have is_correct = True? Ideally, is_correct will be displayed as a radio button when it is displayed in the admin formset (TabularInline). My admin.py has:

        from django.contrib import admin

        class OptionInline(admin.TabularInline):
            model = Option
            extra = 5
            max_num = 5

        class QuestionAdmin(admin.ModelAdmin):
            inlines = [OptionInline, ]

        admin.site.register(QType)
        admin.site.register(Question, QuestionAdmin)
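    There is no built-in radio-group rendering across inline rows, but the "exactly one correct choice" rule can be enforced with a custom inline formset attached to the inline class. A minimal sketch against the simplified Choices model above (the admin.py in the question uses Option/OptionInline and QType, so the names would need adjusting; the import path is an assumption):

        from django import forms
        from django.contrib import admin
        from django.forms.models import BaseInlineFormSet
        from myapp.models import Question, Choices   # assumed module path

        class ChoicesFormSet(BaseInlineFormSet):
            def clean(self):
                super(ChoicesFormSet, self).clean()
                if any(self.errors):
                    return                      # let field-level errors surface first
                correct = 0
                for form in self.forms:
                    data = getattr(form, 'cleaned_data', None)
                    if not data or data.get('DELETE'):
                        continue
                    if data.get('is_correct'):
                        correct += 1
                if correct != 1:
                    raise forms.ValidationError('Mark exactly one choice as correct.')

        class ChoicesInline(admin.TabularInline):
            model = Choices
            formset = ChoicesFormSet
            extra = 5
            max_num = 5

        class QuestionAdmin(admin.ModelAdmin):
            inlines = [ChoicesInline]

        admin.site.register(Question, QuestionAdmin)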


  • How can I call model methods or properties from Django Admin?

    - by kg
    Is there a natural way to display model methods or properties in the Django admin site? In my case I have base statistics for a character that are part of the model, but other things, such as status effects, affect the total calculation for that statistic:

        class Character(models.Model):
            base_dexterity = models.IntegerField(default=0)

            @property
            def dexterity(self):
                total = self.base_dexterity
                total += sum(s.dexterity for s in self.status.all())
                return total

    It would be nice if I could display the total calculated statistic alongside the field to change the base statistic in the Change Character admin page, but it is not clear to me how to incorporate that information into the page.
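    One approach that works from Django 1.2 onward is readonly_fields, which accepts ModelAdmin methods as well as model fields, so the computed total can sit next to the editable base stat on the change page. A minimal sketch (import path assumed):

        from django.contrib import admin
        from myapp.models import Character   # assumed module path

        class CharacterAdmin(admin.ModelAdmin):
            # editable base value plus the read-only computed total on the same form
            fields = ('base_dexterity', 'total_dexterity')
            readonly_fields = ('total_dexterity',)

            def total_dexterity(self, obj):
                if obj is None:               # "add" page, nothing to compute yet
                    return '-'
                return obj.dexterity          # the property defined on the model
            total_dexterity.short_description = 'Dexterity (including status effects)'

        admin.site.register(Character, CharacterAdmin)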

