Search Results

Search found 33257 results on 1331 pages for 'django database'.


  • Apache/Django subdomains problem

    - by thomasgg
    I have an Apache configuration that works only with the localhost domain (http://localhost/):

        Alias /media/ "/sciezka/do/instalacji/django/contrib/admin/media/"
        Alias /site_media/ "/sciezka/do/plikow/site_media/"
        <Location "/">
            SetHandler python-program
            PythonHandler django.core.handlers.modpython
            SetEnv DJANGO_SETTINGS_MODULE settings
            PythonPath "['/thomas/django_projects/project'] + sys.path"
            PythonDebug On
        </Location>
        <Location "/site_media">
            SetHandler none
        </Location>

    How can I make it work for subdomains like pl.localhost or uk.localhost? These subdomains should display the same page as the main domain (localhost). Second question: is it possible to change the default localhost address (http://localhost/) to http://localhost.com/, http://www.localhost.com/, or something else?
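    One approach, as a minimal sketch: a name-based virtual host with a wildcard ServerAlias, assuming the hostnames resolve locally (for example via /etc/hosts entries), answers both questions at once.

        # /etc/hosts, so the browser can resolve the names:
        # 127.0.0.1  localhost.com www.localhost.com pl.localhost.com uk.localhost.com

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName localhost.com
            # Serve the same site for the www name and every subdomain:
            ServerAlias www.localhost.com *.localhost.com
            # ... the existing Alias / <Location> directives go here ...
        </VirtualHost>

    Every name matched by ServerAlias hits the same configuration, so pl.localhost.com and uk.localhost.com display the same page as the main domain.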

  • Django Dying on Shared Hosting Environment (Too Many MySQL Connections)

    - by Tom
    I've had a Django site up and running on HostGator (client requirement), following these instructions, for a few weeks now. I had seen two error emails about pages dying with (1040: Too many MySQL connections) but had never been able to recreate the problem. As of today, the site is completely unresponsive and all pages, even the static files, are dying with that error. Two questions: What can I do to fix this (other than caching more stuff)? Why would static files be dying like that? I can request them directly without a problem, so how are they getting run through Django? The shared hosting setup doesn't allow for a <Location> block, but there's a flag in the rewrite rule that says only requests for files that don't exist in the filesystem should be processed. All of my static files exist on the system, though they are symbolically linked files if it matters.
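    The "flag" referred to is presumably mod_rewrite's file-exists condition; a minimal sketch of the usual .htaccess shape on FastCGI shared hosts (the dispatcher name is a placeholder):

        RewriteEngine On
        # Only requests that do not match an existing file reach Django
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ mysite.fcgi/$1 [QSA,L]

    Worth knowing: the -f test stats the file and follows symlinks, so symlinked static files should still test as existing. If the condition ever fails to match, every static request falls through to Django, and each one consumes a MySQL connection, which matches the symptom described.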

  • Running django custom management commands with supervisord

    - by mfsaint
    I'd like to use supervisord to run some commands for my Django project, but I keep getting the following error in supervisor.log:

        2012-05-18 17:52:15,784 INFO spawnerr: can't find command 'source'

    If I remove the "source" command, the log shows the same error: can't find command 'python'. supervisord.conf excerpt:

        [program:django]
        directory=/home/mf/projects/djangopj/
        command=beanstalkd -l 127.0.0.1 -p 11300
        command=source /home/mf/virtualenvs/env/bin/activate
        command=python manage.py command1
        command=python manage.py command2
        user=mf
        autostart=true
        autorestart=true

    I tried removing the directory and adding the absolute path to the commands, but I kept getting the same error. I run supervisord with the following command:

        supervisord -c supervisord.conf -l supervisor.log
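    For context: supervisord runs exactly one command per [program] section and does not spawn a shell, so shell builtins like source can never work there, and later command= lines simply override earlier ones. A sketch of a working layout, reusing the paths from the question and assuming one section per long-running command:

        [program:beanstalkd]
        command=beanstalkd -l 127.0.0.1 -p 11300
        user=mf
        autostart=true
        autorestart=true

        [program:django-command1]
        directory=/home/mf/projects/djangopj/
        ; calling the virtualenv's interpreter directly replaces 'source activate'
        command=/home/mf/virtualenvs/env/bin/python manage.py command1
        user=mf
        autostart=true
        autorestart=true

    A matching [program:django-command2] section would cover the second command.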

  • Apache, Django with mod_wsgi, and large request buffering

    - by Mukul
    In my setup of Apache 2.2 MPM worker and Django 1.3 with mod_wsgi 2.8, I need to support large POST request payloads. The problem is that when there are many such simultaneous requests, Apache uses up all the memory in the system and then crashes. It seems that Apache is buffering the requests completely in memory before executing the WSGI handler and passing it the request. Is there any way to control request buffering in Apache? The log shows the following error whenever the crash happens:

        [Wed Jun 29 18:35:27 2011] [error] cgid daemon process died, restarting

    Here's my virtual host's configuration:

        <VirtualHost *:8080>
            ServerName example.com
            ErrorLog /var/log/apache2/error.log
            WSGIScriptAlias / <path to django.wsgi>
            WSGIPassAuthorization on
            WSGIDaemonProcess example.com
            WSGIProcessGroup example.com
            XSendFileAllowAbove on
            XSendFile on
        </VirtualHost>
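    One knob that definitely exists at the Apache level is a hard cap on the request body, which at least turns a memory blow-up into a clean 413 response. A minimal sketch (the 10 MB figure is an arbitrary assumption):

        <VirtualHost *:8080>
            # Reject request bodies larger than 10 MB (10485760 bytes)
            LimitRequestBody 10485760
            # ... rest of the configuration as above ...
        </VirtualHost>

    This caps rather than streams, so it is a guard rail against the crash, not a fix for the buffering itself.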

  • Nginx / uWsgi / Django site can handle more traffic with rewrite URL

    - by Ludo
    Hi there. I'm running a Django app, using uWSGI behind Nginx. I've been doing some performance tuning and load testing using ApacheBench and have discovered something unexpected which I wonder if someone could explain for me. In my Nginx config I have a rewrite directive which catches lots of different URL permutations and then forwards them to the canonical URL I wish to use, e.g. it traps www.mysite.com/whatever and www.mysite.co.uk/whatever and forwards them all to http://mysite.com/whatever. If I load test against any of the URLs with a redirect (i.e. NOT the canonical URL they are eventually forwarded to), it can serve 15000 concurrent connections without breaking a sweat. If I load test against the canonical URL, which the above tests would have been forwarded to anyway, it can't handle nearly as much: it will drop about 4000 of the 15000 requests, and can only handle about 9000 reliably. This is the command line I'm using to test:

        ab -c15000 -n15000 http://www.mysite.com/somepath/
        ab -c15000 -n15000 http://mysite.com/somepath/

    I've tried several different runs and it makes no difference which order I do them in. This doesn't make sense to me - I can understand why the requests involving a redirect might not handle quite so many concurrent connections, but it's happening the other way round. Can anyone explain? I'd really prefer it if the canonical URL were the one which could handle more traffic. I'll post my Nginx config below. Thanks loads for any help!

        server {
            server_name www.somesite.com somesite.net www.somesite.net somesite.co.uk www.somesite.co.uk;
            rewrite ^(.*) http://somesite.com$1 permanent;
        }
        server {
            root /home/django/domains/somesite.com/live/somesite/;
            server_name somesite.com somesite-live.myserver.somesite.com;
            access_log /home/django/domains/somesite.com/live/log/nginx.log;
            location / {
                uwsgi_pass unix:////tmp/somesite-live.sock;
                include uwsgi_params;
            }
            location /media {
                try_files $uri $uri/ /index.html;
            }
            location /site_media {
                try_files $uri $uri/ /index.html;
            }
            location = /favicon.ico {
                empty_gif;
            }
        }

  • Web Server for SVN+PHP+Django+Rails

    - by NetStudent
    Foreword: I am not asking for the differences between Nginx and Apache, nor do I want to start a "which one is better" discussion. I would like to ask for help with choosing the most adequate solution for this particular situation. I need to set up one or more SVN repositories accessible via HTTP, plus some PHP, Django and Ruby websites. However, since I only have 512 MB of RAM at my disposal, I fear that Apache would be too heavy a choice. On the other hand, I have heard that Nginx does not fully support SVN (WebDAV) and Django without reverse proxying to Apache. Is this still true? Should I go for Apache or Nginx alone? Or should I set up both and have Nginx handle static content while proxying to Apache for dynamic content?
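    If the hybrid route is chosen, the shape of the Nginx side is roughly this (a sketch only; the port, paths, and the assumption that Apache listens on 127.0.0.1:8080 are placeholders):

        server {
            listen 80;
            server_name example.com;

            # Nginx serves static files itself
            location /static/ {
                root /var/www;
            }

            # Everything else (SVN/WebDAV, PHP, Django, Rails) goes to Apache
            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }

    Apache can then be trimmed (few spare servers, only the modules actually needed) to fit inside the 512 MB budget.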

  • Django freezes when adding objects through the admin

    - by Quartz
    I have a Django 1.1 website running via Apache/mod_wsgi with a PostgreSQL 8.3.1 database. Recently, when I added objects through the admin interface, the connection froze up and I lost several worker processes, so I had to restart Apache. Upon trying to replicate this, I found that it only happens through the admin: if I go into the Django shell and issue the same insert, it works fine. Also, performing an UPDATE operation works without issues, so the problem is just with INSERTs. I've rebuilt indexes on PostgreSQL and run a full VACUUM. Error logs don't show anything, and I can't figure out for the life of me what's wrong. Anyone have any ideas?

  • Nginx config for Django and WordPress in a subdirectory

    - by Helmut
    I need to set up a Django site at the root of a domain, but then have a WordPress installation in a subdirectory (e.g. /blog/). How would one configure Nginx to do this? "Pretty" URLs have to work for WordPress as well. For Django I am using Gunicorn, which is already configured; from Nginx I would use "proxy_pass" to direct to that. PHP is run via FPM. Given the constraints above, how would I configure Nginx? Any help would be appreciated! Thanks.
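    A rough shape for this setup (only a sketch; the document root, the PHP-FPM address and the Gunicorn port are assumptions to be adjusted):

        server {
            listen 80;
            server_name example.com;
            root /var/www/example.com;

            # WordPress lives in /blog/; try_files gives it pretty permalinks
            location /blog/ {
                index index.php;
                try_files $uri $uri/ /blog/index.php?$args;
            }

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass 127.0.0.1:9000;  # PHP-FPM
            }

            # Everything else goes to Gunicorn/Django
            location / {
                proxy_pass http://127.0.0.1:8000;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    The /blog/ prefix and the .php regex take precedence over the catch-all / location for their respective requests, so Django only sees what WordPress doesn't claim.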

  • Django - Moving database from development to production servers

    - by Garfonzo
    I am working on a Django project with a MySQL backend. I'm curious about the best way to update a production server's database to reflect the changes made on the development server's database. When I develop now, I make some changes to a models.py file, then create a schemamigration using South. Sometimes I do several migrations across several apps within the main project folder before it's ready for the production database. This means that there are several migration files in the app/migrations/ folder created by South. So on the production server, how does one update the database to reflect all the changes made in development, without any data loss?
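    Since the migration files South writes are ordinary code, the usual pattern is to commit them and replay them on the server; a sketch of the round trip (app names are placeholders):

        # on the development machine, per model change
        python manage.py schemamigration myapp --auto
        python manage.py migrate myapp      # test locally

        # commit app/migrations/ to version control, deploy to production, then:
        python manage.py migrate            # applies all pending migrations, data intact

    South records which migrations have already run in its own history table, so rerunning migrate only applies the new ones.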

  • Optimising Database Mirroring over WAN

    - by blakmk
    I recently got asked by our network guys about bottlenecks in the WAN link used for mirroring to our DR site. They asked me to turn off encryption of database mirroring so that the Riverbed software they were using could optimise the packets sent over the WAN. I was a bit sceptical at first about the security risks, but it seems the Riverbed software has its own form of obfuscation, making the packets difficult to read. After reading an article by rusanu I realised that it could be done with minimal downtime, potentially reducing network traffic by 5-10% on its own. After turning off encryption I was pleasantly surprised to see that overall network traffic for mirroring dropped by a whopping 75%!
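    For reference, the switch itself is a one-liner per server; a sketch assuming the endpoint is named Mirroring (check yours in sys.database_mirroring_endpoints):

        -- run on each partner (and witness, if any)
        ALTER ENDPOINT Mirroring
            FOR DATABASE_MIRRORING (ENCRYPTION = DISABLED);

    Both partners must agree: with one side still at the default REQUIRED and the other DISABLED, the session cannot connect, which is why the change is staged across the partners in the right order.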

  • Oracle Database Appliance - How to Sell a Unique Product : Webcast Replay

    - by Cinzia Mascanzoni
    Learn about:
    • ODA Benefits: Fast, Easy, Cost Efficient, Highly Reliable
    • Feedback from early Customer Wins: What can we learn?
    • Objection Handling: Overcoming the most common customer questions
    • Going beyond the Database: The ODA Eco System for applications, backup & more

    If you missed the webcasts in April, go to the EMEA VAD Resource Center - Enablement Tab, click here and follow the instructions to access the replay.

  • I need an approach to the problem of preventing the insertion of duplicate records into the database

    - by Maurice
    Apologies if this question is asked on the incorrect "stack". A webservice that I call returns a list of data. The data from the webservice is updated periodically, so a call to the webservice done in one hour could return the same data as a call done an hour later. Also, the data is returned based on a start and end date. We have multiple users that can run the webservice search, and duplicate data is likely to be returned (especially for historical data). However, I don't want to insert this duplicate data in the database. I've created a db table in which the data is stored; the most important columns are:

        Id         int autoincrement PK
        Date       date not null    -- the date to which the data set belongs
        LastUpdate date not null    -- the date the data set was last updated
        UserName   varchar(50)      -- the name of the user doing the search

    I use SQL Server 2008 Express with C# 4.0 and Visual Studio 2010. Entity Framework is used as the ORM. If stored procedures could be avoided in the proposed solution, that would be a plus. Another way of interpreting what I'm asking a solution for is as follows: I have a million unique records in my table. A user does a new search. The search results from the user contain around 300k rows of data that are already in the db. An efficient solution to finding and inserting only the unique records is needed.
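    One low-ceremony approach is to let the database drop the duplicates itself via a unique index with IGNORE_DUP_KEY, so bulk inserts silently skip rows that already exist instead of failing. A sketch only: the table name and the choice of key columns are assumptions, and the key should be whatever combination actually identifies a data set.

        -- SQL Server: duplicate rows in an INSERT are silently discarded
        CREATE UNIQUE NONCLUSTERED INDEX ux_dataset_natural_key
            ON DataSetRows ([Date], LastUpdate, UserName)
            WITH (IGNORE_DUP_KEY = ON);

    This plays best with set-based loads (SqlBulkCopy or a plain INSERT ... SELECT); Entity Framework's row-by-row SaveChanges can complain when an insert affects zero rows, so staging the batch through a temp table and issuing one INSERT ... SELECT ... WHERE NOT EXISTS is a common alternative at the 300k scale.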

  • What relational database system should I learn? [closed]

    - by acidzombie24
    At the moment I know SQLite (my favorite) and MySQL (it's OK, though it annoys me), and I do not want to learn MS/T-SQL (a unique column only allows a single NULL row). I am thinking about learning a new database system. My requirements for it are:
    • Must allow multiple connections at once (read and write).
    • All of the data I choose must be ACID compliant.
    • Performance should be good. I have a 17 GB table in one project; it should perform well on reads and transactional writes. With MySQL it took hours to restore it, and there were no foreign keys on that specific table. It only finished within a workday because I found a suggestion to adjust a setting (I think it was the key buffer), and it still took hours.
    • Unique columns that allow more than one row to be NULL (see the sketch after this list). I shouldn't have to say it, but dammit MS.
    • Allows ongoing backups, something like 'binary logs': relatively small amounts of data I can grab and apply to my local db to keep it in sync with the one on the server.
    • Table joins. I'd rather not write a bunch of queries to simulate a join.

    What I would like but is not required:
    • Foreign keys (this may be a requirement later).
    • Open source.
    • Fair tool support, so I can measure queries, easily back up/restore, etc.
    • .NET and C (or C++) interface. (I've seen one that uses raw TCP with JSON, which was OK-ish.)
    • Good subquery support. Once I was working with an older version of MySQL (I believe <5.1, but it could have been 5.1) and I had to write many queries to do one query because it couldn't do subqueries, or maybe it couldn't do them efficiently and died because of memory limitations with a huge dataset.

    What db system should I learn?
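    On the NULLs-in-unique-columns point, the behaviour splits cleanly across engines; a small sketch of the standard behaviour versus the usual SQL Server workaround (table and column names are made up):

        -- PostgreSQL / SQLite / MySQL: a UNIQUE column admits any number of NULLs
        CREATE TABLE gadgets (serial_no VARCHAR(40) UNIQUE);
        INSERT INTO gadgets VALUES (NULL);
        INSERT INTO gadgets VALUES (NULL);  -- fine

        -- SQL Server treats NULL = NULL for UNIQUE, so the second insert fails;
        -- since SQL Server 2008 a filtered unique index restores standard behaviour:
        CREATE UNIQUE NONCLUSTERED INDEX ux_gadgets_serial
            ON gadgets (serial_no)
            WHERE serial_no IS NOT NULL;

    Of the systems that fit the rest of the list, PostgreSQL is the usual suggestion: MVCC for concurrent reads and writes, ACID transactions, WAL archiving for the binary-log-style incremental backups, and mature .NET (Npgsql) and C (libpq) interfaces.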

  • Looking to create website that can have custom GUI and database per user

    - by riley3131
    I have developed an MS Access database for a company to track data regarding the production of a certain commodity. It has many, many tables, forms, reports, etc. These were all done as the user requested and resemble the user's previous system, mostly printed worksheets and Excel workbooks. This has created a central location for all information and has allowed the company to compare data in a new way. I am now looking to do this for other companies, but would like to switch to a web application. Here is my question: what is the best way to create unique solutions for individual companies that can have around 100 users each? I would love to create one site that would serve all parties, but that would ruin the customizable nature of what I am developing. I love the ability to create reports, Excel sheets, PDFs, graphs, etc. with Access, but am tired of relying on my customers' software, servers, etc. I have some experience with WAMP, but I am far better at VBA. I was OK at PHP, and was getting a grasp on JavaScript a few years back. I am also trying to decide whether to go with WAMP or LAMP, if web is the best choice. Also, should I set up one site for all users that goes to company-specific pages after log-in, or individual sites for each company? Should I host or use a service?

  • Choosing the Database Solution for Large Data Application

    - by GµårÐïåñ
    I have been tasked to write an application that will be a combination of document and inventory management in VB.NET, which will be used to store document images in TIFF, PDF, XPS, TXT, DOC, PPT and so on as binary data that can be retrieved for viewing, printing and possibly OCR to be searchable, along with metadata such as sender, recipient, type of document, date, source, etc. So the table would probably be something like:

        DOC_NAME, DOC_DATE, NOTES, ... DOC_BINARY (where the actual document will be put inside)

    My concern is finding a database solution that will not become unstable due to size restrictions, record limitations and performance. Some of the options are MS SQL, SQL Express, SQLite, MySQL and Access. Now I can pretty much eliminate Access right off the bat, as it is just too limiting and not scalable, and I can further eliminate SQL Express because of the 2 GB limit, again for scalability. So that leaves me with MS SQL, SQLite and MySQL (although if anyone has other options they think would be good as well, please feel free to share them; by no means am I set on these only). So this brings me to what you think is the best option for what I have described. The goal is that the data is all in one place (a single file), which will make backup and portability easier. For small-volume usage pretty much any solution will hold for a while, but my goal is to think ahead and make sure it's able to withstand heavy large-volume usage as well. Another consideration is interoperability with .NET and the stability of such code, to avoid errors and memory leaks. Your feedback would be greatly appreciated.
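    Whichever engine wins, the shape of the table is much the same; a sketch in SQL Server dialect (names and sizes are assumptions, and on SQL Server 2008 the FILESTREAM attribute is worth a look for binaries this large):

        CREATE TABLE Documents (
            DocId     INT IDENTITY(1,1) PRIMARY KEY,
            DocName   NVARCHAR(255)  NOT NULL,
            DocDate   DATE           NOT NULL,
            Sender    NVARCHAR(100)  NULL,
            Recipient NVARCHAR(100)  NULL,
            DocType   NVARCHAR(50)   NULL,
            Notes     NVARCHAR(MAX)  NULL,
            DocBinary VARBINARY(MAX) NOT NULL  -- the document itself
        );

    Note the tension in the requirements: the single-file goal points at SQLite, while heavy multi-user volume points at a server engine, so the real decision is which of those two goals gives way.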

  • Speed up your Web system with an in-memory database! (Japanese webcast replay)

    - by Yusuke.Yamamoto
    Presented: 2010/11/04. Category: webcast replay. A Japanese-language OTN seminar on speeding up Web systems with the in-memory database Oracle TimesTen In-Memory Database, also introducing Oracle In-Memory Database Cache (Oracle TimesTen IMDB / Oracle IMDB Cache 11g). The replay and slides are available at:

        http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/TT11041100.wmv
        http://www.oracle.com/technology/global/jp/ondemand/otn-seminar/pdf/TimesTen_OrD_20101104_print.pdf

  • Good embedded database solution (like SQLite) for .Net

    - by vfilby
    I am looking for file-based storage solutions that I can use with a .NET project. They need to have a SQL-like interface for storing and retrieving data, have relatively little overhead, and must not require any additional components to be installed by the end user. I am hoping for a .dll that I can reference and use. Cool points awarded if it is closely tied to an ORM. My current favourite is SQLite; are there any better ones out there that I should know about? I have a (healthy?) bias against Access because I feel it is overcomplicated for what I need, but I am open to being convinced otherwise. PS: "No, there is nothing better than SQLite" is a perfectly good answer.

  • How to generate a dbml file from a Sybase database?

    - by 5YrsLaterDBA
    I think we may have trouble with our existing project. For various reasons we have to switch from SQL Server to Sybase SQL Anywhere 11, and we are now trying to find a way to continue using our existing LINQ code. We hope we can still use LINQ to SQL; if not, we hope we can use LINQ to Entities, otherwise we will have to change to ADO.NET. How do we generate a dbml file from SQL Anywhere 11? And after that, can we use SqlMetal to generate the .cs files?

  • Non-relational database modeling tool?

    - by Angel Escobedo
    Hey guys, please recommend some tools you have used successfully for DW, data mart, BI and non-relational modeling - for example, automatic creation of snowflake schemas, dimension and fact tables. Which tools feel familiar in terms of diagrams and surrogate keys, and have an option to export to or connect to SQL Server 2008? Thanks

  • Is it easy to switch from relational to non-relational databases with Rails?

    - by Tam
    Good day, I have been using Rails/MySQL for a while now, but I keep hearing about Cassandra, MongoDB, CouchDB and other document-store DBs/non-relational databases. I'm planning to explore them later, as they might be better alternatives for scalability. I'm planning to start an application soon. Will it make a difference to the Rails design if I move from a relational to a non-relational database? I know Rails migrations are database-agnostic, but I wasn't sure whether moving to a non-relational database would make a difference to the design or not.

  • database design to speed up hibernate querying of large dataset

    - by paddydub
    I currently have the tables below, representing a bus network, mapped in Hibernate and accessed from a Spring MVC based bus route planner. I'm trying to make my route planner application perform faster: I load all of these tables into Lists to perform the route planner logic. I would appreciate any ideas on how to improve performance, or any suggestions of another way to approach this problem of handling a large set of data.

    Coordinate Connections table (INT, INT, INT), containing 50,000 coordinate connections:

        ID  FROMCOORDID  TOCOORDID
        1   1            2
        2   1            17
        3   1            63
        4   1            64
        5   1            65
        6   1            95

    Coordinate table (INT, DECIMAL, DECIMAL), containing 4,700 coordinates:

        ID  LAT        LNG
        0   59.352669  -7.264341
        1   59.352669  -7.264341
        2   59.350012  -7.260653
        3   59.337585  -7.189798
        4   59.339221  -7.193582
        5   59.341408  -7.205888

    Bus Stop table (INT, INT, INT), containing 15,000 stops:

        STOPID      ROUTEID  COORDINATEID
        1000100001  100      17
        1000100002  100      18
        1000100003  100      19
        1000100004  100      20
        1000100005  100      21
        1000100006  100      22
        1000100007  100      23

    This is how long it takes to load all the data from each table:

        stop.findAll = 148ms, stops.size: 15670
        Hibernate: select coordinate0_.COORDINATEID as COORDINA1_2_, coordinate0_.LAT as LAT2_, coordinate0_.LNG as LNG2_ from COORDINATES coordinate0_
        coord.findAll = 51ms, coordinates.size: 4704
        Hibernate: select coordconne0_.COORDCONNECTIONID as COORDCON1_3_, coordconne0_.DISTANCE as DISTANCE3_, coordconne0_.FROMCOORDID as FROMCOOR3_3_, coordconne0_.TOCOORDID as TOCOORDID3_ from COORDCONNECTIONS coordconne0_
        coordinateConnectionDao.findAll = 238ms; coordConnectioninates.size: 48132

    Hibernate annotations:

        @Entity
        @Table(name = "STOPS")
        public class Stop implements Serializable {
            @Id
            @GeneratedValue
            @Column(name = "COORDINATEID")
            private Integer CoordinateID;

            @Column(name = "LAT")
            private double latitude;

            @Column(name = "LNG")
            private double longitude;
        }

        @Entity
        @Table(name = "COORDINATES")
        public class Coordinate {
            @Id
            @GeneratedValue
            @Column(name = "COORDINATEID")
            private Integer CoordinateID;

            @Column(name = "LAT")
            private double latitude;

            @Column(name = "LNG")
            private double longitude;
        }

        @Entity
        @Table(name = "COORDCONNECTIONS")
        public class CoordConnection {
            @Id
            @GeneratedValue
            @Column(name = "COORDCONNECTIONID")
            private Integer CoordinateID;

            /** From Coordinate_id value */
            @Column(name = "FROMCOORDID", nullable = false)
            private int fromCoordID;

            /** To Coordinate_id value */
            @Column(name = "TOCOORDID", nullable = false)
            private int toCoordID;

            //private Coordinate toCoordID;
        }

  • Database Design: Primary Key, ID vs String

    - by LnDCobra
    Hi, I am currently planning to develop a music streaming application, and I am wondering what would be better as a primary key in the tables on my server: an int ID or a unique string.

    Method 1:

        Songs table:     SongID (int), Title (string), Artist* (string), Length (int), Album* (string)
        Genre table:     Genre (string), Name (string)
        SongGenre table: SongID* (int), Genre* (string)

    Method 2:

        Songs table:     SongID (int), Title (string), ArtistID* (int), Length (int), AlbumID* (int)
        Genre table:     GenreID (int), Name (string)
        SongGenre table: SongID* (int), GenreID* (int)

    Key: bold = primary key, field* = foreign key.

    I'm currently designing using method 2, as I believe it will speed up lookup performance and use less space, since an int takes a lot less space than a string. Is there any reason this isn't a good idea? Is there anything I should be aware of?
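    As a concrete sketch of method 2 in SQL (the AUTO_INCREMENT spelling is MySQL's, and Album would be handled the same way as Artist):

        CREATE TABLE Artist (
            ArtistID INT AUTO_INCREMENT PRIMARY KEY,
            Name     VARCHAR(200) NOT NULL
        );

        CREATE TABLE Song (
            SongID   INT AUTO_INCREMENT PRIMARY KEY,
            Title    VARCHAR(200) NOT NULL,
            Length   INT NOT NULL,  -- seconds
            ArtistID INT NOT NULL,
            FOREIGN KEY (ArtistID) REFERENCES Artist(ArtistID)
        );

        CREATE TABLE Genre (
            GenreID INT AUTO_INCREMENT PRIMARY KEY,
            Name    VARCHAR(100) NOT NULL UNIQUE
        );

        CREATE TABLE SongGenre (
            SongID  INT NOT NULL,
            GenreID INT NOT NULL,
            PRIMARY KEY (SongID, GenreID),
            FOREIGN KEY (SongID) REFERENCES Song(SongID),
            FOREIGN KEY (GenreID) REFERENCES Genre(GenreID)
        );

    The usual argument for the surrogate int: joins compare small fixed-size keys, and renaming an artist or genre touches one row instead of every song that references it.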

  • Need help creating a database schema for a wholesale online tee store

    - by techiepark
    Hi, I'm currently working on a wholesale online t-shirt shop. I have done this for fixed quantity and price, and it's working fine. Now I need to do it for variable quantity and price. Here is the reference link for what I have to do. The basic tables I have created are:

        CREATE TABLE attribute (
            attribute_id int(11) NOT NULL auto_increment,
            name varchar(100) NOT NULL,
            PRIMARY KEY (attribute_id)
        );

        CREATE TABLE attribute_value (
            attribute_value_id int(11) NOT NULL auto_increment,
            attribute_id int(11) NOT NULL,
            value varchar(100) NOT NULL,
            PRIMARY KEY (attribute_value_id),
            KEY idx_attribute_value_attribute_id (attribute_id)
        );

        CREATE TABLE product (
            product_id int(11) NOT NULL auto_increment,
            name varchar(100) NOT NULL,
            description varchar(1000) NOT NULL,
            price decimal(10,2) NOT NULL,
            image varchar(150) default NULL,
            thumbnail varchar(150) default NULL,
            PRIMARY KEY (product_id),
            FULLTEXT KEY idx_ft_product_name_description (name,description)
        );

        CREATE TABLE product_attribute (
            product_id int(11) NOT NULL,
            attribute_value_id int(11) NOT NULL,
            PRIMARY KEY (product_id,attribute_value_id)
        );

    I'm not getting how to store the price based on variable quantity. Please help me create the product table and its related tables; my requirement is the same as in the reference link above.
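    A common way to model quantity-based pricing is a tier table keyed on the minimum quantity; a sketch in the same MySQL dialect (the table and column names are made up, and price would then move out of product):

        CREATE TABLE product_price_tier (
            product_id int(11) NOT NULL,
            min_quantity int(11) NOT NULL,      -- tier starts at this quantity
            unit_price decimal(10,2) NOT NULL,  -- price per tee in this tier
            PRIMARY KEY (product_id, min_quantity)
        );

        -- unit price for a given product and ordered quantity (e.g. 100 tees):
        SELECT unit_price
        FROM product_price_tier
        WHERE product_id = 42 AND min_quantity <= 100
        ORDER BY min_quantity DESC
        LIMIT 1;

    Rows like (42, 1, 9.99), (42, 50, 7.99), (42, 100, 5.99) then express "1+, 50+, 100+" wholesale pricing without any schema change per product.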

  • Buddy List: Relational Database Table Design

    - by huntaub
    So, the modern concept of the buddy list: let's say we have a table called Person. Now, that Person needs to have many buddies (each buddy also being a row in the Person table). The most obvious way to construct the relationship would be through a join table, i.e.:

        buddyID  person1_id  person2_id
        0        1           2
        1        3           6

    But when a user wants to see their buddy list, the program has to check both the person1_id and person2_id columns to find all of their buddies. Is this the appropriate way to implement this kind of table, or would it be better to add each record twice, i.e.:

        buddyID  person1_id  person2_id
        0        1           2
        1        2           1

    so that only one column has to be searched? Thanks in advance.
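    The trade-off shows up directly in the two read queries; a sketch in SQL, with a hypothetical buddies table and 1 standing in for the current user's id:

        -- single row per friendship: search both columns
        SELECT CASE WHEN person1_id = 1 THEN person2_id ELSE person1_id END AS buddy_id
        FROM buddies
        WHERE person1_id = 1 OR person2_id = 1;

        -- row stored twice: one indexed column does the work
        SELECT person2_id AS buddy_id
        FROM buddies
        WHERE person1_id = 1;

    The two-row variant doubles storage and requires both rows to be written in one transaction, in exchange for a simpler, index-friendly query; the one-row variant can enforce uniqueness with a person1_id < person2_id convention.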
