Search Results

Search found 22358 results on 895 pages for 'django raw query'.


  • sql select from a large number of IDs

    - by Claudiu
    I have a table, Foo. I run a query on Foo to get the IDs from a subset of Foo. I then want to run a more complicated set of queries, but only on those IDs. Is there an efficient way to do this? The best I can think of is creating a query such as: SELECT ... --complicated stuff WHERE ... --more stuff AND id IN (1, 2, 3, 9, 413, 4324, ..., 939393) That is, I construct a huge "IN" clause. Is this efficient? Is there a more efficient way of doing this, or is the only way to JOIN with the initial query that gets the IDs? If it helps, I'm using SQLObject to connect to a PostgreSQL database, and I have access to the cursor that executed the query to get all the IDs.
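
    One alternative to building a giant IN list, sketched below on the assumption that the raw psycopg2 cursor mentioned above is available (table and column names are placeholders): psycopg2 adapts a Python list to a PostgreSQL array, so the whole ID set can be passed as a single bound parameter. For very large ID sets, loading the IDs into a temporary table and joining against it is the other option worth benchmarking.

        ids = [1, 2, 3, 9, 413, 4324, 939393]          # the IDs from the first query
        cursor.execute(
            "SELECT f.id, f.name "                     # placeholder column list
            "FROM foo AS f "
            "WHERE f.id = ANY(%s)",                    # the list is adapted to an array
            (ids,),
        )
        rows = cursor.fetchall()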

    Read the article

  • SSMS: The Query Window Keyboard Shortcuts

    Simple-Talk's free wallchart of the most important SSMS keyboard shortcuts aims to help find all those curiously forgettable key combinations within SQL Server Management Studio that unlock the hidden magic that is available for editing and executing queries.

    Read the article

  • Use multiple WSGI mount points in Apache with an Nginx reverse proxy

    - by Thomas
    I am trying to set up multiple virtual hosts on the same server with Nginx and Apache and have run into a curious configuration issue. I have nginx configured with a generic upstream to Apache. upstream backend { server 1.1.1.1:8080; } I'm trying to set up multiple subdomains in nginx that hit different mountpoints in apache. Each would act like the following examples. server { listen 80; server_name foo.yoursite.com; location / { proxy_pass http://backend/bar/; include /etc/nginx/proxy.conf; } ... } server { listen 80; server_name delta.yoursite.com; location / { proxy_pass http://backend/gamma/; include /etc/nginx/proxy.conf; } ... } These mountpoints are pointed at Django projects, however each of the URL entries is coming back prepended with the Apache mountpoint path. So, if I called the Django url entry for foo.yoursite.com/wiki/biz/, Django appears to be returning foo.yoursite.com/bar/wiki/biz/. Similarly, if I call for the url entry for delta.yoursite.com/wiki/biz/, I get delta.yoursite.com/gamma/wiki/biz/. Is there any way to get rid of the prefix being returned on the URL entries by Django and Apache?
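
    One thing worth trying (a sketch, not a drop-in fix): because each Django project is mounted under a path such as /bar/ in Apache, Django sees that path as its SCRIPT_NAME and prepends it when reversing URLs. Overriding it in the settings of the project served behind the proxy makes reverse() emit the proxy-facing paths instead:

        # settings.py of the project mounted at /bar/
        # (assumption: one settings module per subdomain/project)
        FORCE_SCRIPT_NAME = ''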

    Read the article

  • python-social-auth AuthCanceled exception

    - by vero4ka
    I'm using python-social-auth in my Django application for authentication via Facebook. But when a user tries to login and when it's been refirected to Facebook app page clicks on "Cancel" button, appears the following exception: ERROR 2014-01-03 15:32:15,308 base :: Internal Server Error: /complete/facebook/ Traceback (most recent call last): File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 114, in get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/django/views/decorators/csrf.py", line 57, in wrapped_view return view_func(*args, **kwargs) File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/apps/django_app/utils.py", line 45, in wrapper return func(request, backend, *args, **kwargs) File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/apps/django_app/views.py", line 21, in complete redirect_name=REDIRECT_FIELD_NAME, *args, **kwargs) File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/actions.py", line 54, in do_complete *args, **kwargs) File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/strategies/base.py", line 62, in complete return self.backend.auth_complete(*args, **kwargs) File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/backends/facebook.py", line 63, in auth_complete self.process_error(self.data) File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/backends/facebook.py", line 56, in process_error super(FacebookOAuth2, self).process_error(data) File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/backends/oauth.py", line 312, in process_error raise AuthCanceled(self, data.get('error_description', '')) AuthCanceled: Authentication process canceled Is the any way to catch it Django?
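
    One way to keep the cancelled login from becoming a 500 is a small piece of exception middleware; the sketch below assumes redirecting cancelled logins to the site root is acceptable. python-social-auth also ships social.apps.django_app.middleware.SocialAuthExceptionMiddleware, which is worth checking before rolling your own.

        # myapp/middleware.py -- hypothetical module; add it to MIDDLEWARE_CLASSES
        from django.shortcuts import redirect
        from social.exceptions import AuthCanceled

        class AuthCanceledMiddleware(object):
            """Turn a cancelled social login into a redirect instead of a 500."""
            def process_exception(self, request, exception):
                if isinstance(exception, AuthCanceled):
                    return redirect('/')   # assumption: send cancelled logins home
                return None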

    Read the article

  • How to localize an app on Google App Engine?

    - by Petri Pennanen
    What options are there for localizing an app on Google App Engine? How do you do it using Webapp, Django, web2py or [insert framework here]. 1. Readable URLs and entity key names Readable URLs are good for usability and search engine optimization (Stack Overflow is a good example on how to do it). On Google App Engine, key based queries are recommended for performance reasons. It follows that it is good practice to use the entity key name in the URL, so that the entity can be fetched from the datastore as quickly as possible. Currently I use the function below to create key names: import re import unicodedata def urlify(unicode_string): """Translates latin1 unicode strings to url friendly ASCII. Converts accented latin1 characters to their non-accented ASCII counterparts, converts to lowercase, converts spaces to hyphens and removes all characters that are not alphanumeric ASCII. Arguments unicode_string: Unicode encoded string. Returns String consisting of alphanumeric (ASCII) characters and hyphens. """ str = unicodedata.normalize('NFKD', unicode_string).encode('ASCII', 'ignore') str = re.sub('[^\w\s-]', '', str).strip().lower() return re.sub('[-\s]+', '-', str) This works fine for English and Swedish, however it will fail for non-western scripts and remove letters from some western ones (like Norwegian and Danish with their œ and ø). Can anyone suggest a method that works with more languages? 2. Translating templates Does Django internationalization and localization work on Google App Engine? Are there any extra steps that must be performed? Is it possible to use Django i18n and l10n for Django templates while using Webapp? The Jinja2 template language provides integration with Babel. How well does this work, in your experience? What options are avilable for your chosen template language? 3. Translated datastore content When serving content from (or storing it to) the datastore: Is there a better way than getting the *accept_language* parameter from the HTTP request and matching this with a language property that you have set with each entity?
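
    For the key-name question (part 1), one hedged option is to transliterate with the third-party unidecode package before slugifying; it handles œ, ø and many non-Latin scripts better than the NFKD/ignore approach, though for some languages the output is only an approximation:

        # a sketch assuming the `unidecode` package is installed
        import re
        from unidecode import unidecode

        def urlify(unicode_string):
            """Transliterate to ASCII, then slugify (lowercase, hyphens)."""
            ascii_str = unidecode(unicode_string)   # u'smørrebrød' -> 'smorrebrod'
            ascii_str = re.sub(r'[^\w\s-]', '', ascii_str).strip().lower()
            return re.sub(r'[-\s]+', '-', ascii_str)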

    Read the article

  • general learning methodology

    - by momo
    just wanted to hear on the different general learning paths people embark on when learning a new language/framework. the one i currently use, which is how i learned bash and am currently learning python, is: instant hacking tutorial (very short tutorial introducing the basic syntax, variable declaration, loops, data types, etc. and how they are generally used) in depth tutorial with good programming style and slightly topic-specific (e.g. Mark Pilgrim's Dive into Python), important topics for me personally are regex methods, file IO, and ways the different data types are utilized best (i wrote a very primitive bayesian spam filter using python's dictionaries to keep track of word occurrences) spaced-repition of syntax or short recipes (i use anki, with questions like 'create dictionary with filename and filesize metadata, human-readable' or simpler ones like 'match 0 - 3 occurences of the letter M in a string', or 'return/create an iterator from two sequences') the use of spaced-repitition has been invaluable, and i credit it with the ease that i can recall/create python algorithms. however, i've recently started looking into django, and i've found that spaced-repitition, at least in my case, doesn't work very well for learning a framework, it works best with short code recipes (either that or i should start looking into more basic django framework tutorials). the problem i'm encountering is that since framework programming is not only algorithms, but actually learning the API, which can be quite complex since you have to learn all the methods, modules, the places where they are stored, and the sequence of which things have to be done. for ex. in django to start a project that deals with polls (from the django tutorial), one has to create the project, edit the settings.py file, create the polls app, edit the models.py file (which requires knowing the classes that are present in the module models), edit the urls.py file, etc. i found that my spaced-repition method didn't work very well for this type of learning, so i wanted to ask you guys what method(s) you use for learning the different frameworks/APIs.

    Read the article

  • How can I use a command-line tool (i.e. sox) via subprocess.Popen with mod_wsgi?

    - by marue
    I have a custom django filefield that makes use of sox, a commandline audiotool. This works pretty well as long as i use the django development server. But as soon as i switch to the production server, using apache2 and mod_wsgi, mod_wsgi catches every output to stdout. This makes it impossible to use the commandline tool to evaluate the file, for example use it to check if the uploaded file really is an audio file like this: filetype=subprocess.Popen([sox,'--i','-t','%s'%self.path], shell=False,\ stdout=subprocess.PIPE, stderr=subprocess.PIPE) (filetype,error)=filetype.communicate() if error: raise EnvironmentError((1,'AudioFile error while determining audioformat: %s'%error)) Is there a way to workaround for this? edit the error i get is "missing filename". I am using mod_wsgi 2.5, standard with ubuntu 8.04. edit2 What exactly happens, when i call subprocess.Popen from within django in mod_wsgi? Shouldn't subprocess stdin/stdout be independent from django stdin/stdout? In that case mod_wsgi should not affect programms called via subprocess... I'm really confused right now, because the file i am trying to access is a temporary file, created via a filenamevariable that i pass to the file creation and the subprocess command. That file is being written to /tmp, where the rights are 777, so it can't be a rights issue. And the error message is not "file does not exist", but "missing filename", which suggests i am not passing a filename as parameter to the commandlinetool.
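
    Since the error is coming from sox itself ("missing filename"), a useful first step is to log the exact argv being executed under mod_wsgi and to use an absolute path to the binary; a hedged sketch (the sox path is an assumption, confirm it with `which sox`):

        import subprocess

        SOX = '/usr/bin/sox'   # assumption: absolute path to the binary

        def probe_audio_type(path):
            """Return the type sox reports for `path`, or raise with full context."""
            cmd = [SOX, '--i', '-t', path]
            proc = subprocess.Popen(cmd, shell=False,
                                    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            out, err = proc.communicate()
            if proc.returncode != 0:
                # include the argv so "missing filename" can be traced back to an
                # empty or unexpected `path` value
                raise EnvironmentError(1, 'sox failed for %r: %s' % (cmd, err.strip()))
            return out.strip()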

    Read the article

  • OperationalError "unable to open database file" processing query results with SQLAlchemy and SQLite3

    - by Peter
    I'm running into this little problem that I hope is just a dumb user error. It looks like some sort of a size limit with a query to a SQLite database. I managed to reproduce the issue with an in-memory DB and a simple script shown below. I can make it work by either reducing the number of records in the DB; or by reducing the size of each record; or by dropping the order_by() call. I am using Python 2.5.5 and SQLAlchemy 0.6.0 in a Cygwin environment. Thanks! #!/usr/bin/python from sqlalchemy.orm import sessionmaker import sqlalchemy import sqlalchemy.orm class Person(object): def __init__(self, name): self.name = name engine = sqlalchemy.create_engine('sqlite:///:memory:') Session = sessionmaker(bind=engine) metadata = sqlalchemy.schema.MetaData(bind=engine) person_table = sqlalchemy.Table('person', metadata, sqlalchemy.Column('id', sqlalchemy.types.Integer, primary_key=True), sqlalchemy.Column('name', sqlalchemy.types.String)) metadata.create_all(engine) sqlalchemy.orm.mapper(Person, person_table) session = Session() session.add_all([Person("012345678901234567890123456789012") for i in range(5000)]) session.commit() persons = session.query(Person).order_by(Person.name).all() print "count =", len(persons) session.close() The all() call to the query result fails with the OperationalError exception: Traceback (most recent call last): File "./stress.py", line 27, in <module> persons = session.query(Person).order_by(Person.name).all() File "/usr/lib/python2.5/site-packages/sqlalchemy/orm/query.py", line 1343, in all return list(self) File "/usr/lib/python2.5/site-packages/sqlalchemy/orm/query.py", line 1451, in __iter__ return self._execute_and_instances(context) File "/usr/lib/python2.5/site-packages/sqlalchemy/orm/query.py", line 1456, in _execute_and_instances mapper=self._mapper_zero_or_none()) File "/usr/lib/python2.5/site-packages/sqlalchemy/orm/session.py", line 737, in execute clause, params or {}) File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1109, in execute return Connection.executors[c](self, object, multiparams, params) File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1186, in _execute_clauseelement return self.__execute_context(context) File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1215, in __execute_context context.parameters[0], context=context) File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1284, in _cursor_execute self._handle_dbapi_exception(e, statement, parameters, cursor, context) File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/base.py", line 1282, in _cursor_execute self.dialect.do_execute(cursor, statement, parameters, context=context) File "/usr/lib/python2.5/site-packages/sqlalchemy/engine/default.py", line 277, in do_execute cursor.execute(statement, parameters) sqlalchemy.exc.OperationalError: (OperationalError) unable to open database file u'SELECT person.id AS person_id, person.name AS person_name \nFROM person ORDER BY person.name' ()
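
    One thing worth trying, on the assumption that the failure comes from SQLite spilling the ORDER BY sort to a temporary file it cannot create under Cygwin: keep the scratch space in memory (or point TMPDIR at a writable directory) before running the query.

        # minimal sketch; slots in where the engine is created in the script above
        import sqlalchemy

        engine = sqlalchemy.create_engine('sqlite:///:memory:')
        engine.execute('PRAGMA temp_store = MEMORY')   # keep sort scratch space in RAM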

    Read the article

  • NHibernate (3.1.0.4000) NullReferenceException using Query<> and NHibernate Facility

    - by TigerShark
    I have a problem with NHibernate, I can't seem to find any solution for. In my project I have a simple entity (Batch), but whenever I try and run the following test, I get an exception. I've triede a couple of different ways to perform a similar query, but almost identical exception for all (it differs in which LINQ method being executed). The first test: [Test] public void QueryLatestBatch() { using (var session = SessionManager.OpenSession()) { var batch = session.Query<Batch>() .FirstOrDefault(); Assert.That(batch, Is.Not.Null); } } The exception: System.NullReferenceException : Object reference not set to an instance of an object. at NHibernate.Linq.NhQueryProvider.PrepareQuery(Expression expression, ref IQuery query, ref NhLinqExpression nhQuery) at NHibernate.Linq.NhQueryProvider.Execute(Expression expression) at System.Linq.Queryable.FirstOrDefault(IQueryable`1 source) The second test: [Test] public void QueryLatestBatch2() { using (var session = SessionManager.OpenSession()) { var batch = session.Query<Batch>() .OrderBy(x => x.Executed) .Take(1) .SingleOrDefault(); Assert.That(batch, Is.Not.Null); } } The exception: System.NullReferenceException : Object reference not set to an instance of an object. at NHibernate.Linq.NhQueryProvider.PrepareQuery(Expression expression, ref IQuery query, ref NhLinqExpression nhQuery) at NHibernate.Linq.NhQueryProvider.Execute(Expression expression) at System.Linq.Queryable.SingleOrDefault(IQueryable`1 source) However, this one is passing (using QueryOver<): [Test] public void QueryOverLatestBatch() { using (var session = SessionManager.OpenSession()) { var batch = session.QueryOver<Batch>() .OrderBy(x => x.Executed).Asc .Take(1) .SingleOrDefault(); Assert.That(batch, Is.Not.Null); Assert.That(batch.Executed, Is.LessThan(DateTime.Now)); } } Using the QueryOver< API is not bad at all, but I'm just kind of baffled that the Query< API isn't working, which is kind of sad, since the First() operation is very concise, and our developers really enjoy LINQ. I really hope there is a solution to this, as it seems strange if these methods are failing such a simple test. EDIT I'm using Oracle 11g, my mappings are done with FluentNHibernate registered through Castle Windsor with the NHibernate Facility. As I wrote, the odd thing is that the query works perfectly with the QueryOver< API, but not through LINQ.

    Read the article

  • 301 Redirect and query strings

    - by icelizard
    I am looking to create a 301 redirect based purely on a query string (see below). OLD URL: olddomain.com/?pc=/product/9999 New URL: newurl.php?var=yup My normal way of doing this would be redirect 301 pc=/product/9999 newurl.php?var=yup But this time I am trying to match a URL that only contains the domain and a query string... What is the best way of doing this? Thanks

    Read the article

  • MySQL: how to enable Slow Query Log?

    - by Continuation
    Can you give me an example on how to enable MySQL's slow query log? According to the doc: As of MySQL 5.1.29, use --slow_query_log[={0|1}] to enable or disable the slow query log, and optionally --slow_query_log_file=file_name to specify a log file name. The --log-slow-queries option is deprecated. So how do I use that option? Can I put it in my.cnf? An example would be greatly appreciated. Thank you very much

    Read the article

  • Python Django sites on Apache+mod_wsgi with nginx proxy: highly fluctuating performance

    - by Halfgaar
    I have an Ubuntu 10.04 box running several dozen Python Django sites using mod_wsgi (embedded mode; the faster mode, if properly configured). Performance highly fluctuates. Sometimes fast, sometimes several seconds delay. The smokeping graphs are al over the place. Recently, I also added an nginx proxy for the static content, in the hopes it would cure the highly fluctuating performance. But, even though it reduced the number of requests Apache has to process significantly, it didn't help with the main problem. When clicking around on websites while running htop, it can be seen that sometimes requests are almost instant, whereas sometimes it causes Apache to consume 100% CPU for a few seconds. I really don't understand where this fluctuation comes from. I have configured the mpm_worker for Apache like this: StartServers 1 MinSpareThreads 50 MaxSpareThreads 50 ThreadLimit 64 ThreadsPerChild 50 MaxClients 50 ServerLimit 1 MaxRequestsPerChild 0 MaxMemFree 2048 1 server with 50 threads, max 50 clients. Munin and apache2ctl -t both show a consistent presence of workers; they are not destroyed and created all the time. Yet, it behaves as such. This tells me that once a sub interpreter is created, it should remain in memory, yet it seems sites have to reload all the time. I also have a nginx+gunicorn box, which performs quite well. I would really like to know why Apache is so random. This is a virtual host config: <VirtualHost *:81> ServerAdmin [email protected] ServerName example.com DocumentRoot /srv/http/site/bla Alias /static/ /srv/http/site/static Alias /media/ /srv/http/site/media WSGIScriptAlias / /srv/http/site/passenger_wsgi.py <Directory /> AllowOverride None </Directory> <Directory /srv/http/site> Options -Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> Ubuntu 10.04 Apache 2.2.14 mod_wsgi 2.8 nginx 0.7.65 Edit: I've put some code in the settings.py file of a site that writes the date to a tmp file whenever it's loaded. I can now see that the site is not randomly reloaded all the time, so Apache must be keeping it in memory. So, that's good, except it doesn't bring me closer to an answer... Edit: I just found an error that might also be related to this: File "/usr/lib/python2.6/subprocess.py", line 633, in __init__ errread, errwrite) File "/usr/lib/python2.6/subprocess.py", line 1049, in _execute_child self.pid = os.fork() OSError: [Errno 12] Cannot allocate memory The server has 600 of 2000 MB free, which should be plenty. Is there a limit that is set on Apache or WSGI somewhere?

    Read the article

  • HP Proliant Servers - WMI query for system health

    - by Mike McClelland
    Hi, I want to query lots of HP servers to determine their overall health. I don't want to use any packages, or even SNMP - I want to query the server health from WMI and understand if a box is Green/Amber/Red - just like the HP Management Home Page. This MUST be possible - but I can't find any documentation... Oh yes, and the servers are running Windows Server 2003/8. Help!! Mike
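
    Assuming the HP Insight Management WBEM providers are installed on the servers, they expose a root\hpq WMI namespace that can be queried remotely; the sketch below uses the Python wmi package, and the class and property names are assumptions to verify with a WMI browser on one box first:

        import wmi

        # 'server01' and the class/property names below are illustrative only
        conn = wmi.WMI(computer='server01', namespace=r'root\hpq')
        for system in conn.query('SELECT * FROM HP_WinComputerSystem'):   # hypothetical class
            print system.Name, system.HealthState                         # hypothetical properties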

    Read the article

  • Slow Query log for just one database

    - by Jason
    Can I enable the slow query log specifically for just one database? What I've done currently is to take the entire log into Excel and then run a pivot report to work out which database is the slowest. So I've gone and made some changes to that application in the hope of reducing the slow query occurrence. Rather than running my pivot report again, which took a bit of time to cleanse the data, I'd rather just output slow queries from the one database. Is that possible?

    Read the article

  • How do I make a LDAP query-based dynamic distribution group in Exchange 2010

    - by blsub6
    I see that there were ways in Exchange 2003 and Exchange 2007 to just put in an LDAP query and it would populate the group for you. Is there any way to do that in Exchange 2010? I know there's dynamic distribution groups but I don't want to create the group based on one of their pre-set queries and I don't want to mess around with "custom attributes". I just want to put an LDAP query in there and make it run it to populate the distribution group.

    Read the article

  • Replace a SQL Server query with another before execution

    - by Kiranu
    I am trying to work with a legacy application in SQL Server which at some point does the following query SELECT serverproperty('EngineEdition') as sqledition The server replies with 2 (which is the correct edition), but the application closes since the app demands to be run over SQL Server Express which is 4. We don't have access to the code and the developer is long gone. Is there a way to configure SQL Server so that when this query is received it simply returns 4 and not the value of the property? Thanks

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
    The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing. Test Data The two tables in this example use a common partitioning partition scheme. The partition function uses 41 equal-size partitions: CREATE PARTITION FUNCTION PFT (integer) AS RANGE RIGHT FOR VALUES ( 125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000, 1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000, 2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000, 3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000, 4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000 ); GO CREATE PARTITION SCHEME PST AS PARTITION PFT ALL TO ([PRIMARY]); There two tables are: CREATE TABLE dbo.T1 ( TID integer NOT NULL IDENTITY(0,1), Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID) );   CREATE TABLE dbo.T2 ( TID integer NOT NULL, Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID) ); The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID: INSERT dbo.T1 WITH (TABLOCKX) (Column1) SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1 FROM dbo.Numbers AS N WHERE n BETWEEN 1 AND 5000000; In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows: CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);   WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5) INSERT dbo.Numbers WITH (TABLOCKX) SELECT TOP (10000000) n FROM Nums ORDER BY n OPTION (MAXDOP 1); Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way. INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1) SELECT T.TID, N.n FROM dbo.T1 AS T JOIN dbo.Numbers AS N ON N.n >= 1 AND N.n <= T.Column1; Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone. Partition Distribution The following query shows the number of rows in each partition of table T1: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T1 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T2 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done. 
Test Query and Execution Plan The task is to count the rows resulting from joining tables 1 and 2 on the TID column: SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms. Execution Plan Analysis The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread. Expensive Exchanges This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = 1 AND $PARTITION.PFT(T2.TID) = 1 OPTION (MAXDOP 1); The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query? 
Forcing a Merge Join Let’s force the optimizer to use a merge join on the test query using a hint: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN); This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads. Parallel Merge Join We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN, QUERYTRACEON 8649); The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning. Collocated Joins In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next. Costing and Plan Selection The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query. 
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use collocated join with our test query: -- Pretend IOs are 50x cost temporarily DBCC SETIOWEIGHT(50);   -- Co-located hash join SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (RECOMPILE);   -- Reset IO costing DBCC SETIOWEIGHT(1); Collocated Join Plan The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges. CPU and Memory Efficiency Improvements The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time. Collocated Hash Join Performance The collated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows. 
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions). Collocated Merge Join We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know: Merge join does not require a memory grant; and Merge join was the optimizer’s preferred join option for a single partition join Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us? CROSS APPLY sys.partitions We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea: SELECT row_count = SUM(Subtotals.cnt) FROM ( -- Partition numbers SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1 ) AS P CROSS APPLY ( -- Count per collocated join SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; The estimated plan is: The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:   Using a Temporary Table Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table. 
We can work around that by writing the partition numbers to a temporary table (or table variable): SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   CREATE TABLE #P ( partition_number integer PRIMARY KEY);   INSERT #P (partition_number) SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1;   SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals;   DROP TABLE #P;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to: In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time): Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan. Parallel Collocated Merge Join We can produce the desired parallel plan using trace flag 8649 again: SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post. Performance The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest: Collocated parallel merge join: 1350ms Parallel hash join: 2600ms Collocated serial merge join: 3500ms Serial merge join: 5000ms Parallel merge join: 8400ms Collated parallel hash join: 25,300ms (hash spill per partition) The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers). 
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you. Parallel Collocated Merge Join with Demand Partitioning This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions): SELECT row_count = SUM(Subtotals.cnt) FROM ( VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10), (11),(12),(13),(14),(15),(16),(17),(18),(19),(20), (21),(22),(23),(24),(25),(26),(27),(28),(29),(30), (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41) ) AS P (partition_number) CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: The parallel collocated hash join plan is reproduced below for comparison: The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms. Final Words It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated  – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition. From a thread’s point of view… If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). 
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once. Related Reading Understanding and Using Parallelism in SQL Server Parallel Execution Plans Suck © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • How to import classes into other classes within the same file in Python

    - by Chris
    I have the file below and it is part of a django project called projectmanager, this file is projectmanager/projects/models.py . Whenever I use the python interpreter to import a Project just to test the functionality i get a name error for line 8 that FileRepo() cannot be found. How Can I import these classes correctly? Ideally what I am looking for is each Project to contain multiple FileRepos which each contain and unknown number of files. Thanks for any assistance in advance. #imports from django.db import models from django.contrib import admin #Project is responsible for ensuring that each project contains all of the folders and file storage #mechanisms a project needs, as well as a unique CCL# class Project(models.Model): ccl = models.CharField(max_length=30) Techpacks = FileRepo() COAS = FileRepo() Shippingdocs = FileRepo() POchemspecs = FileRepo() Internalpos = FileRepo() Finalreports = FileRepo() Batchrecords = FileRepo() RFPS = FileRepo() Businessdev = FileRepo() QA = FileRepo() Updates = FileRepo() def __unicode__(self): return self.ccl #ProjectFile is the file object used by each FileRepo component class ProjectFile(models.Model): file = models.FileField(uploadto='ProjectFiles') def __unicode__(self): return self.file #FileRepo is the model for the "folders" to be used in a Project class FileRepo(models.Model): typeOf = models.CharField(max_length=30) files = models.ManyToManyField(ProjectFile) def __unicode__(self): return self.typeOf
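
    Since all three models live in the same file, no import is needed; the NameError comes from referring to FileRepo before it is defined, and from instantiating models as class attributes. A hedged rework (field names kept from the question; the ForeignKey string reference avoids ordering problems, and note FileField's keyword is upload_to):

        from django.db import models

        class ProjectFile(models.Model):
            file = models.FileField(upload_to='ProjectFiles')

            def __unicode__(self):
                return self.file.name

        class Project(models.Model):
            ccl = models.CharField(max_length=30)

            def __unicode__(self):
                return self.ccl

        class FileRepo(models.Model):
            # each Project has many FileRepos; typeOf says which "folder" it is,
            # e.g. project.repos.filter(typeOf='COAS')
            project = models.ForeignKey('Project', related_name='repos')
            typeOf = models.CharField(max_length=30)
            files = models.ManyToManyField(ProjectFile)

            def __unicode__(self):
                return self.typeOf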

    Read the article

  • FCGI htaccess handler

    - by sharvey
    I'm trying to setup django on a shared hosting provider. I followed the instructions on http://helpdesk.bluehost.com/index.php/kb/article/000531 and almost have it working. The problem I'm facing now is that the traffic is properly routed throught the fcgi file, but the file itself shows up as plain text in the browser. If I run ./mysite.fcgi in the ssh shell, I do get the default django welcome page. my .htaccess is: AddHandler fastcgi-script .fcgi RewriteEngine On RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^(.*)$ mysite.fcgi/$1 [QSA,L] and mysite.fcgi: #!/usr/bin/python2.6 import sys, os os.environ['DJANGO_SETTINGS_MODULE'] = "icm.settings" from django.core.servers.fastcgi import runfastcgi runfastcgi(method="threaded", daemonize="false") thanks.

    Read the article

  • python raw_input odd behavior with accent-containing strings

    - by Ryan
    I'm writing a program that asks the user for input that contains accents. The user input string is tested to see if it matches a string declared in the program. As you can see below, my code is not working: code # -*- coding: utf-8 -*- testList = ['má'] myInput = raw_input('enter something here: ') print myInput, repr(myInput) print testList[0], repr(testList[0]) print myInput in testList output in eclipse with pydev enter something here: má mv° 'm\xe2\x88\x9a\xc2\xb0' má 'm\xc3\xa1' False output in IDLE enter something here: má má u'm\xe1' má 'm\xc3\xa1' Warning (from warnings module): File "/Users/ryanculkin/Desktop/delete.py", line 8 print myInput in testList UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal False How can I get my code to print True when comparing the two strings? Additionally, I note that the result of running this code on the same input is different depending on whether I use eclipse or IDLE. Why is this? My eventual goal is to put my program on the web; is there anything that I need to be aware of, since the result seems to be so volatile?
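
    A sketch of one way to make the comparison reliable: decode the terminal bytes with whatever encoding the stream reports (IDLE may already hand back unicode), normalise, and compare unicode against unicode literals. The mojibake in the Eclipse output suggests the PyDev console encoding also needs to be set to UTF-8 in its run configuration.

        # -*- coding: utf-8 -*-
        import sys
        import unicodedata

        test_list = [u'má']                        # unicode literals on this side

        raw = raw_input('enter something here: ')
        if not isinstance(raw, unicode):           # IDLE can return unicode already
            raw = raw.decode(sys.stdin.encoding or 'utf-8')
        raw = unicodedata.normalize('NFC', raw)    # fold decomposed accents

        print raw in test_list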

    Read the article

  • Is it bad practice to extend the MongoEngine User document?

    - by Soviut
    I'm integrating MongoDB using MongoEngine. It provides auth and session support that a standard pymongo setup would lack. In regular django auth, it's considered bad practice to extend the User model since there's no guarantee it will be used correctly everywhere. Is this the case with mongoengine.django.auth? If it is considered bad practice, what is the best way to attach a separate user profile? Django has mechanisms for specifying an AUTH_PROFILE_MODULE. Is this supported in MongoEngine as well, or should I be manually doing the lookup?
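
    If extending the User document feels risky, one conventional alternative is a separate profile document that references the user; a sketch, with the lookup done manually in views rather than through an AUTH_PROFILE_MODULE-style setting:

        from mongoengine import Document, ReferenceField, StringField
        from mongoengine.django.auth import User

        class UserProfile(Document):
            user = ReferenceField(User, required=True, unique=True)
            display_name = StringField(max_length=50)   # illustrative extra field

        # manual lookup, e.g. in a view:
        # profile = UserProfile.objects.get(user=request.user)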

    Read the article

  • Calling/selecting variables (float valued) with user input in Python

    - by Jonathan Straus
    I've been working on a computational physics project (plotting related rates of chemical reactants with respect to eachother to show oscillatory behavior) with a fair amount of success. However, one of my simulations involves more than two active oscillating agents (five, in fact) which would obviously be unsuitable for any single visual plot... My scheme was hence to have the user select which two reactants they wanted plotted on the x-axis and y-axis respectively. I tried (foolishly) to convert string input values into the respective variable names, but I guess I need a radically different approach if any exist? If it helps clarify any, here is part of my code: def coupledBrusselator(A, B, t_trial,display_x,display_y): t = 0 t_step = .01 X = 0 Y = 0 E = 0 U = 0 V = 0 dX = (A) - (B+1)*(X) + (X**2)*(Y) dY = (B)*(X) - (X**2)*(Y) dE = -(E)*(U) - (X) dU = (U**2)*(V) -(E+1)*(U) - (B)*(X) dV = (E)*(U) - (U**2)*(V) array_t = [0] array_X = [0] array_Y = [0] array_U = [0] array_V = [0] while t <= t_trial: X_1 = X + (dX)*(t_step/2) Y_1 = Y + (dY)*(t_step/2) E_1 = E + (dE)*(t_step/2) U_1 = U + (dU)*(t_step/2) V_1 = V + (dV)*(t_step/2) dX_1 = (A) - (B+1)*(X_1) + (X_1**2)*(Y_1) dY_1 = (B)*(X_1) - (X_1**2)*(Y_1) dE_1 = -(E_1)*(U_1) - (X_1) dU_1 = (U_1**2)*(V_1) -(E_1+1)*(U_1) - (B)*(X_1) dV_1 = (E_1)*(U_1) - (U_1**2)*(V_1) X_2 = X + (dX_1)*(t_step/2) Y_2 = Y + (dY_1)*(t_step/2) E_2 = E + (dE_1)*(t_step/2) U_2 = U + (dU_1)*(t_step/2) V_2 = V + (dV_1)*(t_step/2) dX_2 = (A) - (B+1)*(X_2) + (X_2**2)*(Y_2) dY_2 = (B)*(X_2) - (X_2**2)*(Y_2) dE_2 = -(E_2)*(U_2) - (X_2) dU_2 = (U_2**2)*(V_2) -(E_2+1)*(U_2) - (B)*(X_2) dV_2 = (E_2)*(U_2) - (U_2**2)*(V_2) X_3 = X + (dX_2)*(t_step) Y_3 = Y + (dY_2)*(t_step) E_3 = E + (dE_2)*(t_step) U_3 = U + (dU_2)*(t_step) V_3 = V + (dV_2)*(t_step) dX_3 = (A) - (B+1)*(X_3) + (X_3**2)*(Y_3) dY_3 = (B)*(X_3) - (X_3**2)*(Y_3) dE_3 = -(E_3)*(U_3) - (X_3) dU_3 = (U_3**2)*(V_3) -(E_3+1)*(U_3) - (B)*(X_3) dV_3 = (E_3)*(U_3) - (U_3**2)*(V_3) X = X + ((dX + 2*dX_1 + 2*dX_2 + dX_3)/6) * t_step Y = Y + ((dX + 2*dY_1 + 2*dY_2 + dY_3)/6) * t_step E = E + ((dE + 2*dE_1 + 2*dE_2 + dE_3)/6) * t_step U = U + ((dU + 2*dU_1 + 2*dY_2 + dE_3)/6) * t_step V = V + ((dV + 2*dV_1 + 2*dV_2 + dE_3)/6) * t_step dX = (A) - (B+1)*(X) + (X**2)*(Y) dY = (B)*(X) - (X**2)*(Y) t_step = .01 / (1 + dX**2 + dY**2) ** .5 t = t + t_step array_X.append(X) array_Y.append(Y) array_E.append(E) array_U.append(U) array_V.append(V) array_t.append(t) where previously display_x = raw_input("Choose catalyst you wish to analyze in the phase/field diagrams (X, Y, E, U, or V) ") display_y = raw_input("Choose one other catalyst from list you wish to include in phase/field diagrams ") coupledBrusselator(A, B, t_trial, display_x, display_y) Thanks!
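
    Rather than turning the input strings into variable names, one option is to keep the five result arrays in a dict keyed by the same letters the user types, and index it with the raw_input values; a sketch meant to slot in at the end of coupledBrusselator (matplotlib is an assumption about the plotting tool, and array_E needs to be initialised alongside the other arrays):

        results = {'X': array_X, 'Y': array_Y, 'E': array_E,
                   'U': array_U, 'V': array_V}
        x_data = results[display_x.strip().upper()]
        y_data = results[display_y.strip().upper()]

        import matplotlib.pyplot as plt
        plt.plot(x_data, y_data)
        plt.xlabel(display_x)
        plt.ylabel(display_y)
        plt.show()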

    Read the article

  • How do I get user input to refer to a variable in Python?

    - by somefreakingguy
    I would like to get user input to refer to some list in my code. I think it's called namespace? So, what would I have to do to this code for me to print whatever the user inputs, supposing they input 'list1' or 'list2'? list1 = ['cat', 'dog', 'juice'] list2 = ['skunk', 'bats', 'pogo stick'] x = raw_input('which list would you like me to print?') I plan to have many such lists, so a series of if...then statements seems unruly.
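
    The usual answer is not to reach into the namespace at all: put the lists in a dictionary keyed by the names users are expected to type, then look the input up. A minimal sketch:

        lists = {
            'list1': ['cat', 'dog', 'juice'],
            'list2': ['skunk', 'bats', 'pogo stick'],
        }

        choice = raw_input('which list would you like me to print? ').strip()
        print lists.get(choice, 'no list by that name')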

    Read the article
