Search Results

Search found 27118 results on 1085 pages for 'mysql python'.


  • How to install the MySQL Ruby Gem on Ubuntu 9.10?

    - by misbehavens
    I am having a problem installing the Ruby gem for MySQL. This is the command that I am running:

        sudo gem install mysql

    and this is the output that I'm getting:

        Building native extensions. This could take a while...
        ERROR: Error installing mysql:
        ERROR: Failed to build gem native extension.

        /usr/bin/ruby1.8 extconf.rb
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lm... yes
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lz... yes
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lsocket... no
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lnsl... yes
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lmygcc... no
        checking for mysql_query() in -lmysqlclient... no
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of
        necessary libraries and/or headers. Check the mkmf.log file for
        more details. You may need configuration options.

        Provided configuration options:
        --with-opt-dir --without-opt-dir
        --with-opt-include --without-opt-include=${opt-dir}/include
        --with-opt-lib --without-opt-lib=${opt-dir}/lib
        --with-make-prog --without-make-prog
        --srcdir=. --curdir
        --ruby=/usr/bin/ruby1.8
        --with-mysql-config --without-mysql-config
        --with-mysql-dir --without-mysql-dir
        --with-mysql-include --without-mysql-include=${mysql-dir}/include
        --with-mysql-lib --without-mysql-lib=${mysql-dir}/lib
        --with-mysqlclientlib --without-mysqlclientlib
        --with-mlib --without-mlib
        --with-mysqlclientlib --without-mysqlclientlib
        --with-zlib --without-zlib
        --with-mysqlclientlib --without-mysqlclientlib
        --with-socketlib --without-socketlib
        --with-mysqlclientlib --without-mysqlclientlib
        --with-nsllib --without-nsllib
        --with-mysqlclientlib --without-mysqlclientlib
        --with-mygcclib --without-mygcclib
        --with-mysqlclientlib --without-mysqlclientlib

        Gem files will remain installed in /usr/lib/ruby/gems/1.8/gems/mysql-2.8.1 for inspection.
        Results logged to /usr/lib/ruby/gems/1.8/gems/mysql-2.8.1/ext/mysql_api/gem_make.out

    What do I need to do in order to get this to install?


  • How to use Binary Log file for Auditing and Replicating in MySQL?

    - by Pranav
    How do I use the binary log file for auditing in MySQL? I want to track changes in a DB using the binary log so that I can replicate these changes to another DB. Please do not give me hyperlinks to the MySQL website; please direct me toward a solution. I have looked at auditing options and created a script using triggers, but due to the Joomla DB structure it didn't work for me, so I had to move on to the binary log concept. Now I am stuck at the start, since I am not getting the concept of making the servers master/slave. Can anybody guide me on how to actually initiate it via PHP?
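
    A minimal sketch of what making the servers master/slave looks like at the SQL level, assuming binary logging is already enabled on the master (log-bin and a unique server-id in each server's my.cnf). The question asks for PHP, but in keeping with this page's Python theme the sketch uses MySQLdb; the hosts, credentials, and the repl account are placeholders, and the same statements can be sent from any client, PHP included:

        import MySQLdb

        # On the master (my.cnf already has log-bin=mysql-bin, server-id=1):
        # read the current binary log coordinates.
        master = MySQLdb.connect(host="master.example.com", user="admin", passwd="secret")
        mcur = master.cursor()
        mcur.execute("SHOW MASTER STATUS")
        log_file, log_pos = mcur.fetchone()[:2]

        # On the slave (server-id=2 in its my.cnf): point it at the master
        # and start replicating from those coordinates.
        slave = MySQLdb.connect(host="slave.example.com", user="admin", passwd="secret")
        scur = slave.cursor()
        scur.execute(
            "CHANGE MASTER TO MASTER_HOST=%s, MASTER_USER=%s, MASTER_PASSWORD=%s, "
            "MASTER_LOG_FILE=%s, MASTER_LOG_POS=%s",
            ("master.example.com", "repl", "replpass", log_file, log_pos))
        scur.execute("START SLAVE")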


  • MSSQL or MySQL: learning

    - by Yehuda
    I have been using MySQL for about 9 months now for websites, and I have become quite good at getting what I want out of the database. However, I am still missing most of the complicated parts. I have an excellent tutorial, but it is on SQL Server 2008.
    1) Is it worth switching over to MSSQL (I understand the SQL dialect is different) so that I will learn all about SQL and databases in general?
    2) Do most people use MySQL or MSSQL?
    3) What is best practice? (I am talking mainly about websites.)


  • Recent Ubuntu update prevents MySQL root access

    - by Rhys
    I recently updated my Ubuntu (10.04 LTS) server (apt-get update, apt-get upgrade), and everything works fine apart from root access to my MySQL database. phpMyAdmin, CakePHP, and essentially all connections return similar access errors. For example, PMA returns 'Connection for controluser as defined in your configuration failed.' I have tried to find similar examples of this issue, but cannot find guidance on which configuration I should change to restore root login access. The same issue has occurred on two servers. One has additional users, so I could work around it, but the other is a new development server with only root MySQL access, so I am stuck on how to resolve this.


  • How to execute the MySQL command DELIMITER

    - by user5332
    Hi, I have a huge problem (for me). I need to execute the MySQL command DELIMITER | from PHP, but mysql_query fails with an error. I found that mysql_query doesn't support DELIMITER, because this command only works in the mysql console. But when I open phpMyAdmin, there is an option on the SQL tab to change the DELIMITER, and it works, but I don't know how. Could you help me? How is it possible to change the delimiter from PHP? I need it before a CREATE TRIGGER ... statement that uses several ; which must not be interpreted as the end of the command.
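
    For what it's worth, DELIMITER is a feature of interactive clients (the mysql console, phpMyAdmin's SQL tab), not of the server itself: the client uses it to decide where one statement ends, and an API call already sends exactly one statement. So a CREATE TRIGGER with semicolons inside its body can be sent as a single call with no delimiter juggling. A sketch in Python with MySQLdb (the trigger and table names are made up for illustration; the same single-statement approach applies to PHP's mysqli):

        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="me", passwd="secret", db="test")
        cur = conn.cursor()

        # One execute() call = one statement, so the semicolons inside
        # BEGIN ... END need no DELIMITER handling at all.
        cur.execute("""
        CREATE TRIGGER orders_audit
        AFTER INSERT ON orders
        FOR EACH ROW
        BEGIN
            INSERT INTO order_log (order_id) VALUES (NEW.id);
            UPDATE counters SET n = n + 1 WHERE name = 'orders';
        END
        """)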


  • MySQL not starting - InnoDB not found

    - by Rob Guderian
    I have a fresh install of Ubuntu 12.04 server edition and the MySQL server is not starting properly. I did a simple apt-get install:

        apt-get install mysql-server

    But it's failing with this error message:

        root@test:~# mysqld
        120618 20:57:32 [Warning] The syntax '--log-slow-queries' is deprecated and will be removed in a future release. Please use '--slow-query-log'/'--slow-query-log-file' instead.
        120618 20:57:32 [Note] Plugin 'FEDERATED' is disabled.
        120618 20:57:32 InnoDB: The InnoDB memory heap is disabled
        120618 20:57:32 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        120618 20:57:32 InnoDB: Compressed tables use zlib 1.2.3.4
        120618 20:57:32 InnoDB: Unrecognized value fdatasync for innodb_flush_method
        120618 20:57:32 [ERROR] Plugin 'InnoDB' init function returned error.
        120618 20:57:32 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        120618 20:57:32 [ERROR] Unknown/unsupported storage engine: InnoDB
        120618 20:57:32 [ERROR] Aborting

    I can start the server with the "--skip-innodb --default-storage-engine=myisam" flags, but I would like to use InnoDB. Does anyone know what the issue here is?


  • MySQL Workbench 5.2.39 GA Released

    - by user13164789
    The MySQL Developer Tools team is announcing the next maintenance release of its flagship product, MySQL Workbench, version 5.2.39. This version contains MySQL Utilities 1.0.5, a set of command-line Python utilities for helping to perform and script various administration tasks for MySQL. A complete list of changes in this release of the Utilities can be found at: http://dev.mysql.com/doc/workbench/en/wb-utils-news-1-0-5.html

    MySQL Workbench 5.2 GA:
    • Data Modeling
    • Query (replaces the old MySQL Query Browser)
    • Administration (replaces the old MySQL Administrator)

    Please get your copy from our download site. Sources and binary packages are available for several platforms, including Windows, Mac OS X, and Linux: http://dev.mysql.com/downloads/workbench/

    Workbench documentation can be found here: http://dev.mysql.com/doc/workbench/en/index.html
    Utilities documentation can be found here: http://dev.mysql.com/doc/workbench/en/mysql-utilities.html

    In addition to the new Query/SQL Development and Administration modules, version 5.2 features improved stability and performance, especially on Windows, where OpenGL support has been enhanced and the UI was optimized to offer better responsiveness. This release also includes improvements to the scripting capabilities of the SQL Editor. You can read more about it at http://wb.mysql.com/workbench/doc/

    For a detailed list of resolved issues, see the change log: http://dev.mysql.com/doc/workbench/en/wb-change-history.html

    If you need any additional info or help, please get in touch with us. Post in our forums or leave comments on our blog pages.

    - The MySQL Workbench Team


  • /dev/sda1 100% - MySQL to blame?

    - by SJP
    I have an API running that receives raw binaries, processes them, and then stores metadata about the bins in a MySQL database. I have been running it for a couple of days on a VM. Today the API stopped processing the MySQL commands. After running the command df -h, the results were:

        root@mwdb1:/# df -h
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sda1       104G   99G     0 100% /
        udev             16G  4.0K   16G   1% /dev
        tmpfs           6.3G  364K  6.3G   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none             16G     0   16G   0% /run/shm
        /dev/sdb1       5.5T   42G  5.2T   1% /data

    sda1 is at 100%.
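
    If MySQL is the culprit, a frequent offender on a write-heavy box is the binary log directory. A hedged sketch for checking and trimming it from Python (credentials are placeholders; SHOW BINARY LOGS errors out if binary logging is disabled, and purging must not outrun any replication slaves):

        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="root", passwd="secret")
        cur = conn.cursor()

        # One (Log_name, File_size) pair per binary log file.
        cur.execute("SHOW BINARY LOGS")
        logs = cur.fetchall()
        print("binary logs: %d files, %.1f GB" % (len(logs), sum(r[1] for r in logs) / 1e9))

        # Drop logs older than three days to reclaim space.
        cur.execute("PURGE BINARY LOGS BEFORE NOW() - INTERVAL 3 DAY")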


  • Hosting a Small PHP/MySQL Project [closed]

    - by paulrehkugler
    I have a small PHP/MySQL project in the works and I'm looking for somewhere to host it. My criteria are: Ability to run PHP/MySQL (either natively, or contains the ability to install). Ability to manipulate the web server, so I can make pretty URLs. Not spammy (if you've ever looked for hosting, you know what I mean). Semi-professional - no ridiculous downtime or long response time. I obviously don't need anything spectacular (I'm not aiming to be the next Facebook) but something that doesn't seem cheap. Reasonably priced - obviously this is a side project for fun so I'm not planning on making or dispensing any sort of "serious" revenue.


  • MySQL with multiple threads and processes

    - by Abhan
    I'm developing a telecom messaging platform in C, and I'm going to need multiple processes working with a MySQL DB. How can I make two processes read/write to/from a MySQL DB and, if/when one of them goes down, have the other seamlessly take over the work until the dead process gets back to work? I have been thinking about and googling some options, but I'm stuck on which one to choose. What I think so far is that a table lock is not the best option to go for, as it will stall the other process until the table is unlocked. The other option is to use row-level locks or manual locks, but I can't find the best way to do it.
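
    One pattern that gives exactly this failover behaviour without touching tables is MySQL's named advisory locks: GET_LOCK blocks other connections from acquiring the same name, and the server releases it automatically when the holder's connection dies. A minimal sketch in Python (the lock name, credentials, and the work loop are placeholders; the same SELECT GET_LOCK(...) calls work from C through the client library):

        import time
        import MySQLdb

        def process_messages(cur):
            # hypothetical stand-in for the real message-handling loop
            time.sleep(1)

        conn = MySQLdb.connect(host="localhost", user="worker", passwd="secret", db="msgs")
        cur = conn.cursor()

        while True:
            # Returns 1 if we got the lock within 5 seconds, 0 on timeout.
            cur.execute("SELECT GET_LOCK('platform_active_worker', 5)")
            (got,) = cur.fetchone()
            if got == 1:
                process_messages(cur)   # we are the active process
            # on timeout, just loop: the standby takes over the moment the
            # active process's connection drops and the lock is freed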


  • MySQL database cannot connect with cPanel [closed]

    - by Rafee
    This question was also asked on http://stackoverflow.com/questions/8182119/mysql-database-cannot-connect-with-cpanel

        <?php
        $con1 = mysql_connect("mywebsiteip", "mysql_username", "mysql_user_password");
        if (!$con1) {
            die("Could not connect " . mysql_error());
        } else {
            echo "Good connection";
        }
        mysql_close($con1);
        ?>

    When I run it, it cannot connect to the MySQL database on cPanel. I even tried:

        $con1 = mysql_connect("mywebsiteip:portnumber", "mysql_username", "mysql_user_password");

    Can anyone let me know which way is good?


  • MySQL prevents server from booting unless password is entered

    - by ZaneKullman
    I am kind of new to Ubuntu, but I have been working on setting up a LAMP server with Hamachi as a VPN client for management. The issue is that when we turn the server on or restart it, we are required to enter the MySQL password before it will continue. Where can we script a password or disable this? I have attached part of less /var/log/boot.log:

         * Starting MySQL ServerESC[204G[ OK ]
        ....... ok
        Password:

    If I haven't provided enough information, please just comment and I'll try my best.


  • MySQL 100% CPU + slow query

    - by felipeclopes
    I'm using the RDS database from Amazon with some very big tables, and yesterday I started to face 100% CPU utilisation on the server and a bunch of slow query logs that were not happening before. I tried to check the queries that were running and got this result from the explain command:

        +----+-------------+-------------------------------+--------+----------------------------------------------------------------------------------------------+---------------------------------------+---------+--------------------------------------------------------+------+----------------------------------------------+
        | id | select_type | table                         | type   | possible_keys                                                                                | key                                   | key_len | ref                                                    | rows | Extra                                        |
        +----+-------------+-------------------------------+--------+----------------------------------------------------------------------------------------------+---------------------------------------+---------+--------------------------------------------------------+------+----------------------------------------------+
        |  1 | SIMPLE      | businesses                    | const  | PRIMARY                                                                                      | PRIMARY                               | 4       | const                                                  |    1 | Using index; Using temporary; Using filesort |
        |  1 | SIMPLE      | activities_businesses         | ref    | PRIMARY,index_activities_users_on_business_id,index_tweets_users_on_tweet_id_and_business_id | index_activities_users_on_business_id | 9       | const                                                  | 2252 | Using index condition; Using where           |
        |  1 | SIMPLE      | activities_b_taggings_975e9c4 | ref    | taggings_idx                                                                                 | taggings_idx                          | 782     | const,myapp_production.activities_businesses.id,const |    1 | Using index condition; Using where           |
        |  1 | SIMPLE      | activities                    | eq_ref | PRIMARY,index_activities_on_created_at                                                      | PRIMARY                               | 8       | myapp_production.activities_businesses.activity_id     |    1 | Using where                                  |
        +----+-------------+-------------------------------+--------+----------------------------------------------------------------------------------------------+---------------------------------------+---------+--------------------------------------------------------+------+----------------------------------------------+

    Also, checking the process list, I got something like this:

        +----+----------+-------------------+-------------------+---------+------+--------------+---------------------+
        | Id | User     | Host              | db                | Command | Time | State        | Info                |
        +----+----------+-------------------+-------------------+---------+------+--------------+---------------------+
        |  1 | my_app   | my_ip:57152       | my_app_production | Sleep   |    0 |              | NULL                |
        |  2 | my_app   | my_ip:57153       | my_app_production | Sleep   |    2 |              | NULL                |
        |  3 | rdsadmin | localhost:49441   | NULL              | Sleep   |    9 |              | NULL                |
        |  6 | my_app   | my_other_ip:47802 | my_app_production | Sleep   |  242 |              | NULL                |
        |  7 | my_app   | my_other_ip:47807 | my_app_production | Query   |  231 | Sending data | SELECT my_fields... |
        |  8 | my_app   | my_other_ip:47809 | my_app_production | Query   |  231 | Sending data | SELECT my_fields... |
        |  9 | my_app   | my_other_ip:47810 | my_app_production | Query   |  231 | Sending data | SELECT my_fields... |
        | 10 | my_app   | my_other_ip:47811 | my_app_production | Query   |  231 | Sending data | SELECT my_fields... |
        | 11 | my_app   | my_other_ip:47813 | my_app_production | Query   |  231 | Sending data | SELECT my_fields... |
        ...

    So based on the numbers, it looks like there is no reason to have a slow query, since the worst execution plan is the one that goes through 2k rows, which is not much.

    Edit 1: Another piece of information that might be useful is the slow query log:

        SET timestamp=1401457485;
        SELECT my_query...
        # User@Host: myapp[myapp] @ ip-10-195-55-233.ec2.internal [IP] Id: 435
        # Query_time: 95.830497  Lock_time: 0.000178  Rows_sent: 0  Rows_examined: 1129387

    Edit 2: After profiling, I got this result. The result has approximately 250 rows with two columns each.

        +----------------------+----------+
        | state                | duration |
        +----------------------+----------+
        | Sending data         |      272 |
        | removing tmp table   |        0 |
        | optimizing           |        0 |
        | Creating sort index  |        0 |
        | init                 |        0 |
        | cleaning up          |        0 |
        | executing            |        0 |
        | checking permissions |        0 |
        | freeing items        |        0 |
        | Creating tmp table   |        0 |
        | query end            |        0 |
        | statistics           |        0 |
        | end                  |        0 |
        | System lock          |        0 |
        | Opening tables       |        0 |
        | logging slow query   |        0 |
        | Sorting result       |        0 |
        | starting             |        0 |
        | closing tables       |        0 |
        | preparing            |        0 |
        +----------------------+----------+

    Edit 3: Adding the query, as requested:

        SELECT activities.share_count, activities.created_at
        FROM `activities_businesses`
        INNER JOIN `businesses` ON `businesses`.`id` = `activities_businesses`.`business_id`
        INNER JOIN `activities` ON `activities`.`id` = `activities_businesses`.`activity_id`
        JOIN taggings activities_b_taggings_975e9c4
          ON activities_b_taggings_975e9c4.taggable_id = activities_businesses.id
          AND activities_b_taggings_975e9c4.taggable_type = 'ActivitiesBusiness'
          AND activities_b_taggings_975e9c4.tag_id = 104
          AND activities_b_taggings_975e9c4.created_at >= '2014-04-30 13:36:44'
        WHERE ( businesses.id = 1 )
          AND ( activities.created_at > '2014-04-30 13:36:44' )
          AND ( activities.created_at < '2014-05-30 12:27:03' )
        ORDER BY activities.created_at;

    Edit 4: There may be a chance that the indexes are not being applied due to a difference in column type between taggings and activities_businesses, on the taggable_id column.

        mysql> SHOW COLUMNS FROM activities_businesses;
        +-------------+------------+------+-----+---------+----------------+
        | Field       | Type       | Null | Key | Default | Extra          |
        +-------------+------------+------+-----+---------+----------------+
        | id          | int(11)    | NO   | PRI | NULL    | auto_increment |
        | activity_id | bigint(20) | YES  | MUL | NULL    |                |
        | business_id | bigint(20) | YES  | MUL | NULL    |                |
        +-------------+------------+------+-----+---------+----------------+
        3 rows in set (0.01 sec)

        mysql> SHOW COLUMNS FROM taggings;
        +---------------+--------------+------+-----+---------+----------------+
        | Field         | Type         | Null | Key | Default | Extra          |
        +---------------+--------------+------+-----+---------+----------------+
        | id            | int(11)      | NO   | PRI | NULL    | auto_increment |
        | tag_id        | int(11)      | YES  | MUL | NULL    |                |
        | taggable_id   | bigint(20)   | YES  |     | NULL    |                |
        | taggable_type | varchar(255) | YES  |     | NULL    |                |
        | tagger_id     | int(11)      | YES  |     | NULL    |                |
        | tagger_type   | varchar(255) | YES  |     | NULL    |                |
        | context       | varchar(128) | YES  |     | NULL    |                |
        | created_at    | datetime     | YES  |     | NULL    |                |
        +---------------+--------------+------+-----+---------+----------------+

    So it is examining way more rows than the explain shows, probably because some indexes are not being applied. Can you guys help me with that?


  • [python] Voice communication in Python - help!

    - by Eric
    Hello! I'm currently trying to write a voice chat program in Python. All tips/tricks are welcome. So far I found PyAudio, a wrapper for PortAudio. I played around with that and got an input stream from my microphone played back to my speakers, raw audio only, of course. But I can't send raw data over the network (due to the size, duh), so I'm looking for a way to encode it. I searched around the 'net and stumbled over a speex wrapper for Python. It seemed too good to be true, and believe me, it was. You see, in PyAudio you can set the size of the chunks you want to take from your input audio buffer, and in the sample code on that link it's set to 320. Then when a chunk is encoded, it's ~40 bytes of data per chunk, which is fairly acceptable, I guess. And now for the problem. I start a sample program which just takes the input stream, encodes the chunks, decodes them, and plays them (not sending over the network, just testing). If I let my computer idle and run this program, it works great, but as soon as I do something, e.g. start Firefox, the audio input buffer gets all clogged up! It just grows, and then it all crashes and gives me an overflow error on the buffer. OK, so why am I taking only 320 bytes from the stream? I could take 1024 bytes or so and ease the pressure on the buffer. BUT: if I give speex 1024 bytes of data to encode/decode, it either crashes and says that's too big for its buffer, OR it encodes/decodes it but the sound is very noisy and "choppy", as if it only encoded a tiny bit of that 1024-byte chunk and the rest is static noise. The sound is like a helicopter, lol. I did some research and it seems that speex can only convert 320 bytes of data at a time (640 for wide-band). But that's the standard? How can I fix this problem? How should I structure my program to work with speex? I could use a middle buffer that takes all available data from the input buffer and then chunks it up into 320-byte pieces to encode/decode, but that takes a bit longer and seems like a very bad solution to the problem. Because, as far as I know, there's no other encoder for Python that compresses audio so it can be sent over the network in acceptably small packages, is there? I've been googling for three days now. There is also the PyMedia library; I don't know if it's good to convert to mp3/ogg for this kind of software. Thanks in advance for reading this; I hope someone can help me! (:
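
    The middle-buffer approach dismissed above is, as far as I know, the standard way to drive speex: the codec consumes fixed 20 ms frames (160 samples, i.e. 320 bytes of 16-bit narrowband audio), so the soundcard should be read in large chunks and the bytes sliced into frame-sized pieces before encoding. A sketch of that buffering, assuming a PyAudio input stream; the encoder object and its encode() method are hypothetical stand-ins for whichever speex wrapper is used:

        FRAME_BYTES = 320   # one 20 ms narrowband frame: 160 samples * 2 bytes

        pending = b""

        def encode_available(stream, encoder):
            """Drain the soundcard, then encode complete frames only."""
            global pending
            # Reading many frames at once keeps PortAudio's buffer from
            # overflowing while the machine is busy elsewhere; the leftover
            # tail simply waits for the next call.
            pending += stream.read(512)   # 512 frames = 1024 bytes of 16-bit mono
            packets = []
            while len(pending) >= FRAME_BYTES:
                frame = pending[:FRAME_BYTES]
                pending = pending[FRAME_BYTES:]
                packets.append(encoder.encode(frame))   # hypothetical wrapper API
            return packets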


  • MySQL - Get row number on select

    - by George
    Can I run a select statement and get the row number if the items are sorted? I have a table like this:

        mysql> describe orders;
        +---------+---------------------+------+-----+---------+----------------+
        | Field   | Type                | Null | Key | Default | Extra          |
        +---------+---------------------+------+-----+---------+----------------+
        | orderID | bigint(20) unsigned | NO   | PRI | NULL    | auto_increment |
        | itemID  | bigint(20) unsigned | NO   |     | NULL    |                |
        +---------+---------------------+------+-----+---------+----------------+

    I can then run this query to get the number of orders by ID:

        SELECT itemID, COUNT(*) as ordercount
        FROM orders
        GROUP BY itemID
        ORDER BY ordercount DESC;

    This gives me a count of each itemID in the table, like this:

        +--------+------------+
        | itemID | ordercount |
        +--------+------------+
        |    388 |          3 |
        |    234 |          2 |
        |   3432 |          1 |
        |    693 |          1 |
        |   3459 |          1 |
        +--------+------------+

    I want to get the row number as well, so I could tell that itemID 388 is the first row, 234 is second, etc. (essentially the ranking of the orders, not just a raw count). I know I can do this in Java when I get the result set back, but I was wondering if there was a way to handle it purely in SQL.
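
    For the record, a common MySQL 5.x idiom for this is numbering the rows with a user variable in the same statement. A sketch, run here through Python's MySQLdb since that is this page's theme (connection details are placeholders; the table and columns are the ones from the question):

        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="me", passwd="secret", db="shop")
        cur = conn.cursor()

        # @rank increments once per row of the sorted subquery, yielding
        # the ranking 1, 2, 3, ... alongside each count.
        cur.execute("SET @rank := 0")
        cur.execute("""
            SELECT @rank := @rank + 1 AS ranking, itemID, ordercount
            FROM (SELECT itemID, COUNT(*) AS ordercount
                  FROM orders
                  GROUP BY itemID
                  ORDER BY ordercount DESC) sorted
        """)
        for ranking, item_id, count in cur.fetchall():
            print(ranking, item_id, count)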


  • Django: select max field from MySQL when the column is varchar

    - by doza
    Hi, using Django 1.1, I am trying to select the maximum value from a varchar column (in MySQL). The data stored in the column looks like:

        9001
        9002
        9017
        9624
        10104
        11823

    (In reality, the numbers are much bigger than this.) This worked until the numbers incremented above 10000:

        Feedback.objects.filter(est__pk=est_id).aggregate(sid=Max('sid'))

    Now, that same line would return 9624 instead of 11823. I'm able to run a query directly in the DB that gives me what I need, but I can't figure out the best way to do this in Django. The query would be:

        select max(sid+0) from Feedback;

    Any help would be much appreciated. Thanks!
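
    The cause is that MAX() on a varchar compares strings, and '9624' sorts after '11823' lexically. A hedged sketch of two workarounds in Django 1.1, using the model from the question (the app_feedback table name is a guess; check Feedback._meta.db_table):

        from django.db import connection
        # from myapp.models import Feedback  # hypothetical app path

        # Option 1: raw SQL through Django's cursor; sid+0 coerces the
        # varchar to a number, so MAX() compares numerically.
        cur = connection.cursor()
        cur.execute("SELECT MAX(sid + 0) FROM app_feedback")
        (max_sid,) = cur.fetchone()

        # Option 2: stay in the ORM by pushing the same cast into extra():
        # Feedback.objects.filter(est__pk=est_id).extra(
        #     select={'sid_num': 'sid + 0'},
        #     order_by=['-sid_num'])[:1]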


  • MySQL stored function - CREATE FUNCTION (function definition) problem using FORMAT

    - by Jason Fonseca
    Hi all, I keep receiving an error with the following code. I am trying to make a function that will format a field (content=0.0032) into a varchar/percentage (content=0.32%). At the moment I'm just trying to get FORMAT to work, and it throws up an error:

        Error Code : 1064
        You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'len);'

    The definition of FORMAT is FORMAT(X,D), where X is the number and D is the number of decimal places to round to. It should output a string ###,###,###.## etc. My code is as follows:

        DROP FUNCTION IF EXISTS percent;
        DELIMITER $$

        CREATE /*[DEFINER = { user | CURRENT_USER }]*/ FUNCTION `auau7859_aba`.`percent`(num DOUBLE, len INT)
            RETURNS VARCHAR(10)
            DETERMINISTIC
        BEGIN
            RETURN FORMAT(num, len);
        END$$

        DELIMITER ;

    Save me... Luke
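
    In case it helps, sending the whole CREATE FUNCTION as a single API call sidesteps delimiter handling entirely, since one call is one statement; and for the percentage conversion itself, scaling by 100 and concatenating the sign can live inside the function. A hedged sketch via Python's MySQLdb (credentials are placeholders, and creating stored functions requires the appropriate privileges):

        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="me", passwd="secret", db="auau7859_aba")
        cur = conn.cursor()

        cur.execute("DROP FUNCTION IF EXISTS percent")
        # One execute() = one statement, so no DELIMITER juggling is needed.
        cur.execute("""
        CREATE FUNCTION percent(num DOUBLE, len INT)
            RETURNS VARCHAR(20)
            DETERMINISTIC
        BEGIN
            -- 0.0032 -> '0.32%': scale to a percentage, format, add the sign
            RETURN CONCAT(FORMAT(num * 100, len), '%');
        END
        """)

        cur.execute("SELECT percent(0.0032, 2)")
        print(cur.fetchone()[0])   # expected: 0.32%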


  • Synchronize model in MySQL Workbench

    - by Álvaro G. Vicario
    After reading the documentation for MySQL Workbench, I got the impression that it's possible to alter a database on the server (e.g. add a new column) and later incorporate the DDL changes into your EER diagram. At least, it has a Synchronize Model option in the Database menu. I found this a nice feature because I could use a graphic modelling tool without becoming its prisoner. In practice, when I run the tool I'm offered these options:

        Model              Update   Source
        ================   ======   ======
        my_database_name   --> !    N/A
        my_table_name      --> !    N/A
        N/A                --> !    my_database_name
        N/A                --> !    my_table_name

    I can't really understand it, but leaving it as is, I basically get:

        DROP SCHEMA my_database_name
        CREATE SCHEMA my_database_name
        CREATE TABLE my_table_name

    This is a dump of the model that overwrites all remote changes in my_table_name. Am I misunderstanding the feature?


  • Get signal names from numbers in Python

    - by Brian M. Hunt
    Is there a way to map a signal number (e.g. signal.SIGINT) to its respective name (i.e. "SIGINT")? I'd like to be able to print the name of a signal in the log when I receive it; however, I cannot find a map from signal numbers to names in Python, i.e.:

        import logging
        import signal

        def signal_handler(signum, frame):
            logging.debug("Received signal (%s)" % sig_names[signum])

        signal.signal(signal.SIGINT, signal_handler)

    For some dictionary sig_names, so when the process receives SIGINT it prints:

        Received signal (SIGINT)

    Thank you.
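
    The signal module itself can provide that dictionary: its SIG* attributes map names to numbers, so inverting them maps numbers back to names. A small sketch (the SIG_ filter skips the SIG_DFL/SIG_IGN handler constants; where two names share a number, as with SIGCHLD/SIGCLD on some platforms, the dict keeps only one of them):

        import signal

        # Invert the module's SIG* constants into {number: name}.
        sig_names = dict(
            (getattr(signal, name), name)
            for name in dir(signal)
            if name.startswith("SIG") and not name.startswith("SIG_")
        )

        print(sig_names[signal.SIGINT])   # -> SIGINT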


  • Python: Implementing slicing in __getitem__

    - by nicotine
    I am trying to implement slice functionality for a class I am making that creates a vector representation. I have this code so far, which I believe will properly implement the slice, but whenever I do a call like v[4] (where v is a vector), Python returns an error about not having enough parameters. So I am trying to figure out how to define __getitem__ to handle both plain indexes and slicing:

        def __getitem__(self, start, stop, step):
            indx = start
            if stop == None:
                end = start + 1
            else:
                end = stop
            if step == None:
                stride = 1
            else:
                stride = step
            return self.__data[indx:end:stride]
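
    The missing-parameters error comes from the signature: Python always calls __getitem__ with exactly one argument besides self. For v[4] that argument is an int; for v[1:4:2] it is a slice object. The usual fix is to dispatch on the type of that one argument; a sketch using the __data attribute from the question:

        class Vector:
            def __init__(self, data):
                self.__data = list(data)

            def __getitem__(self, key):
                # v[1:4:2] arrives as a slice object, v[4] as a plain int.
                if isinstance(key, slice):
                    return Vector(self.__data[key])   # slices give a new Vector
                return self.__data[key]               # ints give one element

            def __repr__(self):
                return "Vector(%r)" % (self.__data,)

        v = Vector([10, 20, 30, 40, 50])
        print(v[4])      # -> 50
        print(v[1:4])    # -> Vector([20, 30, 40])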


  • SyntaxError using gdata-python-client to access Google Book Search Data API

    - by isbadawi
        >>> import gdata.books.service
        >>> service = gdata.books.service.BookService()
        >>> results = service.search_by_keyword(isbn='0434003484')
        Traceback (most recent call last):
          File "<pyshell#4>", line 1, in <module>
            results = service.search_by_keyword(isbn='0434003484')
          ... snip ...
          File "C:\Python26\lib\site-packages\atom\__init__.py", line 127, in CreateClassFromXMLString
            tree = ElementTree.fromstring(xml_string)
          File "<string>", line 85, in XML
        SyntaxError: syntax error: line 1, column 0

    This is a minimal example; in particular, the book service unit tests included in the package also fail with the exact same error. I've looked at the wiki and open issue tickets on Google Code to no avail (and this seems to me more apt to be a silly error on my end rather than a problem with the library). I'm not sure how to interpret the error message. If it matters, I'm using Python 2.6.5.


  • Python + MySQLdb executemany

    - by lhahne
    I'm using Python and its MySQLdb module to import some measurement data into a MySQL database. The amount of data that we have is quite high (currently about ~250 MB of CSV files and plenty more to come). Currently I use cursor.execute(...) to import some metadata. This isn't problematic, as there are only a few entries for these. The problem is that when I try to use cursor.executemany() to import larger quantities of the actual measurement data, MySQLdb raises a

        TypeError: not all arguments converted during string formatting

    My current code is:

        def __insert_values(self, values):
            cursor = self.connection.cursor()
            cursor.executemany("""
                insert into values (ensg, value, sampleid)
                values (%s, %s, %s)""", values)
            cursor.close()

    where values is a list of tuples containing three strings each. Any ideas what could be wrong with this?

    Edit: The values are generated by

        yield (prefix + row['id'], row['value'], sample_id)

    and then read into a list one thousand at a time, where row is an iterator coming from csv.DictReader.
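
    One thing that stands out in the statement itself: there is no table name between insert into and the column list, so either the table name was lost, or the table is literally called values, which is a reserved word and would need backtick quoting. A hedged sketch of an executemany() call of this shape that does work (connection details and sample rows are placeholders):

        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="me", passwd="secret", db="measurements")
        cur = conn.cursor()

        rows = [("ENSG001", "0.12", "s1"),
                ("ENSG002", "0.98", "s1")]

        # Backticks are required if the table really is named "values";
        # otherwise put the real table name here.
        cur.executemany(
            "insert into `values` (ensg, value, sampleid) values (%s, %s, %s)",
            rows)
        conn.commit()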


  • Filtering a MySQL query result according to a timestamp interval

    - by celalo
    Let's say I have a very large MySQL table with a timestamp field. I want to filter out some of the results so as not to have too many rows, because I am going to print them. Let's say the timestamps increase as the number of rows increases, and they come about one per minute on average. (They don't necessarily come exactly once every minute, e.g.: 2010-06-07 03:55:14, 2010-06-07 03:56:23, 2010-06-07 03:57:01, 2010-06-07 03:57:51, 2010-06-07 03:59:21 ...) As I mentioned earlier, I want to filter out some of the records. I don't have a specific rule for doing that, but I was thinking of filtering the rows according to the timestamp interval. After filtering, I want a result set with a certain number of minutes between timestamps on average (e.g.: 2010-06-07 03:20:14, 2010-06-07 03:29:23, 2010-06-07 03:38:01, 2010-06-07 03:49:51, 2010-06-07 03:59:21 ...). Last but not least, the operation should not take an incredible amount of time; I need this functionality to be almost as fast as a normal select operation. Do you have any suggestions?
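
    One approach that stays close to plain-SELECT speed is bucketing rows into fixed time windows and keeping one row per bucket, which gives the "roughly every N minutes" spacing described above. A sketch through Python's MySQLdb, with assumed table and column names (readings, ts) and ten-minute buckets:

        import MySQLdb

        conn = MySQLdb.connect(host="localhost", user="me", passwd="secret", db="logs")
        cur = conn.cursor()

        # Integer-divide the epoch seconds by the bucket size (600 s) and
        # keep the earliest timestamp in each bucket.
        cur.execute("""
            SELECT MIN(ts)
            FROM readings
            GROUP BY UNIX_TIMESTAMP(ts) DIV 600
            ORDER BY MIN(ts)
        """)
        for (ts,) in cur.fetchall():
            print(ts)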


  • Python: Slicing a list into n nearly-equal-length partitions

    - by Drew
    I'm looking for a fast, clean, pythonic way to divide a list into exactly n nearly-equal partitions.

        partition([1,2,3,4,5], 5) -> [[1],[2],[3],[4],[5]]
        partition([1,2,3,4,5], 2) -> [[1,2],[3,4,5]] (or [[1,2,3],[4,5]])
        partition([1,2,3,4,5], 3) -> [[1,2],[3,4],[5]] (there are other ways to slice this one too)

    There are several answers in http://stackoverflow.com/questions/1335392/iteration-over-list-slices that run very close to what I want, except they are focused on the size of the list, and I care about the number of the lists (some of them also pad with None). These are trivially converted, obviously, but I'm looking for a best practice. Similarly, people have pointed out great solutions in http://stackoverflow.com/questions/312443/how-do-you-split-a-list-into-evenly-sized-chunks-in-python for a very similar problem, but I'm more interested in the number of partitions than the specific size, as long as it's within 1. Again, this is trivially convertible, but I'm looking for a best practice.
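
    For reference, one compact way to satisfy the "within 1" constraint is to hand the remainder out one element at a time to the first few partitions; a sketch:

        def partition(lst, n):
            """Split lst into n contiguous parts whose lengths differ by at most 1."""
            q, r = divmod(len(lst), n)
            parts, start = [], 0
            for k in range(n):
                size = q + (1 if k < r else 0)   # the first r parts get one extra
                parts.append(lst[start:start + size])
                start += size
            return parts

        print(partition([1, 2, 3, 4, 5], 3))   # -> [[1, 2], [3, 4], [5]]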


  • python: variable not getting defined after several conditionals

    - by Protean
    For some reason this program is saying that 'switch' is not defined. What is going on?

        #PYTHON 3.1.1
        class mysrt:
            def __init__(self):
                self.DATA = open('ORDER.txt', 'r')
                self.collect = 0
                cache1 = str(self.DATA.readlines())
                cache2 = []
                for i in range(len(cache1)):
                    if cache1[i] == '*':
                        if self.collect == 0:
                            self.collect = 1
                        elif self.collect == 1:
                            self.collect = 0
                    elif self.collect == 1:
                        cache2.append(cache1[i])
                self.ORDER = cache2
                self.ARRAY = []
                self.GLOBALi = 0
                self.GLOBALmax = range(len(self.ORDER))
                self.GLOBALc = []
                self.GLOBALl = []

            def sorter(self, array):
                CACHE_LIST_1 = []
                CACHE_LIST_2 = []
                i = 0
                for ORDERi in range(len(self.ORDER)):
                    for ARRAYi in range(len(array)):
                        CACHE = array[ARRAYi]
                        if CACHE[self.GLOBALi] == self.ORDER[ORDERi]:
                            CACHE_LIST_1.append(CACHE)
                        else:
                            CACHE_LIST_2.append(CACHE)
                for i in range(len(CACHE_LIST_1)):
                    if CACHE_LIST_1[0] == CACHE_LIST_1[i] or range(len(CACHE_LIST_1)) == 1:
                        switch = 1
                        print ('1')
                    else:
                        switch = 0
                        print ('0')
                        break
                if switch == 1:
                    self.GLOBALl += CACHE_LIST_1 + self.GLOBALc
                    self.GLOBALi = 0
                    self.GLOBALc = []
                else:
                    self.GLOBALi += 1
                    self.GLOBALc += CACHE_LIST_2
                    mysrt.sorter(CACHE)
                return (self.GLOBALl)
                #GLOBALi =0
                # if range(len(self.GLOBALc)) =! range(len(self.ARRAY))

        array = ['ape', 'cow','dog','bat']
        ORDER_FILE = []
        mysort = mysrt()
        print (mysort.sorter(array))
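
    A likely reading of the error (hedged, since the exact traceback isn't shown): switch is assigned only inside the for i in range(len(CACHE_LIST_1)) loop, so when CACHE_LIST_1 comes out empty, the loop body never runs and the later if switch == 1 test hits a name that was never defined. Initializing it before the loop guarantees the name exists; note also that range(len(CACHE_LIST_1)) == 1 compares a range object to an integer and is always False in Python 3, so len(CACHE_LIST_1) == 1 is presumably what was meant:

        switch = 0   # defined even when CACHE_LIST_1 is empty
        for i in range(len(CACHE_LIST_1)):
            if CACHE_LIST_1[0] == CACHE_LIST_1[i] or len(CACHE_LIST_1) == 1:
                switch = 1
            else:
                switch = 0
                break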

