Search Results

Search found 592 results on 24 pages for 'innodb'.

  • Hibernate doesn't generate cascade

    - by Shervin
    Hi. I have set hibernate.hbm2ddl.auto to create so that Hibernate creates the tables in MySQL for me. However, Hibernate doesn't seem to add cascades to the references in the generated tables. It does work when I, for instance, delete a row and have a delete cascade as a Hibernate annotation. So I guess that means Hibernate reads the annotation at runtime and performs the cascading manually? Is that normal behavior? For instance:

        @Entity
        class Report {
            @OneToOne(cascade = CascadeType.ALL)
            public File getPdf() {
                return pdf;
            }
        }

    Here I have set cascade to ALL. However, running SHOW CREATE TABLE Report gives:

        CREATE TABLE `Report` (
          `id` bigint(20) NOT NULL AUTO_INCREMENT,
          `pdf_id` bigint(20) DEFAULT NULL,
          PRIMARY KEY (`id`),
          KEY `FK91B14154FDE6543A` (`pdf_id`),
          CONSTRAINT `FK91B14154FDE6543A` FOREIGN KEY (`pdf_id`) REFERENCES `File` (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8

    It doesn't say anything about cascading other than the foreign key. In my opinion, it should have added ON DELETE CASCADE and ON UPDATE CASCADE.
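
    What the question observes is in fact how JPA works: CascadeType only tells the provider which operations to propagate in memory at runtime, and it is never translated into DDL. A database-level cascade comes either from Hibernate's vendor-specific @OnDelete annotation or from altering the table by hand. A sketch of the manual route, reusing the constraint name from the dump above:

        ALTER TABLE `Report` DROP FOREIGN KEY `FK91B14154FDE6543A`;
        ALTER TABLE `Report`
          ADD CONSTRAINT `FK91B14154FDE6543A`
            FOREIGN KEY (`pdf_id`) REFERENCES `File` (`id`)
            ON DELETE CASCADE ON UPDATE CASCADE;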

  • MySQL Locking Up

    - by Ian
    I've got an InnoDB table that gets a lot of reads and almost no writes (roughly 1 write for every 400,000 reads). I'm running into a pretty big problem, though, when I do an INSERT into the table: MySQL completely locks up. It uses 100% CPU, and every single other table (even in other databases) has its status set to "Locked" until the INSERT is done. This is a big problem because MySQL stays locked up for up to 4 minutes. I'm using version 5.1.47 (rpm from mysql.com). Any ideas?
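
    On 5.1 a large query cache is a common culprit for exactly this symptom: every write invalidates all cached results for the table, and the invalidation runs under a global mutex that stalls every other connection. Some diagnostics worth running while the INSERT is stuck (all stock MySQL commands):

        SHOW FULL PROCESSLIST;               -- what each connection is waiting on
        SHOW ENGINE INNODB STATUS;           -- lock and semaphore detail
        SHOW VARIABLES LIKE 'query_cache%';  -- a cache of hundreds of MB makes invalidation very slow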

  • How do I serialize the product using PHP?

    - by Ibrahim Azhar Armar
    Hi, I am building a real estate application that stores properties and searches them. Each property belongs to a category (residential, commercial, industrial or agricultural), and based on the category I want to serialize every property listing. For example, the property with id 1, being residential, would get the serial code rs_SOMERANDOMUNIQUENUMBER; a commercial one would get cm_SOMERANDOMUNIQUENUMBER; and so on. For this, my database table looks like this:

        CREATE TABLE IF NOT EXISTS `propSerials` (
          `id` bigint(20) NOT NULL auto_increment,
          `serial` varchar(50) NOT NULL,
          `property_id` int(10) UNIQUE NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    What would be the best possible format to store the serial with the prefix according to category? Thank you.
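
    One way to keep the whole thing in SQL is to build the serial from a category-to-prefix CASE expression plus MySQL's UUID() for the random part. A sketch, assuming a hypothetical properties table with id and category columns:

        INSERT INTO propSerials (serial, property_id)
        SELECT CONCAT(CASE p.category
                        WHEN 'residential' THEN 'rs'
                        WHEN 'commercial'  THEN 'cm'
                        WHEN 'industrial'  THEN 'in'
                        ELSE 'ag'
                      END,
                      '_',
                      UPPER(REPLACE(UUID(), '-', '')))  -- 32 hex chars, fits varchar(50)
               , p.id
        FROM properties p
        WHERE p.id = 1;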

  • Inserting records in MySQL with INSERT IGNORE and NULL values

    - by Homer1980ar
    I have a partitioned InnoDB table with several fields, and I'm trying to avoid duplicates on insert. Let's say:

        Field1 int null
        Field2 int null
        Field3 int null
        Field4 int null
        Field5 int null

    I have created a UNIQUE index on those fields. I insert some records with NULL values and then try to reinsert them with MySQL's INSERT IGNORE. Unfortunately, the records get duplicated when NULL values are involved. If I use zeros instead of NULLs everything works, but I do need the nulls there. Any idea? Thanks, Leonardo
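
    This is documented behavior rather than a bug: in a MySQL UNIQUE index, NULL never compares equal to NULL, so rows containing NULLs can repeat freely. A common workaround is a NOT NULL hash column over the five fields carrying the unique index instead (note that MySQL also requires every unique key on a partitioned table to include the partitioning column). A sketch, with a hypothetical table name:

        ALTER TABLE my_table
          ADD COLUMN row_hash CHAR(32) NOT NULL,
          ADD UNIQUE KEY uq_row_hash (row_hash);

        INSERT IGNORE INTO my_table (Field1, Field2, Field3, Field4, Field5, row_hash)
        VALUES (1, NULL, 3, NULL, 5,
                MD5(CONCAT_WS('|',   -- COALESCE each value so NULLs hash deterministically
                    COALESCE(1, 'n'), COALESCE(NULL, 'n'), COALESCE(3, 'n'),
                    COALESCE(NULL, 'n'), COALESCE(5, 'n'))));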

  • Working with foreign keys - cannot insert

    - by Industrial
    Hi everyone! I'm doing my first tryouts with foreign keys in a MySQL database and am trying to do an insert, which fails for this reason:

        Integrity constraint violation: 1452 Cannot add or update a child row: a foreign key constraint fails

    Does this mean that foreign keys restrict INSERTs as well as DELETEs and/or UPDATEs on each table that is enforced with foreign key relations? Thanks!

    Updated description:

        Products
        ----------------------------
        id        | type
        ----------------------------
        0         | 0
        1         | 3

        ProductsToCategories
        ----------------------------
        productid | categoryid
        ----------------------------
        0         | 0
        1         | 1

    The products table has the following structure:

        CREATE TABLE IF NOT EXISTS `alpha`.`products` (
          `id` MEDIUMINT UNSIGNED NOT NULL AUTO_INCREMENT,
          `type` TINYINT(2) UNSIGNED NOT NULL DEFAULT 0,
          PRIMARY KEY (`id`),
          CONSTRAINT `prodsku` FOREIGN KEY (`id`)
            REFERENCES `alpha`.`productsToSku` (`product`)
            ON DELETE CASCADE, ON UPDATE CASCADE
        ) ENGINE = InnoDB;
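
    In short, yes: a foreign key also constrains INSERT and UPDATE on the child table, so a products row can only be inserted once a matching productsToSku.product value already exists. Separately, the reference options must not be comma-separated; the constraint as quoted is itself a syntax error. A corrected version of the statement:

        CREATE TABLE IF NOT EXISTS `alpha`.`products` (
          `id` MEDIUMINT UNSIGNED NOT NULL AUTO_INCREMENT,
          `type` TINYINT(2) UNSIGNED NOT NULL DEFAULT 0,
          PRIMARY KEY (`id`),
          CONSTRAINT `prodsku` FOREIGN KEY (`id`)
            REFERENCES `alpha`.`productsToSku` (`product`)
            ON DELETE CASCADE
            ON UPDATE CASCADE
        ) ENGINE = InnoDB;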

  • MySQL Inserting into locked aliased table

    - by Whitey
    I am trying to insert data into an InnoDB MySQL table which is locked using an alias, and I cannot for the life of me get it to work! The following works:

        LOCK TABLES Problems p1 WRITE, Problems p2 WRITE, Server READ;
        SELECT * FROM Problems p1;
        UNLOCK TABLES;

    But try to do an insert and it doesn't work (it claims there is a syntax error around the 'p1' in my INSERT):

        LOCK TABLES Problems p1 WRITE, Problems p2 WRITE, Server READ;
        INSERT INTO Problems p1 (SomeCol) VALUES(43534);
        UNLOCK TABLES;

    Help please!
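
    The syntax error is real: MySQL's INSERT does not accept a table alias. And under LOCK TABLES a table may only be used by the exact name or alias it was locked under, so the table has to be locked once more under its real name for the INSERT to go through. A sketch:

        LOCK TABLES Problems WRITE, Problems p1 WRITE, Problems p2 WRITE, Server READ;
        INSERT INTO Problems (SomeCol) VALUES (43534);  -- real name, no alias
        SELECT * FROM Problems p1;                      -- aliased reads still work
        UNLOCK TABLES;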

  • Optimizing MySQL statement with lots of count(row) and sum(row+row2)...

    - by Zombies
    I need to use the InnoDB storage engine on a table with about 1 million or so records in it at any given time. Records are inserted at a very fast rate and are then dropped within a few days, maybe a week. The ping table has about a million rows, whereas the website table has only about 10,000. My statement is this:

        SELECT url
        FROM website ws, ping pi
        WHERE ws.idproxy = pi.idproxy
          AND pi.entrytime > CURDATE() - 3
          AND contentping + tcpping IS NOT NULL
        GROUP BY url
        HAVING SUM(contentping + tcpping) / (COUNT(*) - COUNT(errortype)) < 500
           AND COUNT(*) > 3
           AND COUNT(errortype) / COUNT(*) < .15
        ORDER BY SUM(contentping + tcpping) / (COUNT(*) - COUNT(errortype)) ASC;

    I added an index on entrytime, yet no dice. Can anyone throw me a bone as to what I should look into for basic optimization of this query? The result set is only about 200 rows, so I'm not getting killed there.
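
    Two things stand out. First, CURDATE() - 3 does plain numeric arithmetic on the date (2010-05-01 becomes 20100501 - 3 = 20100498, which is not a valid date), so it misbehaves around month boundaries; INTERVAL arithmetic is the safe form. Second, since ping is filtered on idproxy and entrytime together, a composite index over both columns typically beats the single-column index. A sketch:

        ALTER TABLE ping ADD INDEX idx_proxy_time (idproxy, entrytime);

        -- and in the WHERE clause, use interval arithmetic:
        --   pi.entrytime > CURDATE() - INTERVAL 3 DAY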

  • One to One relationship in MySQL

    - by Botto
    I'm trying to make a one-to-one relationship in a MySQL DB. I'm using the InnoDB engine and the basic tables look like this:

        CREATE TABLE `foo` (
          `fooID` INT(11) NOT NULL PRIMARY KEY AUTO_INCREMENT,
          `name` TEXT NOT NULL
        );

        CREATE TABLE `bar` (
          `barName` VARCHAR(100) NOT NULL,
          `fooID` INT(11) NOT NULL PRIMARY KEY,
          CONSTRAINT `contact` FOREIGN KEY (`fooID`) REFERENCES `foo`(`fooID`)
        );

    Once I have set these up, I alter the foo table so that fooID also becomes a foreign key to fooID in bar. The only issue I am facing is that there is an integrity problem when I try to insert into either table. I would like some help, thanks.
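
    The circular pair of foreign keys is what breaks the inserts: each table then demands that the other's row exist first. The usual one-to-one pattern is to leave foo alone; bar's PRIMARY KEY on fooID already guarantees at most one bar row per foo row. A sketch:

        CREATE TABLE `foo` (
          `fooID` INT(11) NOT NULL PRIMARY KEY AUTO_INCREMENT,
          `name` TEXT NOT NULL
        ) ENGINE=InnoDB;

        CREATE TABLE `bar` (
          `fooID` INT(11) NOT NULL PRIMARY KEY,   -- PK doubles as the FK: enforces 1:1
          `barName` VARCHAR(100) NOT NULL,
          CONSTRAINT `contact` FOREIGN KEY (`fooID`) REFERENCES `foo` (`fooID`)
            ON DELETE CASCADE
        ) ENGINE=InnoDB;

        INSERT INTO foo (name) VALUES ('first');  -- parent first
        INSERT INTO bar (fooID, barName) VALUES (LAST_INSERT_ID(), 'details');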

  • Rails migration: modify starting point for auto_increment

    - by railsnew
    I have a table already created. I am looking for a Rails migration where I can modify the starting point of the auto_increment number for the id column of my table; let's say I want it to start from 1000. I googled a bit and came across this:

        :options "string"
        Pass raw options to your underlying database, e.g. auto_increment = 10000.
        Note that passing options will cause you to lose the default ENGINE=InnoDB
        statement.

    Can this be used for what I want? And how would the migration look, since I am changing an existing column and not creating a new one?
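
    The :options hash only applies when creating a table, so for an existing one the simpler route is to issue the raw statement from a migration via execute. The underlying SQL is just this (table name hypothetical):

        ALTER TABLE my_table AUTO_INCREMENT = 1000;

    In the migration's up method that becomes execute("ALTER TABLE my_table AUTO_INCREMENT = 1000"). Note that the value only takes effect if no existing row already has an id of 1000 or more; InnoDB never sets the counter below the current maximum.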

  • How to handle a large table in MySQL?

    - by Frantz Miccoli
    I have a database used to store items and properties of these items. The number of properties is extensible, so there is a join table storing each property value associated with an item:

        CREATE TABLE `item_property` (
          `property_id` int(11) NOT NULL,
          `item_id` int(11) NOT NULL,
          `value` double NOT NULL,
          PRIMARY KEY (`property_id`,`item_id`),
          KEY `item_id` (`item_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

    This database has two goals: storing (which has first priority and has to be very quick; I would like to perform many inserts, hundreds, in a few seconds), and retrieving data (selects using item_id and property_id), which is a second priority; it can be slower, but not too much, because that would ruin my usage of the DB. Currently this table holds 1.6 billion entries, and a simple count can take up to 2 minutes... Inserting isn't fast enough to be usable. I'm using Zend_Db to access my data and would really be happy if you don't suggest developing any PHP-side part. Thanks for your advice!
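
    At this scale the biggest cheap win on the write side is batching: one multi-row INSERT inside one transaction amortizes parsing, round-trips, and the per-commit fsync across hundreds of rows. A sketch:

        START TRANSACTION;
        INSERT INTO item_property (property_id, item_id, value)
        VALUES (1, 100, 0.5),
               (1, 101, 0.7),
               (2, 100, 1.25);   -- extend to hundreds of rows per statement
        COMMIT;

    As for counting: InnoDB keeps no exact row count, so SELECT COUNT(*) walks an index; when an approximation is enough, the Rows estimate from SHOW TABLE STATUS is instant.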

  • Fixing "Lock wait timeout exceeded; try restarting transaction" for a 'stuck" Mysql table?

    - by Tom
    From a script I sent a query like this thousands of times to my local database:

        UPDATE some_table SET some_column = some_value

    I forgot to add the WHERE part, so the same column was set to the same value for all the rows in the table, and this was done thousands of times; the column was indexed, so the corresponding index was probably updated lots of times too. I noticed something was wrong because it took too long, so I killed the script. I have even rebooted my computer since then, but something is stuck in the table: simple queries take a very long time to run, and when I try dropping the relevant index it fails with this message:

        Lock wait timeout exceeded; try restarting transaction

    It's an InnoDB table, so the stuck transaction is probably implicit. How can I fix this table and remove the stuck transaction from it?
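
    The transaction holding the locks has to be found and killed on the server; rebooting the client doesn't help while the server keeps running. On a stock 5.x server, the TRANSACTIONS section of the InnoDB monitor lists every open transaction along with the connection it belongs to. A sketch (the thread id is hypothetical):

        SHOW ENGINE INNODB STATUS;  -- find the long-running transaction and its 'MySQL thread id'
        SHOW PROCESSLIST;           -- cross-check which connection that is
        KILL 12345;                 -- terminate it; InnoDB rolls the transaction back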

  • MySQL: Efficient Blobbing?

    - by feklee
    I'm dealing with blobs of up to, I estimate, about 100 kilobytes in size. The data is already compressed. Storage engine: InnoDB on MySQL 5.1. Frontend: PHP (Symfony with Propel ORM). Some questions:

    I've read somewhere that it's not good to update blobs because it leads to reallocation, fragmentation, and thus bad performance. Is that true? Any reference on this?

    Initially the blobs get constructed by appending data chunks, each up to 16 kilobytes in size. Is it more efficient to use a separate chunk table instead, for example with fields as below?

        parent_id, position, chunk

    Then, to get the entire blob, one would do something like:

        SELECT GROUP_CONCAT(chunk ORDER BY position)
        FROM chunks
        WHERE parent_id = 187;

    The result would be used in a PHP script. Also, is there any difference between the types of blobs, aside from the size needed for metadata, which should be negligible?
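
    One caveat with the GROUP_CONCAT approach: the result is truncated at group_concat_max_len (default 1024 bytes), and the default separator is a comma, which would corrupt binary data. Both need to be set explicitly; a sketch:

        SET SESSION group_concat_max_len = 16 * 1024 * 1024;  -- room for multi-MB blobs

        SELECT GROUP_CONCAT(chunk ORDER BY position SEPARATOR '') AS blob_data
        FROM chunks
        WHERE parent_id = 187;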

  • Why is a MySQL multiple-column index overpopulated?

    - by actual
    Consider the following MySQL table:

        CREATE TABLE `log` (
          `what` enum('add', 'edit', 'remove') CHARACTER SET ascii COLLATE ascii_bin NOT NULL,
          `with` int(10) unsigned NOT NULL,
          KEY `with_what` (`with`,`what`)
        ) ENGINE=InnoDB;

        INSERT INTO `log` (`what`, `with`)
        VALUES ('add', 1), ('edit', 1), ('add', 2), ('remove', 2);

    As I understand it, the with_what index should have 2 unique entries at its first (with) level and 3 unique entries in the what "subindex". But MySQL reports 4 unique entries for each level. In other words, the number of unique elements at each level is always equal to the number of rows in the log table. Is that a bug, a feature, or my misunderstanding?
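
    The cardinality column in SHOW INDEX is an estimate, not a count: InnoDB samples a few index pages (eight by default in 5.x, via innodb_stats_sample_pages) and extrapolates, so on a four-row table the figures can simply mirror the row count. Re-running the sampling sometimes tightens them, but they are never guaranteed exact:

        ANALYZE TABLE log;    -- re-sample the index statistics
        SHOW INDEX FROM log;  -- Cardinality remains an estimate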

  • Pitfalls with mixing storage engines in MySQL with Django?

    - by Dave Orr
    I'm running a Django system over MySQL in Amazon's cloud, and the database default is InnoDB. But now I want to put a fulltext index on a couple of tables for searching, which evidently requires MyISAM. The obvious solution is to just tell MySQL to ALTER TABLE to MyISAM, but are there going to be any issues with that? One that comes to mind is that I'll have to remember to do it any time I rebuild the database, which should theoretically be rare, but there doesn't seem to be a way to tell Django to set the storage engine at the table level. I guess I could write a migration (we use South). Any other things I might be missing? What could possibly go wrong?
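
    The conversion itself is two statements, which drop naturally into a South migration's forwards() via db.execute (table and column names hypothetical):

        ALTER TABLE myapp_article ENGINE = MyISAM;
        ALTER TABLE myapp_article ADD FULLTEXT INDEX ft_body (body);

    The functional trade-offs to weigh: MyISAM has no transactions and only table-level locking, so every write locks the whole converted table, and foreign key constraints involving it do not survive the conversion (MySQL refuses it outright while another InnoDB table still references the one being converted).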

  • Flask-Admin doesn't show all fields

    - by twoface88
    I have a model like this:

        class User(db.Model):
            __tablename__ = 'users'
            __table_args__ = {'mysql_engine': 'InnoDB', 'mysql_charset': 'utf8'}

            id = db.Column(db.Integer, primary_key=True)
            username = db.Column(db.String(80), unique=True)
            email = db.Column(db.String(120), unique=True)
            _password = db.Column('password', db.String(80))

            def __init__(self, username=None, email=None, password=None):
                self.username = username
                self.email = email
                self._set_password(password)

            def _set_password(self, password):
                self._password = generate_password_hash(password)

            def _get_password(self):
                return self._password

            def check_password(self, password):
                return check_password_hash(self._password, password)

            password = db.synonym("_password",
                                  descriptor=property(_get_password, _set_password))

            def __repr__(self):
                return '<User %r>' % self.username

    And I have a ModelView:

        class UserAdmin(sqlamodel.ModelView):
            searchable_columns = ('username', 'email')
            excluded_list_columns = ['password']
            list_columns = ('username', 'email')
            form_columns = ('username', 'email', 'password')

    But no matter what I do, Flask-Admin doesn't show the password field when I'm editing user info. Is there any way, even just to edit the hash itself?

    UPDATE: https://github.com/mrjoes/flask-admin/issues/78

  • Prevent duplicate rows with all non-unique columns (only with MySQL)?

    - by letseatfood
    How do I prevent a duplicate row from being created in a table with two columns, neither of which is unique? And can this be done using MySQL only, or does it require checks in my PHP script? Here is the CREATE query for the table in question (two other tables exist, users and roles):

        CREATE TABLE users_roles (
          user_id INT(100) NOT NULL,
          role_id INT(100) NOT NULL,
          FOREIGN KEY (user_id) REFERENCES users(user_id),
          FOREIGN KEY (role_id) REFERENCES roles(role_id)
        ) ENGINE = INNODB;

    I would like the following query, if executed more than once, to throw an error:

        INSERT INTO users_roles (user_id, role_id) VALUES (1, 2);

    Please do not recommend bitmasks as an answer. Thanks!
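
    Since it is the pair (user_id, role_id) that must be unique, a composite key over both columns does this entirely in MySQL:

        ALTER TABLE users_roles ADD PRIMARY KEY (user_id, role_id);
        -- or, to keep the table without a primary key:
        -- ALTER TABLE users_roles ADD UNIQUE KEY uq_user_role (user_id, role_id);

        INSERT INTO users_roles (user_id, role_id) VALUES (1, 2);  -- ok
        INSERT INTO users_roles (user_id, role_id) VALUES (1, 2);  -- ERROR 1062: duplicate entry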

  • MySQL command-line tool: How to find out number of rows affected by a DELETE?

    - by ambivalence
    I'm trying to run a script that deletes a bunch of rows in a MySQL (InnoDB) table in batches, by executing the following in a loop:

        mysql --user=MyUser --password=MyPassword MyDatabase < SQL_FILE

    where SQL_FILE contains a DELETE FROM ... LIMIT X command. I need to keep running this loop until there are no more matching rows. But unlike running in the mysql shell, the above command does not report the number of rows affected. I've tried -v and -t, but neither works. How can I find out how many rows the batch script affected? Thanks!
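
    ROW_COUNT() returns the number of rows affected by the preceding statement, so appending a SELECT to the batch file puts the count on the command's stdout, where the loop can test it. A sketch of the SQL file (table and predicate hypothetical):

        DELETE FROM my_table WHERE created_at < '2010-01-01' LIMIT 10000;
        SELECT ROW_COUNT();  -- loop in the shell until this prints 0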

  • MySQL Locking table from Stored FUNCTION

    - by Brandon
    I have a function in a MySQL database that determines some sync parameters for a mobile device; it determines the last date/time the user synchronized with the database. During my sync operation I call this server-side function twice. As soon as I call it the second time, the entire Sync_Records table is locked: I cannot write to it from any other connection anywhere (note that after the first call, the table is not locked). I changed the function to a procedure, and all is fine; no locking after the second call. The entire sync operation (including both calls to the function/procedure) is within a transaction. This is an InnoDB table. The function/procedure simply runs two SELECT statements, storing the results in local variables, and then returns the date-time variable. I don't understand why the tables are locked. Does anyone have any ideas?

  • Any way to avoid a filesort when the ORDER BY differs from the WHERE clause?

    - by Julian
    I have an incredibly simple query (InnoDB table), and EXPLAIN says that MySQL must do an extra pass to find out how to retrieve the rows in sorted order.

        SELECT *
        FROM `comments`
        WHERE commentable_id = 1976
        ORDER BY created_at DESC
        LIMIT 0, 5;

    Exact EXPLAIN output:

        table     select_type  type  possible_keys   key             key_len  ref    rows  Extra
        comments  SIMPLE       ref   common_lookups  common_lookups  5        const  89    Using where; Using filesort

    commentable_id is indexed. Comments has nothing tricky in it, just a content field. The manual suggests that if the ORDER BY differs from the WHERE, there is no way a filesort can be avoided: http://dev.mysql.com/doc/refman/5.0/en/order-by-optimization.html

    I also tried ORDER BY id (and its equivalent), but it makes no difference, even if I add id as an index (which I understand is not required, as the primary key is indexed implicitly in MySQL). Thanks in advance for any ideas!
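
    The filesort goes away when one index both filters and sorts: a composite index with the equality column first and the ORDER BY column second lets MySQL read the matching rows already in created_at order (walking the index backwards for DESC) and stop after five. A sketch:

        ALTER TABLE comments ADD INDEX idx_commentable_created (commentable_id, created_at);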

  • How to store a Java Date to MySQL datetime?

    - by user275843
    Can anybody tell me how I can store a Java Date to a MySQL datetime column? When I try, only the date is stored and the time remains 00:00:00. In MySQL the date is stored like this: 2009-09-22 00:00:00. I want not only the date but also the time, e.g. 2009-09-22 08:08:11. Please help me.

    EDIT: I am using JPA (Hibernate) with Spring. My domain classes use java.util.Date, but I created the tables with handwritten queries. This is my CREATE statement:

        CREATE TABLE ContactUs (
          id BIGINT auto_increment,
          userName VARCHAR(30),
          email VARCHAR(50),
          subject VARCHAR(100),
          message VARCHAR(1024),
          messageType VARCHAR(15),
          contactUsTime datetime,
          primary key(id)
        ) TYPE=InnoDB;

  • Doctrine: Mixing YAML markup and db manager (navicat) editing?

    - by ropstah
    I think the answer to this question should be: No. However, I hope to be corrected. I'd like to edit our database using a mixture of YAML markup plus Doctrine's createTables() and Navicat editing. Can I maintain the inheritance that is marked up? Example (4 steps; at step 4, Doctrine is in no way able to re-create the inheritance schema... or is it?):

    Step 1: Create YAML with inheritance:

        ---
        Entity:
          columns:
            username: string(20)
            password: string(16)
            created_at: timestamp
            updated_at: timestamp

        User:
          inheritance:
            extends: Entity
            type: column_aggregation
            keyField: type
            keyValue: 1

        Group:
          inheritance:
            extends: Entity
            type: column_aggregation
            keyField: type
            keyValue: 2

    Step 2: Create the tables using Doctrine (dropping/creating the db if necessary). Generated SQL:

        CREATE TABLE entity (
          id BIGINT AUTO_INCREMENT,
          username VARCHAR(20),
          password VARCHAR(16),
          created_at DATETIME,
          updated_at DATETIME,
          type VARCHAR(255),
          PRIMARY KEY(id)
        ) ENGINE = INNODB

    Step 3: Edit the table using Navicat.

    Step 4: Refresh the YAML file because of 'external' edits...

  • MySQL storage engine dilemma

    - by burntblark
    There are two MySQL database features that I want to use in my application: FULL-TEXT SEARCH and TRANSACTIONS. The dilemma is that I cannot get both features in one storage engine: either I use MyISAM (which has full-text search) or I use InnoDB (which supports transactions); I can't have both. My question is: is there any way I can have both features in my application before I am forced to make a choice between the two storage engines?
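
    A common compromise is to keep the authoritative data in InnoDB and maintain a MyISAM shadow table holding only the searchable text, synchronized by triggers; transactions and FULLTEXT then each live where they are supported. A sketch with hypothetical article tables:

        CREATE TABLE article_search (
          article_id INT UNSIGNED NOT NULL PRIMARY KEY,
          body TEXT NOT NULL,
          FULLTEXT KEY ft_body (body)
        ) ENGINE=MyISAM;

        CREATE TRIGGER article_ai AFTER INSERT ON article FOR EACH ROW
          INSERT INTO article_search (article_id, body) VALUES (NEW.id, NEW.body);

        CREATE TRIGGER article_au AFTER UPDATE ON article FOR EACH ROW
          UPDATE article_search SET body = NEW.body WHERE article_id = NEW.id;

        CREATE TRIGGER article_ad AFTER DELETE ON article FOR EACH ROW
          DELETE FROM article_search WHERE article_id = OLD.id;

    One caveat: MyISAM writes are not transactional, so a rolled-back insert on article still leaves a row behind in article_search until it is re-synchronized.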

  • No difference between nullable:true and nullable:false in Grails 1.3.6?

    - by knorv
    The following domain model definition...

        class Test {
            String a
            String b

            static mapping = {
                version(false)
                table("test_table")
                a(nullable: false)
                b(nullable: true)
            }
        }

    ...yields the following MySQL schema:

        CREATE TABLE test_table (
          id bigint(20) NOT NULL AUTO_INCREMENT,
          a varchar(255) NOT NULL,
          b varchar(255) NOT NULL,
          PRIMARY KEY (id)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    Please note that a and b get identical MySQL column definitions, despite the fact that a is defined as non-nullable and b as nullable in the GORM mappings. What am I doing wrong? I'm running Grails 1.3.6.

  • SQL CREATE TABLE Error

    - by Adam M-W
    Hi, I've been stuck on this one simple(ish) thing for the last half hour, so I thought I might try to get a quick answer here. What exactly is incorrect about my SQL syntax, assuming I'm using MySQL 5.1?

        CREATE TABLE 'users' (
          'id' MEDIUMINT(8) UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          'username' VARCHAR(20) NOT NULL,
          'password' VARCHAR(40) NOT NULL,
          'salt' VARCHAR(40) DEFAULT NULL,
          'email' VARCHAR(80) NOT NULL,
          'created_on' INT(11) UNSIGNED NOT NULL,
          'last_login' INT(11) UNSIGNED DEFAULT NULL,
          'active' TINYINT(1) UNSIGNED DEFAULT NULL,
        ) ENGINE InnoDB;

    Also, does anyone have any good tutorials on how to use Zend_Auth for complete noobs? Thanks.
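
    Two problems: identifiers must be quoted with backticks rather than single quotes (single quotes make them string literals, which is invalid in this position), and the trailing comma after the last column definition has to go. A corrected version:

        CREATE TABLE `users` (
          `id` MEDIUMINT(8) UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          `username` VARCHAR(20) NOT NULL,
          `password` VARCHAR(40) NOT NULL,
          `salt` VARCHAR(40) DEFAULT NULL,
          `email` VARCHAR(80) NOT NULL,
          `created_on` INT(11) UNSIGNED NOT NULL,
          `last_login` INT(11) UNSIGNED DEFAULT NULL,
          `active` TINYINT(1) UNSIGNED DEFAULT NULL
        ) ENGINE = InnoDB;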

  • Database Optimization techniques for amateurs.

    - by Zombies
    Can we get a list of basic optimization techniques going (anything from modeling to querying: creating indexes, views, query optimization)? It would be nice to have a list of these, one technique per answer. As a hobbyist I would find this very useful, thanks. And for the sake of not being too vague, let's say we are using a mainstream DB such as MySQL or Oracle, that the DB will contain 500,000 to 1 million or so records across ~10 tables, some with foreign key constraints, all using the most typical storage engines (e.g. InnoDB for MySQL), and of course with basics such as PKs defined, as well as FK constraints.
