Search Results

Search found 20931 results on 838 pages for 'mysql insert'.

Page 374/838 | < Previous Page | 370 371 372 373 374 375 376 377 378 379 380 381  | Next Page >

  • Query results taking too long on 200K database, speed up tips?

    - by colorfulgrayscale
    I have a SQL statement that joins about 4 tables, each with 200K rows. The query runs, but keeps freezing. When I join only 3 tables instead, it returns the rows (takes about 10 secs). Any suggestion why? Any suggestions to speed it up? Thanks!

      SELECT *
      FROM equipment, tiremap, workreference, tirework
      WHERE equipment.tiremap = tiremap.`TireID`
        AND tiremap.`WorkMap` = workreference.`aMap`
        AND workreference.`bMap` = tirework.workmap
      LIMIT 5

    P.S. If it helps any, I'm using SQLAlchemy to generate this query; the SQLAlchemy code for it is:

      query = session.query(equipment, tiremap, workreference, tirework)
      query = query.filter(equipment.c.tiremap == tiremap.c.TireID)
      query = query.filter(tiremap.c.WorkMap == workreference.c.aMap)
      query = query.filter(workreference.c.bMap == tirework.c.workmap)
      query = query.limit(5)
      query.all()
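
    A join like this usually freezes because one of the join columns has no index, so MySQL falls back to scanning the full 200K-row tables for every candidate row. A first step worth trying is to run EXPLAIN on the statement and index each join column; a sketch (the index names here are made up):

      CREATE INDEX idx_equipment_tiremap ON equipment (tiremap);
      CREATE INDEX idx_tiremap_workmap   ON tiremap (WorkMap);
      CREATE INDEX idx_workref_amap      ON workreference (aMap);
      CREATE INDEX idx_workref_bmap      ON workreference (bMap);
      CREATE INDEX idx_tirework_workmap  ON tirework (workmap);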

    Read the article

  • database datatype size

    - by yeeen
    Just to clarify: does specifying something like VARCHAR(45) mean the column can take up to a maximum of 45 characters? I remember someone telling me a few years ago that the number in the parentheses doesn't refer to the number of characters; they then tried to explain something quite complicated to me which I didn't understand and have since forgotten. And what is the difference between CHAR and VARCHAR? I did search around a bit and saw that CHAR always occupies the maximum size of the column, and that it is better to use it if your data has a fixed size, and VARCHAR if your data size varies. But if CHAR always occupies the maximum column size regardless of the data, isn't it better to use it when your data size varies? Especially if you don't know how big your data is going to be. VARCHAR needs the size to be specified (CHAR doesn't really need it, right?); isn't that more troublesome?
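
    For what it's worth, in MySQL 4.1 and later the number in the parentheses is indeed a character count, not bytes (which may be what the complicated explanation was about, since older versions counted bytes). A small illustration of the padding difference; the table and column names are invented for the example:

      CREATE TABLE datatype_demo (
          fixed_code CHAR(2),     -- always stored at 2 characters, right-padded with spaces
          free_text  VARCHAR(45)  -- stores only what you insert, up to 45 characters
      );

      INSERT INTO datatype_demo VALUES ('US', 'hello');

      -- CHAR strips its padding on retrieval; VARCHAR keeps the actual length
      SELECT CHAR_LENGTH(fixed_code) AS code_len,   -- 2
             CHAR_LENGTH(free_text)  AS text_len    -- 5
      FROM datatype_demo;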

    Read the article

  • Update n number of records using JPA with Optimistic locking

    - by Jacques René Mesrine
    I have a table with a column called "count", and I want to run a cron job that triggers a statement performing SQL like this:

      update summaryTable set count = 4;

    Note that there might be concurrent threads reading and changing the value of "count" when this cron job fires. The table has a version column to support optimistic locking. What's an efficient way to update the count in JPA?
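
    One thing to keep in mind: a JPQL bulk update bypasses the optimistic-lock version check, so a common approach is to bump the version column inside the same statement. A sketch of the SQL such a bulk update would issue (the column names are assumed from the question):

      -- JPQL equivalent: UPDATE Summary s SET s.count = 4, s.version = s.version + 1
      UPDATE summaryTable
      SET count   = 4,
          version = version + 1;   -- concurrent writers holding a stale version then fail as expected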

    Read the article

  • Database Schema for survey polling application with a default choice.

    - by user156814
    I have a survey application where users can create surveys and give choices for every survey. Other users can choose their answers for the survey, and then polls are taken to get the results of the survey. I already have the database schema for this:

      Questions: id, user_id, category_id, question_text, date_started
      Answers:   id, user_id, question_id, choice_id, explanation, date_added
      Choices:   id, question_id, choice_text

    As of now, users can choose their own choice answers for their surveys... but I want to be able to add a default "I don't care" or "I don't know" choice to every survey, for people who simply don't care enough about the topic to take sides or who can't choose. So let's say there's a survey that asks who was a better president: George W. Bush, Bill Clinton, Ronald Reagan, or Richard Nixon... I want to be able to add a default "I don't care" option. I was thinking of just adding that extra choice EVERY TIME a user creates a survey, but then I wouldn't have much control over the text of that choice after the survey has been created, and I want to know if there's a better way to do this, like creating another table or something. Thanks
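
    One alternative worth considering (an assumption, not the poster's schema): keep the default choice out of the per-survey Choices rows entirely, and let a NULL choice_id on Answers stand for "I don't care". The label then lives in one place and can be changed at any time. A sketch:

      -- Allow NULL to stand for the default answer
      ALTER TABLE Answers MODIFY choice_id INT NULL;

      -- Polling folds the NULLs into a single default bucket
      SELECT COALESCE(c.choice_text, 'I don''t care') AS choice,
             COUNT(*) AS votes
      FROM Answers a
      LEFT JOIN Choices c ON c.id = a.choice_id
      WHERE a.question_id = 42          -- hypothetical survey id
      GROUP BY c.choice_text;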

    Read the article

  • database splitting; multiple tables

    - by Ben
    I am coding a classified-ads web app. What is the optimal way to structure the database for this? Because of the high repeatability, would it be faster (in terms of searching/indexing) to have a separate table in the database for each city? Or would it be okay to just have one table covering all cities (it would have a lot of rows...)? The classifieds table has id, user_id, city_name, category, [description and detail fields].
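
    A single table with an index on the city column is usually the safer bet: millions of rows are routine for MySQL, while per-city tables make cross-city searches and schema changes painful. A minimal sketch using the columns from the question (the types and index name are assumptions):

      CREATE TABLE classifieds (
          id          INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          user_id     INT UNSIGNED NOT NULL,
          city_name   VARCHAR(64)  NOT NULL,
          category    VARCHAR(32)  NOT NULL,
          description TEXT,
          detail      TEXT,
          INDEX idx_city_category (city_name, category)  -- serves per-city browsing
      );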

    Read the article

  • SQL Join to only the maximum row puzzle

    - by Billy ONeal
    Given the following example data:

      Users
      +----+------------+-----------+--------------------+
      | ID | First Name | Last Name | Network Identifier |
      +----+------------+-----------+--------------------+
      | 1  | Billy      | O'Neal    | bro4               |
      | 2  | John       | Skeet     | jsk1               |
      +----+------------+-----------+--------------------+

      Hardware
      +----+----------------+---------------+
      | ID | Hardware Name  | Serial Number |
      +----+----------------+---------------+
      | 1  | Latitude E6500 | 5555555       |
      | 2  | Latitude E6200 | 2222222       |
      +----+----------------+---------------+

      HardwareAssignments
      +---------+-------------+-------------+
      | User ID | Hardware ID | Assigned On |
      +---------+-------------+-------------+
      | 1       | 1           | April 1     |
      | 1       | 2           | April 10    |
      | 2       | 2           | April 1     |
      | 2       | 1           | April 11    |
      +---------+-------------+-------------+

    I'd like to write a SQL query which would give the following result:

      +--------------------+------------+-----------+----------------+---------------+-------------+
      | Network Identifier | First Name | Last Name | Hardware Name  | Serial Number | Assigned On |
      +--------------------+------------+-----------+----------------+---------------+-------------+
      | bro4               | Billy      | O'Neal    | Latitude E6200 | 2222222       | April 10    |
      | jsk1               | John       | Skeet     | Latitude E6500 | 5555555       | April 11    |
      +--------------------+------------+-----------+----------------+---------------+-------------+

    My trouble is that the maximum "Assigned On" date needs to be selected for each individual user and used for the actual join... Is there a clever way to accomplish this in SQL?
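
    This is the classic greatest-n-per-group problem. One common pattern (a sketch; the space-free column names are assumptions) derives each user's latest assignment date in a subquery and joins back on it:

      SELECT u.NetworkIdentifier, u.FirstName, u.LastName,
             h.HardwareName, h.SerialNumber, ha.AssignedOn
      FROM Users u
      JOIN HardwareAssignments ha ON ha.UserID = u.ID
      JOIN (
          SELECT UserID, MAX(AssignedOn) AS LastAssigned
          FROM HardwareAssignments
          GROUP BY UserID
      ) latest ON latest.UserID = ha.UserID
              AND latest.LastAssigned = ha.AssignedOn
      JOIN Hardware h ON h.ID = ha.HardwareID;

    On engines with window functions (SQL Server, MySQL 8+), ROW_NUMBER() OVER (PARTITION BY UserID ORDER BY AssignedOn DESC) is the other common route.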

    Read the article

  • Suggest a solution to track changes in a test DB and replicate them in another DB

    - by Parth
    Suggest a solution to track changes in a test DB and replicate them in another DB. My client needs a script or other solution for this setup: he has two databases, a test DB behind a test portal and a main DB behind the live site. When he tests data changes on the test portal and finds them appropriate, he wants to apply the same changes to the main DB. For this he needs a way to record or track all updates/deletions/insertions, so that he can repeat the approved ones in the main DB. NOTE: we have only one server, no separate server, hence binary-log replication doesn't seem to work for my case...
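
    Since binary-log replication is out, one workable approach is trigger-based change tracking: an audit table on the test DB records every change, and approved entries are replayed against the main DB by a script. A minimal sketch (all names here are invented):

      CREATE TABLE audit_log (
          id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
          table_name VARCHAR(64) NOT NULL,
          action     ENUM('INSERT','UPDATE','DELETE') NOT NULL,
          row_id     INT UNSIGNED NOT NULL,
          changed_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
      );

      CREATE TRIGGER some_table_after_insert AFTER INSERT ON some_table
      FOR EACH ROW
          INSERT INTO audit_log (table_name, action, row_id)
          VALUES ('some_table', 'INSERT', NEW.id);

    Matching AFTER UPDATE and AFTER DELETE triggers would follow the same shape, and the audit rows could also store the changed column values if a full replay is needed.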

    Read the article

  • SQL: How to select rows from a table while ignoring the duplicate field values?

    - by Maxxon
    How do I select rows from a table while ignoring duplicate field values? Here is an example:

      id  user_id  message
      1   Adam     "Adam is here."
      2   Peter    "Hi there this is Peter."
      3   Peter    "I am getting sick."
      4   Josh     "Oh, snap. I'm on a boat!"
      5   Tom      "This show is great."
      6   Laura    "Textmate rocks."

    What I want to achieve is to select the recently active users from my DB. Let's say I want to select the 5 most recently active users. The problem is that the following query selects Peter twice:

      mysql_query("SELECT * FROM messages ORDER BY id DESC LIMIT 5");

    What I want is to skip the row when it gets to Peter again and select the next result instead, in our case Adam. I don't want to show my visitors that the recently active users were Laura, Tom, Josh, Peter, and Peter again; that doesn't make any sense. Instead I want to show them: Laura, Tom, Josh, Peter, (skipping Peter) and Adam. Is there an SQL command I can use for this problem?
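
    A common way to do this (a sketch using the columns above): group the messages by user, keep each user's newest message id, and sort by that:

      SELECT user_id, MAX(id) AS last_message_id
      FROM messages
      GROUP BY user_id
      ORDER BY last_message_id DESC
      LIMIT 5;   -- Laura, Tom, Josh, Peter, Adam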

    Read the article

  • How to set up an insert into a Grails-created table with the next sequence number?

    - by Jack BeNimble
    I'm using a JMS queue to read from and insert data into a Postgres table created by Grails. The problem is obtaining the next sequence value. I thought I had found the solution with the following statement (by putting DEFAULT where the ID should go), but it's no longer working. I must have changed something, because I needed to recreate the table. What's the best way to get around this problem?

      ps = c.prepareStatement("INSERT INTO xml_test (id, version, xml_text) VALUES (DEFAULT, 0, ?)");

    UPDATE: In response to the suggested solution, I added this to the domain class:

      class XmlTest {
          String xmlText
          static constraints = {
              id generator: 'sequence', params: [name: 'xmltest_sequence']
          }
      }

    and changed the insert statement to the following:

      ps = c.prepareStatement("INSERT INTO xml_test (id, version, xml_text) VALUES (nextval('xmltest_sequence'), 0, ?)");

    However, when I run the statement, I get the following error:

      org.postgresql.util.PSQLException: ERROR: relation "xmltest_sequence" does not exist

    Any thoughts?
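
    The error suggests the sequence was never created in Postgres; in Grails, id generator settings normally belong in the static mapping block rather than constraints, so the sequence DDL may never have been generated. One way to confirm and work around it is to create the sequence by hand; a sketch:

      -- See which sequences actually exist
      SELECT relname FROM pg_class WHERE relkind = 'S';

      -- Create the missing sequence so nextval() resolves
      CREATE SEQUENCE xmltest_sequence;

      -- The prepared statement's SQL then works as written:
      INSERT INTO xml_test (id, version, xml_text)
      VALUES (nextval('xmltest_sequence'), 0, 'example');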

    Read the article

  • Query for props list with or without values

    - by vitto
    Hi, I'm trying to write a SELECT over three relational tables like these:

      table_materials:       material_id, material_name
      table_props:           prop_id, prop_name
      table_materials_props: row_id, material_id (FK), prop_id (FK), prop_value

    On my page, I'd like to get a result like this one, but I have some problems with the query:

      material  prop A  prop B  prop C  prop D  prop E
      wood      350     NULL    NULL    84      16
      iron      NULL    17      NULL    NULL    201
      copper    548     285     99      NULL    NULL

    so the query should return something like:

      material  prop_name  prop_value
      wood      prop A     350
      wood      prop B     NULL
      wood      prop C     NULL
      wood      prop D     84
      wood      prop E     16
      // and so on with the other rows

    I thought of using something like:

      SELECT *
      FROM table_materials AS m
      INNER JOIN table_materials_props AS mp ON m.material_id = mp.material_id
      INNER JOIN table_props AS p ON mp.prop_id = p.prop_id
      ORDER BY p.prop_name

    The problem is that the query doesn't return the NULL values, and I need the same prop order for all materials, regardless of whether the prop values are NULL or not. I hope this example is clear!
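
    The usual fix is to generate every material/prop pair first and only then attach the values, so the missing pairs survive as NULLs; a sketch:

      SELECT m.material_name, p.prop_name, mp.prop_value
      FROM table_materials m
      CROSS JOIN table_props p
      LEFT JOIN table_materials_props mp
             ON mp.material_id = m.material_id
            AND mp.prop_id     = p.prop_id
      ORDER BY m.material_name, p.prop_name;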

    Read the article

  • Update/Increment a single column on multiple rows at once

    - by Jordan Feldstein
    I'm trying to add rows to a table while keeping an order column in sync: the newest row should have order 1, and all other rows should count up from there. In this case, I add a new row with order=0, then use this query to shift all the rows up by one:

      "UPDATE favorits SET order = order+1"

    However, what happens is that all the rows end up with the same value. I get a stack of favorites all with order 6, for example, when it should be one with 1, the next with 2, and so on. How do I update these rows in a way that orders them the way they should be? Thanks, ~Jordan
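
    One thing worth ruling out first: ORDER is a reserved word in MySQL, so the column name has to be backquoted for the statement to even parse. A sketch of the quoted shift-then-insert sequence (the table and order column come from the question; the other columns and values are invented):

      UPDATE `favorits` SET `order` = `order` + 1;

      -- the new favorite then takes the top slot
      INSERT INTO `favorits` (user_id, item_id, `order`)
      VALUES (1, 99, 1);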

    Read the article

  • optimize query: get all votes from a user's items

    - by Toni Michel Caubet
    Hi there! I did it my way because I'm very bad at getting results from two tables... Basically, first I get all the item ids that correspond to the user, and then I calculate the ratings of each item. But there are two different types of item object, so I do this twice. Let me show you:

      function votos_usuario($id){
          $previa = "SELECT id FROM preguntas WHERE id_usuario = '$id'";
          $r_previo = mysql_query($previa);
          $ids_p = '0, ';
          while($items_previos = mysql_fetch_array($r_previo)){
              $ids_p .= $items_previos['id'].", ";
          }
          $ids = substr($ids_p, 0, -2);
          $consulta = "SELECT valor FROM votos_pregunta WHERE id_pregunta IN ( $ids )";
          $resultado = mysql_query($consulta);
          $votos_preguntas = 0;
          while($voto = mysql_fetch_array($resultado)){
              $votos_preguntas = $votos_preguntas + $voto['valor'];
          }

          $previa_r = "SELECT id FROM recetas WHERE id_usuario = '$id'";
          $r_previo_r = mysql_query($previa_r);
          $ids_r = '0, ';
          while($items_previos_r = mysql_fetch_array($r_previo_r)){
              $ids_r .= $items_previos_r['id'].", ";
          }
          $ids = substr($ids_r, 0, -2);
          $consulta_b = "SELECT valor FROM votos_receta WHERE id_receta IN ( $ids )";
          $resultado_b = mysql_query($consulta_b);
          $votos_recetas = 0;
          while($voto_r = mysql_fetch_array($resultado_b)){
              $votos_recetas = $votos_recetas + $voto_r['valor'];
          }

          $total = $votos_preguntas + $votos_recetas;
          return $total;
      }

    As you can see, this is too much... O(n^2). Feel like thinking? Thanks!
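
    All four queries (and the PHP-side summing) can collapse into a single statement by joining the votes to their owners directly; a sketch using the table and column names from the code above, with the user id shown as a literal:

      SELECT COALESCE(SUM(v.valor), 0) AS total
      FROM (
          SELECT vp.valor
          FROM votos_pregunta vp
          JOIN preguntas p ON p.id = vp.id_pregunta
          WHERE p.id_usuario = 1           -- the user's id
          UNION ALL
          SELECT vr.valor
          FROM votos_receta vr
          JOIN recetas r ON r.id = vr.id_receta
          WHERE r.id_usuario = 1
      ) v;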

    Read the article

  • SQL: Daily Average of Logins Per Hour

    - by jerrygarciuh
    This query is producing counts of logins per hour:

      SELECT DATEADD(hour, DATEDIFF(hour, 0, EVENT_DATETIME), 0), COUNT(*)
      FROM EVENTS_ALL_RPT_V1
      WHERE EVENT_NAME = 'Login'
        AND EVENT_DATETIME >= CONVERT(DATETIME, '2010-03-17 00:00:00', 120)
        AND EVENT_DATETIME <= CONVERT(DATETIME, '2010-03-24 00:00:00', 120)
      GROUP BY DATEADD(hour, DATEDIFF(hour, 0, EVENT_DATETIME), 0)
      ORDER BY DATEADD(hour, DATEDIFF(hour, 0, EVENT_DATETIME), 0)

    ...with lots of results like this:

      Datetime                  COUNT(*)
      2010-03-17 12:00:00.000   135
      2010-03-17 13:00:00.000   129
      2010-03-17 14:00:00.000   147

    What I need to figure out is how to query the average logins per hour for a given day. Any help?
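
    One way (a sketch reusing the query above as a derived table): compute the per-hour counts for the one day, then average them in an outer query:

      SELECT AVG(1.0 * hourly.cnt) AS avg_logins_per_hour   -- 1.0 avoids integer division
      FROM (
          SELECT DATEADD(hour, DATEDIFF(hour, 0, EVENT_DATETIME), 0) AS hr,
                 COUNT(*) AS cnt
          FROM EVENTS_ALL_RPT_V1
          WHERE EVENT_NAME = 'Login'
            AND EVENT_DATETIME >= CONVERT(DATETIME, '2010-03-17 00:00:00', 120)
            AND EVENT_DATETIME <  CONVERT(DATETIME, '2010-03-18 00:00:00', 120)
          GROUP BY DATEADD(hour, DATEDIFF(hour, 0, EVENT_DATETIME), 0)
      ) AS hourly;

    Note that AVG here averages over hours that actually had logins; to average over all 24 hours of the day, divide SUM(hourly.cnt) by 24.0 instead.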

    Read the article

  • Database - Designing an "Events" Table

    - by Alix Axel
    After reading the tips from this great Nettuts+ article, I've come up with a table schema that would separate highly volatile data from other tables subjected to heavy reads, and at the same time lower the number of tables needed in the whole database schema. However, I'm not sure if this is a good idea, since it doesn't follow the rules of normalization, and I would like to hear your advice. Here is the general idea:

    I have four types of users modeled in a Class Table Inheritance structure; in the main "user" table I store data common to all the users (id, username, password, several flags, ...) along with some TIMESTAMP fields (date_created, date_updated, date_activated, date_lastLogin, ...).

    To quote tip #16 from the Nettuts+ article mentioned above:

      Example 2: You have a "last_login" field in your table. It updates every
      time a user logs in to the website. But every update on a table causes
      the query cache for that table to be flushed. You can put that field
      into another table to keep updates to your users table to a minimum.

    Now it gets even trickier. I need to keep track of some user statistics, like

      - how many unique times a user profile was seen
      - how many unique times an ad from a specific type of user was clicked
      - how many unique times a post from a specific type of user was seen

    and so on... In my fully normalized database this adds up to about 8 to 10 additional tables. It's not a lot, but I would like to keep things simple if I could, so I've come up with the following "events" table:

      +------+----------+-----------+--------------+-----------+
      | ID   | TABLE    | EVENT     | DATE         | IP        |
      +------+----------+-----------+--------------+-----------+
      | 1    | user     | login     | 201004190030 | 127.0.0.1 |
      | 1    | user     | login     | 201004190230 | 127.0.0.1 |
      | 2    | user     | created   | 201004190031 | 127.0.0.2 |
      | 2    | user     | activated | 201004190234 | 127.0.0.2 |
      | 2    | user     | approved  | 201004190930 | 217.0.0.1 |
      | 2    | user     | login     | 201004191200 | 127.0.0.2 |
      | 15   | user_ads | created   | 201004191230 | 127.0.0.1 |
      | 15   | user_ads | impressed | 201004191231 | 127.0.0.2 |
      | 15   | user_ads | clicked   | 201004191231 | 127.0.0.2 |
      | 15   | user_ads | clicked   | 201004191231 | 127.0.0.2 |
      | 15   | user_ads | clicked   | 201004191231 | 127.0.0.2 |
      | 15   | user_ads | clicked   | 201004191231 | 127.0.0.2 |
      | 15   | user_ads | clicked   | 201004191231 | 127.0.0.2 |
      | 2    | user     | blocked   | 201004200319 | 217.0.0.1 |
      | 2    | user     | deleted   | 201004200320 | 217.0.0.1 |
      +------+----------+-----------+--------------+-----------+

    Basically the ID refers to the primary key (id) field in the TABLE table; I believe the rest should be pretty straightforward. One thing that I've come to like in this design is that I can keep track of all the user logins instead of just the last one, and thus generate some interesting metrics with that data.

    Due to the nature of the events table I also thought of making some optimizations, such as:

      - #9: Since there is only a finite number of tables and a finite (and
        predetermined) number of events, the TABLE and EVENT columns could be
        set up as ENUMs instead of VARCHARs to save some space.
      - #14: Store IPs as UNSIGNED INT with INET_ATON() instead of VARCHARs.
      - Store DATEs as TIMESTAMPs instead of DATETIMEs.
      - Use the ARCHIVE (or the CSV?) engine instead of InnoDB / MyISAM.

    Overall, each event would only consume 14 bytes, which is okay for my traffic I guess.

    Pros:

      - Ability to store more detailed data (such as logins).
      - No need to design (and code for) almost a dozen additional tables (dates and statistics).
      - Reduces a few columns per table and keeps volatile data separated.

    Cons:

      - Non-relational (still not as bad as EAV):
        SELECT * FROM events WHERE id = 2 AND table = 'user' ORDER BY date DESC;
      - 6 bytes overhead per event (ID, TABLE and EVENT).

    I'm more inclined to go with this approach since the pros seem to far outweigh the cons, but I'm still a little bit reluctant... Am I missing something? What are your thoughts on this? Thanks!
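
    For concreteness, a sketch of what the compact events table could look like, following the optimizations listed above (the column names and ENUM values are assumptions; TABLE itself is a reserved word, hence tbl):

      CREATE TABLE events (
          id    INT UNSIGNED NOT NULL,   -- PK of the row in the referenced table
          tbl   ENUM('user','user_ads','user_posts') NOT NULL,
          event ENUM('login','created','activated','approved',
                     'impressed','clicked','blocked','deleted') NOT NULL,
          date  TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
          ip    INT UNSIGNED NOT NULL    -- INET_ATON('127.0.0.1') on write, INET_NTOA(ip) on read
      ) ENGINE=ARCHIVE;

      INSERT INTO events (id, tbl, event, ip)
      VALUES (1, 'user', 'login', INET_ATON('127.0.0.1'));

    One caveat with ARCHIVE: it supports only INSERT and SELECT (no UPDATE/DELETE, no secondary indexes), which fits an append-only log but makes ad hoc reporting queries full scans.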

    Read the article

  • OT: US Banks: Bank Routing Number and BIC/SWIFT

    - by Konerak
    I know it is a bit off-topic, but I've been having a hard time finding more information on this question, and since this site is visited by a lot of people from the United States, you guys might know/find the answer more easily. Banks in Europe each have a SWIFT/BIC code, while US banks use routing numbers. This leads to the following questions: Does each bank in the US also carry a BIC (SWIFT) code? Is there a 1-to-1 relationship between BICs and routing numbers? Is there a list of these numbers somewhere? (Background information: we're adding international payments to our bookkeeping application. Users can add international suppliers, but my boss preferred not to change the current supplier table, and instead to have the ROUTING NUMBER in another table, with the BIC as PK. I'm wondering if BIC is a valid choice, or if it should just be BANK ACCOUNT NUMBER.)

    Read the article

  • How to evaluate query by DBMS?

    - by Kevinniceguy
    Sorry... I mean: what question (in plain English) does this query answer?

      SELECT SUM(price)
      FROM Room r, Hotel h
      WHERE r.hotelNo = h.hotelNo
        AND hotelName = 'Paris Hilton'
        AND roomNo NOT IN (SELECT roomNo
                           FROM Booking b, Hotel h
                           WHERE (dateFrom <= CURRENT_DATE AND dateTo >= CURRENT_DATE)
                             AND b.hotelNo = h.hotelNo
                             AND hotelName = 'Paris Hilton');

    Read the article

  • Best practice for storing global data in PHP?

    - by user281434
    Hi, I'm running a web application that allows a user to log in. The user can add/remove content in his/her 'library', which is displayed on a page called "library.php". Instead of querying the database for the contents of the user's library every time they load "library.php", I want to store it globally in PHP when the user logs in, so that the query is only run once. Is there a best practice for doing this? E.g. storing their library in an array in a session? Thanks for your time.

    Read the article

  • optimizing an sql query using inner join and order by

    - by Sergio B
    I'm trying to optimize the following query without success. Any idea where it could be indexed to prevent the temporary table and the filesort?

      EXPLAIN SELECT SQL_NO_CACHE `groups`.*
      FROM `groups`
      INNER JOIN `memberships` ON `groups`.id = `memberships`.group_id
      WHERE ((`memberships`.user_id = 1)
        AND (`memberships`.`status_code` = 1 AND `memberships`.`manager` = 0))
      ORDER BY groups.created_at DESC
      LIMIT 5;

      +----+-------------+-------------+--------+--------------------------+---------+---------+---------------------------------------------+------+----------------------------------------------+
      | id | select_type | table       | type   | possible_keys            | key     | key_len | ref                                         | rows | Extra                                        |
      +----+-------------+-------------+--------+--------------------------+---------+---------+---------------------------------------------+------+----------------------------------------------+
      |  1 | SIMPLE      | memberships | ref    | grp_usr,grp,usr,grp_mngr | usr     | 5       | const                                       |    5 | Using where; Using temporary; Using filesort |
      |  1 | SIMPLE      | groups      | eq_ref | PRIMARY                  | PRIMARY | 4       | sportspool_development.memberships.group_id |    1 |                                              |
      +----+-------------+-------------+--------+--------------------------+---------+---------+---------------------------------------------+------+----------------------------------------------+
      2 rows in set (0.00 sec)

    Indexes on `groups` (all BTREE, Collation A, Sub_part and Packed NULL):

      +--------+------------+-----------------------------------+--------------+-----------------+-------------+------+
      | Table  | Non_unique | Key_name                          | Seq_in_index | Column_name     | Cardinality | Null |
      +--------+------------+-----------------------------------+--------------+-----------------+-------------+------+
      | groups | 0          | PRIMARY                           | 1            | id              | 6           |      |
      | groups | 1          | index_groups_on_name              | 1            | name            | 6           | YES  |
      | groups | 1          | index_groups_on_privacy_setting   | 1            | privacy_setting | 6           | YES  |
      | groups | 1          | index_groups_on_created_at        | 1            | created_at      | 6           | YES  |
      | groups | 1          | index_groups_on_id_and_created_at | 1            | id              | 6           |      |
      | groups | 1          | index_groups_on_id_and_created_at | 2            | created_at      | 6           | YES  |
      +--------+------------+-----------------------------------+--------------+-----------------+-------------+------+

    Indexes on `memberships` (same constant columns as above):

      +-------------+------------+----------------------------------------------------------+--------------+-------------+-------------+------+
      | Table       | Non_unique | Key_name                                                 | Seq_in_index | Column_name | Cardinality | Null |
      +-------------+------------+----------------------------------------------------------+--------------+-------------+-------------+------+
      | memberships | 0          | PRIMARY                                                  | 1            | id          | 2           |      |
      | memberships | 0          | grp_usr                                                  | 1            | group_id    | 2           | YES  |
      | memberships | 0          | grp_usr                                                  | 2            | user_id     | 2           | YES  |
      | memberships | 1          | grp                                                      | 1            | group_id    | 2           | YES  |
      | memberships | 1          | usr                                                      | 1            | user_id     | 2           | YES  |
      | memberships | 1          | grp_mngr                                                 | 1            | group_id    | 2           | YES  |
      | memberships | 1          | grp_mngr                                                 | 2            | manager     | 2           | YES  |
      | memberships | 1          | complex_index                                            | 1            | group_id    | 2           | YES  |
      | memberships | 1          | complex_index                                            | 2            | user_id     | 2           | YES  |
      | memberships | 1          | complex_index                                            | 3            | status_code | 2           | YES  |
      | memberships | 1          | complex_index                                            | 4            | manager     | 2           | YES  |
      | memberships | 1          | index_memberships_on_user_id_and_status_code_and_manager | 1            | user_id     | 2           | YES  |
      | memberships | 1          | index_memberships_on_user_id_and_status_code_and_manager | 2            | status_code | 2           | YES  |
      | memberships | 1          | index_memberships_on_user_id_and_status_code_and_manager | 3            | manager     | 2           | YES  |
      +-------------+------------+----------------------------------------------------------+--------------+-------------+-------------+------+
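
    For what it's worth, the temporary table and filesort arise because the ORDER BY is on groups.created_at while MySQL drives the join from memberships, so an index alone may not remove them. One commonly tried option (a sketch, with no guarantee here) is to cover the whole WHERE clause plus the join column in a single memberships index, avoiding extra row lookups on that side:

      CREATE INDEX idx_m_usr_status_mgr_grp
          ON memberships (user_id, status_code, manager, group_id);

    Since only a handful of memberships match (rows = 5 in the EXPLAIN), the remaining sort is over at most a few groups and should be cheap either way.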

    Read the article

  • DataMapper is only returning the last part of this query

    - by Josh K
    So I'm using the following:

      $r = new Record();
      $r->select('ip, count(*) as ipcount');
      $r->group_by('ip');
      $r->order_by('ipcount', 'desc');
      $r->limit(5);
      $r->get();

      foreach($r->all as $record) {
          echo($record->ip." ");
          echo($record->ipcount." <br />");
      }

    And I only get the last (fifth) record echoed out, and no ipcount echoed. Is there a different way to go about this? I'm working on learning DataMapper (hence the questions) and need to figure some of this out. I haven't quite wrapped my head around the whole ORM thing.

    Read the article

  • Retrieve object from jquery in php script

    - by majc
    I'm trying to rebuild an object encoded with JSON, but I'm not getting any value.

    jQuery:

      $.post("views/insert_tasks.php", {
          clickedRows: clickrows,
          <?php echo "tasks: '" . json_encode($tasks) . "'"; ?>
      }, function(data) {
      });

    This is the PHP code to retrieve the object:

      $tasks = json_decode(stripslashes($_POST['tasks']), true);

    $tasks is empty after executing the code above. This is what I'm getting in $_POST['tasks']:

      [{"task_id":"1","description":"<p>Fazer heroi</p>","createdat":"Saturday 22nd of May 2010 11:37:37 PM","createdby":"Miguel Cardoso","max_requests":"2","max_duration":"5","job_id":"Concept Artist"},{"task_id":"2","description":"<p>teste2</p>","createdat":"Sunday 23rd of May 2010 11:23:55 AM","createdby":"Miguel Cardoso","max_requests":"2","max_duration":"5","job_id":"3D Modeller"},{"task_id":"3","description":"<p>teste3</p>","createdat":"Sunday 23rd of May 2010 11:45:39 AM","createdby":"Miguel Cardoso","max_requests":"1","max_duration":"10","job_id":"Writer"}]

    What am I doing wrong?

    Read the article

  • What is this KEY called?

    - by Bharanikumar
    CREATE TABLE `ost_staff` (
        `staff_id` int(11) unsigned NOT NULL auto_increment,
        `group_id` int(10) unsigned NOT NULL default '0',
        `dept_id` int(10) unsigned NOT NULL default '0',
        `username` varchar(32) collate latin1_german2_ci NOT NULL default '',
        `firstname` varchar(32) collate latin1_german2_ci default NULL,
        `lastname` varchar(32) collate latin1_german2_ci default NULL,
        `passwd` varchar(128) collate latin1_german2_ci default NULL,
        `email` varchar(128) collate latin1_german2_ci default NULL,
        `phone` varchar(24) collate latin1_german2_ci NOT NULL default '',
        `phone_ext` varchar(6) collate latin1_german2_ci default NULL,
        `mobile` varchar(24) collate latin1_german2_ci NOT NULL default '',
        `signature` varchar(255) collate latin1_german2_ci NOT NULL default '',
        `isactive` tinyint(1) NOT NULL default '1',
        `isadmin` tinyint(1) NOT NULL default '0',
        `isvisible` tinyint(1) unsigned NOT NULL default '1',
        `onvacation` tinyint(1) unsigned NOT NULL default '0',
        `daylight_saving` tinyint(1) unsigned NOT NULL default '0',
        `append_signature` tinyint(1) unsigned NOT NULL default '0',
        `change_passwd` tinyint(1) unsigned NOT NULL default '0',
        `timezone_offset` float(3,1) NOT NULL default '0.0',
        `max_page_size` int(11) NOT NULL default '0',
        `created` datetime NOT NULL default '0000-00-00 00:00:00',
        `lastlogin` datetime default NULL,
        `updated` datetime NOT NULL default '0000-00-00 00:00:00',
        PRIMARY KEY (`staff_id`),
        UNIQUE KEY `username` (`username`),
        KEY `dept_id` (`dept_id`),
        KEY `issuperuser` (`isadmin`),            <-- this
        KEY `group_id` (`group_id`,`staff_id`)    <-- and this
    ) ENGINE=MyISAM AUTO_INCREMENT=35 DEFAULT CHARSET=latin1 COLLATE=latin1_german2_ci;

    Hi, the above statement comes from the osTicket open source project. I know PRIMARY KEY, FOREIGN KEY and UNIQUE, but I am not sure what this kind of KEY is:

      KEY `group_id` (`group_id`,`staff_id`)

    Please tell me the name of this constraint.
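
    A bare KEY in MySQL DDL is simply a synonym for INDEX: an ordinary, non-unique secondary index (the two-column one is a composite index over group_id and staff_id). It is not a constraint at all, so there is no constraint name to learn. For illustration, the two forms below are interchangeable (hypothetical index name; run one or the other, not both):

      ALTER TABLE ost_staff ADD KEY   idx_group_staff (`group_id`, `staff_id`);
      ALTER TABLE ost_staff ADD INDEX idx_group_staff (`group_id`, `staff_id`);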

    Read the article

  • php cli script hangs with no messages

    - by julio
    Hi, I've written a PHP script that runs via SSH and nohup, meant to process records from a database and do stuff with them (e.g. process some images, update some rows). It works fine with small loads, up to maybe 10k records. I have some larger datasets that process around 40k records (not a lot, I realize, but it adds up to a lot of work when each record requires the download and processing of up to 50 images). The larger datasets can take days to process. Sometimes I'll see memory errors in my debug logs, which are clear enough, but sometimes the script just appears to "die" or go zombie on me. The tail of my debug log just stops with no error messages, the tail of the nohup log ends with no error, and the process still shows in a ps listing, looking like this:

      26075 pts/0 S 745:01 /usr/bin/php ./import.php

    but no work is getting done. Can anyone give me some ideas on why a process would just quit? The obvious things (like a PHP script timeout and memory issues) are not a factor, as far as I can tell. Thanks for any tips.

    PS: this is hosted on a GoDaddy VDS (not my choice). I sort of suspect that GoDaddy has some kind of limits that might kick in on me despite the overrides I put in the code (such as set_time_limit(0);).

    Read the article
