Search Results

Search found 20931 results on 838 pages for 'mysql insert'.

Page 232 of 838

  • What is faster in MySQL? WHERE sub request = 0 or IN list

    - by Nicolas Manzini
    Hello, I was wondering which is better in MySQL. I have a SELECT query that excludes every entry associated with a banned userID. Currently I have a subquery clause in the WHERE statement that goes like AND (SELECT COUNT(*) FROM TheBlackListTable WHERE userID = userList.ID AND blackListedID = :userID2) = 0, which accepts every userID not present in TheBlackListTable. Would it be faster to first retrieve all banned IDs in a separate request and replace that clause with AND creatorID NOT IN listOfBannedID? Thank you!
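
    A third option worth benchmarking is NOT EXISTS, which lets MySQL stop probing TheBlackListTable at the first match instead of counting every row. A minimal sketch reusing the table and parameter names from the question (the shape of the outer SELECT is assumed):

      SELECT u.*
      FROM userList u
      WHERE NOT EXISTS (
          SELECT 1
          FROM TheBlackListTable b
          WHERE b.userID = u.ID             -- same correlation as the COUNT(*) version
            AND b.blackListedID = :userID2
      );

    With a composite index on TheBlackListTable (userID, blackListedID), this usually performs at least as well as either the COUNT(*) = 0 subquery or a NOT IN list fetched in a separate round trip.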

    Read the article

  • MySQL table structure for thumbs UP & DOWN in a comments system?

    - by Axel
    Hello, I already created a table for comments, but I want to add a thumbs up/down feature for comments like Digg and YouTube. I use PHP & MySQL and I'm wondering what the best table schema is to implement this so that comments with many likes end up on top. This is my current comments table: comments(id, user, article, comment, stamp). Note: only registered users will be able to vote, so there is no need to restrict votes by IP. Thanks
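
    One common layout is a separate vote table keyed by (comment, user) with +1/-1 values: it blocks double voting and makes the ranking a simple aggregate. A sketch under those assumptions (the vote table and column names are illustrative, not from the question):

      CREATE TABLE comment_votes (
          comment_id INT NOT NULL,            -- references comments.id
          user_id    INT NOT NULL,            -- the registered voter
          vote       TINYINT NOT NULL,        -- +1 = thumb up, -1 = thumb down
          PRIMARY KEY (comment_id, user_id)   -- one vote per user per comment
      );

      -- Comments for one article, highest score first
      SELECT c.*, COALESCE(SUM(v.vote), 0) AS score
      FROM comments c
      LEFT JOIN comment_votes v ON v.comment_id = c.id
      WHERE c.article = 123
      GROUP BY c.id
      ORDER BY score DESC;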

    Read the article

  • How to write a good PHP database insert using an associative array

    - by Tom
    In PHP, I want to insert into a database using data contained in an associative array of field/value pairs. Example: $_fields = array('field1'=>'value1','field2'=>'value2','field3'=>'value3'); The resulting SQL insert should look as follows: INSERT INTO table (field1,field2,field3) VALUES ('value1','value2','value3'); I have come up with the following PHP one-liner: mysql_query("INSERT INTO table (".implode(',',array_keys($_fields)).") VALUES (".implode(',',array_values($_fields)).")"); It separates the keys and values of the associative array and implodes them to generate comma-separated strings. The problem is that it does not escape or quote the values inserted into the database. To illustrate the danger, imagine if $_fields contained the following: $_fields = array('field1'=>"naustyvalue); drop table members; --"); The following SQL would be generated: INSERT INTO table (field1) VALUES (naustyvalue); drop table members; --; Luckily, multiple queries are not supported; nevertheless, quoting and escaping are essential to prevent SQL injection vulnerabilities. How do you write your PHP MySQL inserts? Note: PDO or mysqli prepared queries aren't currently an option for me because the codebase already uses mysql extensively; a change is planned, but it would take a lot of resources to convert.
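
    Since the real constraint is quoting and escaping rather than PHP itself, the same parameterization idea can be shown in plain MySQL with server-side prepared statements; a minimal sketch (the members table and field1 column are illustrative, not from the question):

      -- The payload travels as data and is never spliced into the SQL text
      SET @f1 = 'naustyvalue); drop table members; --';

      PREPARE ins FROM 'INSERT INTO members (field1) VALUES (?)';
      EXECUTE ins USING @f1;     -- the string is stored literally, no injection
      DEALLOCATE PREPARE ins;

    In legacy mysql_* code the equivalent is to run every value through mysql_real_escape_string() and wrap it in quotes before imploding.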

    Read the article

  • Idiomatic way to read .env variables in Ansible?

    - by Arms
    I'm provisioning a Vagrant box with Ansible, and using Benno Joy's MySQL role to set up MySQL (including creating a database and users). The database name and credentials are stored in a .env file in the project's root. What would be the idiomatic way to use these variables when provisioning MySQL? Should I write a custom script that generates a YAML file from my .env and then use the include_vars module? Or is there a simpler way?

    Read the article

  • MAMP Pro mysqld won't start on os x lion

    - by Mike
    I'm getting a "Start MySQL Failed" error in the GUI. When I attempt to start mysqld from the CLI I get the following output:

      /Applications/MAMP/Library/bin/mysqld
      120623 23:12:47 [Warning] Setting lower_case_table_names=2 because file system for /Applications/MAMP/db/mysql/ is case insensitive
      120623 23:12:47 [Note] Plugin 'FEDERATED' is disabled.
      120623 23:12:47 InnoDB: The InnoDB memory heap is disabled
      120623 23:12:47 InnoDB: Mutexes and rw_locks use GCC atomic builtins
      120623 23:12:47 InnoDB: Compressed tables use zlib 1.2.3
      120623 23:12:47 InnoDB: Initializing buffer pool, size = 128.0M
      120623 23:12:47 InnoDB: Completed initialization of buffer pool
      120623 23:12:47 InnoDB: highest supported file format is Barracuda.
      120623 23:12:47 InnoDB: Waiting for the background threads to start
      120623 23:12:48 InnoDB: 1.1.5 started; log sequence number 1595675
      120623 23:12:48 [ERROR] /Applications/MAMP/Library/bin/mysqld: unknown option '--skip-locking'
      120623 23:12:48 [ERROR] Aborting
      120623 23:12:48 InnoDB: Starting shutdown...
      120623 23:12:49 InnoDB: Shutdown completed; log sequence number 1595675
      120623 23:12:49 [Note] /Applications/MAMP/Library/bin/mysqld: Shutdown complete

    I have deleted the mysql.pid file located at /application/mamp/tmp/mysql/mysql.pid and I still get the error above. I can't find where MAMP has set --skip-locking; my.cnf doesn't have it anywhere. Activity Monitor shows a mysqld process running as me, and every time I kill the process, both via Activity Monitor and via kill -9 pid, it starts right back up. Sampling the process points back to the MAMP mysqld. wtf?! About to throw MAMP out the window and boot up a VM of CentOS =)

    Read the article

  • MySQL question: is there something like an IN ALL query?

    - by jaycode
    For example, this query, where variant has_many variant_attributes: SELECT `variants`.* FROM `variants` INNER JOIN `variant_attributes` ON variant_attributes.variant_id = variants.id WHERE (variant_attributes.id IN ('2','5')) What I actually want to do is find which variants have BOTH variant attributes, with ID = 2 and 5. Is this possible with MySQL? Bonus question: is there a quick way to do this with Ruby on Rails, perhaps with SearchLogic?
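
    There is no IN ALL operator, but the usual workaround keeps the IN list and requires that the number of distinct matches equal the size of the list. A sketch using the tables from the question:

      SELECT v.*
      FROM variants v
      INNER JOIN variant_attributes va ON va.variant_id = v.id
      WHERE va.id IN (2, 5)
      GROUP BY v.id
      HAVING COUNT(DISTINCT va.id) = 2;   -- must match every id in the list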

    Read the article

  • Zend Framework: How to download a file from a MySQL blob field.

    - by Awan
    I am uploading files (of any type) into a MySQL table's blob field. Now I am able to get the binary data from that field, and when I print it, it shows binary data in the Firebug console. But I want to download that file as it was uploaded. How can I convert this binary data back into the original file? How do I do this in Zend? Thanks

    Read the article

  • Convert MSAccess Project Management Application to PHP/MySQL: Which Methodology?

    - by zzapper
    I've got to convert a not terribly complicated bespoke project management system from an MS Access application to PHP/MySQL. I've been programming for donkey's years but embarrassingly know practically nothing about modern methodologies, so the old 'learning curve' versus 'improved efficiency' conundrum rears its ugly head once again. Although I've Googled up some stuff, I don't want to prejudice your suggestions. Where would you start? I'm at your mercy.

    Read the article

  • Why is access to my database very slow?

    - by Fabien
    I have a MySQL database that used to work perfectly fine, but now it is dead slow on startup. When I type in $> mysql -u foo bar I get the following usual message for about 30 seconds before I get a prompt: Reading table information for completion of table and column names You can turn off this feature to get a quicker startup with -A Of course, I tried it and it goes a lot faster: $> mysql -u foo bar -A But why do I have to wait so long on a regular startup? This is not a very big database, and the data does not seem to be corrupted (everything looks fine after startup). I have no other client connecting to the MySQL server at the same time (only one process is shown by the command show full processlist) and I have already restarted the mysqld service. What's going on?

    Read the article

  • Is it possible to keep mysql migration running without keeping connection open?

    - by taw
    ALTER TABLE can easily take a few days, and during this time there's a non-negligible chance of the connection getting terminated due to network problems. Is it possible to start ALTER TABLE (or CREATE TABLE ... SELECT ...; or some other very long-running query) and leave it running without keeping the connection open the whole time? (The obvious solution of screen + the console mysql client won't easily work, as there's no ssh running on that server, only mysqld.)
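
    One server-side option is to hand the statement to the MySQL Event Scheduler, which runs it inside mysqld with no client connection attached. This is a sketch under the assumptions that the scheduler can be enabled on that server and that DDL is permitted in the event body; the table and column names (some_big_table, new_col) are stand-ins:

      SET GLOBAL event_scheduler = ON;

      -- One-shot event: runs once, shortly after creation, then drops itself
      CREATE EVENT run_big_alter
          ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 1 MINUTE
          ON COMPLETION NOT PRESERVE
          DO
            ALTER TABLE some_big_table ADD COLUMN new_col INT;

    The original connection can be closed as soon as the event is created, and progress can be checked from any later session via SHOW PROCESSLIST.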

    Read the article

  • JDBC CLASSPATH Not Working

    - by AeroDroid
    I'm setting up a simple JDBC connection to my working MySQL database on my server. I'm using the Connector/J driver provided by MySQL. According to their documentation, I'm supposed to create a CLASSPATH variable that points to the directory where mysql-connector-java-5.0.8-bin.jar is located. I used export set CLASSPATH=/path/mysql-connector-java-5.0.8-bin.jar:$CLASSPATH. When I type echo $CLASSPATH to see if it exists, everything seems fine. But then when I open a new terminal and type echo $CLASSPATH, it's no longer there. I think this is the main reason why my Java server won't connect over JDBC: it isn't saving the CLASSPATH variable I set. Anyone got suggestions or fixes on how to set up JDBC in the first place?

    Read the article

  • Amazon EC2: Instances, IPs and a wordpress blog (LAMP)

    - by JustinXXVII
    I had a link to my blog posted on Reddit yesterday, and MySQL crashed on my EC2 Micro instance. I know I didn't have that many visitors because I used a marketing link that tracks hits. The link got 167 hits over the course of the last 18 hours, and MySQL crashed twice. So anyway, since 167 visits is not a lot, I've done some short-term optimizations like restricting the number of Apache threads to limit the MySQL calls. I also set up WP Super Cache to serve static content. Soon I'm going to offload all of my images to S3 or CloudFront. So this leads me to my question. If this doesn't seem to help, and if I have another traffic "spike", how do AMIs work when you have a MySQL database? I think I understand that if you have more than one instance and assign the same Elastic IP to both of them, the incoming traffic gets distributed among both. But what happens when the MySQL database gets updated on one of the instances? I just need to wrap my mind around what happens when I create an AMI and then launch a new instance to help with traffic. Thanks for your suggestions.

    Read the article

  • Reading an XML file and storing data in a MySQL database.

    - by Jack Brown
    Hi, I need the following PHP script to do a currency conversion using a different XML file. It's a script from White Hat Web Design: http://www.white-hat-web-design.co.uk/articles/php-currency-conversion.php The script needs to be amended to do the following: 1. Every 24 hours, the PHP script downloads an XML file from rss.timegenie.com/foreign_exchange_rates_forex (rss.timegenie.com/forex.xml or rss.timegenie.com/forex2.xml). 2. It then stores the XML file's data/contents, i.e. currency and rate, in a MySQL database. Any advice would be appreciated.
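
    Whatever parses the feed, the storage side can be a small rates table keyed by currency code and refreshed with an upsert on each 24-hour run. A sketch with assumed table and column names (not from the question):

      CREATE TABLE currency_rates (
          code       CHAR(3) PRIMARY KEY,      -- e.g. 'USD', 'EUR'
          rate       DECIMAL(18,6) NOT NULL,   -- value taken from the feed
          updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
                     ON UPDATE CURRENT_TIMESTAMP
      );

      -- Run once per currency entry parsed from the XML
      INSERT INTO currency_rates (code, rate)
      VALUES ('USD', 1.234500)
      ON DUPLICATE KEY UPDATE rate = VALUES(rate);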

    Read the article

  • Insert into several inheritance tables with OUTPUT - SQL Server 2005

    - by csetzkorn
    Hi, I have a bunch of items, for simplicity's sake a flat table with unique names seeded via bulk insert:

      create table #items (
          ItemName NVARCHAR(255)
      )

    The database has this structure:

      create table Statements (
          Id INT IDENTITY NOT NULL,
          Version INT not null,
          FurtherDetails varchar(max) null,
          ProposalDateTime DATETIME null,
          UpdateDateTime DATETIME null,
          ProposerFk INT null,
          UpdaterFk INT null,
          primary key (Id)
      )

      create table Item (
          StatementFk INT not null,
          ItemName NVARCHAR(255) null,
          primary key (StatementFk)
      )

    Here Item is a child of Statement (inheritance). I would like to insert the items in #items using a set-based approach (avoiding triggers and loops). Can this be achieved with OUTPUT in my scenario? A 'loop-based' approach is just too slow; there I use something like this:

      insert into Statements (Version, FurtherDetails, ProposalDateTime, UpdateDateTime, ProposerFk, UpdaterFk)
      VALUES (1, null, getdate(), getdate(), @user_id, @user_id)

    etc. This is a start for the OUTPUT-based approach, but I am not sure whether it would work in my case, as ItemName is only inserted into Item:

      insert into Statements (
          Version,
          FurtherDetails,
          ProposalDateTime,
          UpdateDateTime,
          ProposerFk,
          UpdaterFk
      )
      output inserted.Id
      ...
      ???

    Thanks. Best wishes, Christian
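
    OUTPUT on a plain INSERT can only see the inserted columns, so ItemName cannot be captured directly. One set-based workaround that stays within SQL Server 2005 is to collect the new identities into a table variable and pair them with the item names afterwards; because the inserted Statements rows differ only in their identity value, any one-to-one pairing is valid. A hedged sketch:

      DECLARE @NewIds TABLE (Id INT);

      -- One Statements row per item, capturing the generated identities
      INSERT INTO Statements (Version, FurtherDetails, ProposalDateTime,
                              UpdateDateTime, ProposerFk, UpdaterFk)
      OUTPUT inserted.Id INTO @NewIds (Id)
      SELECT 1, null, getdate(), getdate(), @user_id, @user_id
      FROM #items;

      -- Pair each new Id with one ItemName by matching row numbers
      INSERT INTO Item (StatementFk, ItemName)
      SELECT ids.Id, itm.ItemName
      FROM (SELECT Id, ROW_NUMBER() OVER (ORDER BY Id) AS rn
            FROM @NewIds) ids
      INNER JOIN (SELECT ItemName, ROW_NUMBER() OVER (ORDER BY ItemName) AS rn
                  FROM #items) itm ON itm.rn = ids.rn;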

    Read the article

  • INSERT INTO statement that copies rows and auto-increments non-identity key ID column

    - by AmoebaMan17
    Given a table that has three columns:

      ID (primary key, not auto-incrementing)
      GroupID
      SomeValue

    I am trying to write a single SQL INSERT INTO statement that will make a copy of every row that has one GroupID into a new GroupID. Example beginning table:

      ID | GroupID | SomeValue
      ------------------------
      1  | 1       | a
      2  | 1       | b

    Goal after I run a simple INSERT INTO statement:

      ID | GroupID | SomeValue
      ------------------------
      1  | 1       | a
      2  | 1       | b
      3  | 2       | a
      4  | 2       | b

    I thought I could do something like:

      INSERT INTO MyTable (
          [ID]
          ,[GroupID]
          ,[SomeValue]
      )
      (
          SELECT
              (SELECT MAX(ID) + 1 FROM MyTable)
              ,@NewGroupID
              ,[SomeValue]
          FROM MyTable
          WHERE ID = @OriginalGroupID
      )

    This causes a primary key violation, since it seems to end up reusing the same MAX(ID)+1 value multiple times. Is my only recourse a bunch of INSERT statements in a T-SQL WHILE loop with an incrementing counter value? I also don't have the option of turning the ID into an auto-incrementing identity column, since that would break code I don't have the source for.
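
    One set-based way around the key violation is to offset each copied row from the current maximum ID by its own row number, so every new ID inside the single INSERT is distinct. A sketch, assuming the rows to copy are selected by GroupID and that no other session inserts concurrently:

      INSERT INTO MyTable (ID, GroupID, SomeValue)
      SELECT (SELECT MAX(ID) FROM MyTable)
             + ROW_NUMBER() OVER (ORDER BY ID),   -- distinct new ID per copied row
             @NewGroupID,
             SomeValue
      FROM MyTable
      WHERE GroupID = @OriginalGroupID;           -- the group being duplicated

    If concurrent inserts are possible, the MAX(ID) read needs protection, for example running the statement in a transaction with an UPDLOCK/HOLDLOCK hint on that lookup.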

    Read the article

  • Getting row right after insert returns no result

    - by Peekyou
    I am running unit tests, and when I try to insert data into the database and read it back right afterwards, I don't get anything (I have tried with DataAdapter and DataReader). However, when I put a 3-second sleep between the insert and the select (even with 1 second it doesn't work...), I get the result. In SQL Server Profiler I can see the execution; the insert completes about 10 milliseconds before the select begins. I can't figure out where this comes from.

    Read the article

  • How to track auto-generated IDs in a select-insert statement

    - by k rey
    I have two tables, detail and head. The detail table is written first; later, the head table is written. The head table is a summary of the detail table. I would like to keep a reference from the detail table to the head table. I have a solution, but it is not elegant and requires duplicating the joins and filters that were used during summation. I am looking for a better solution. Below is an example of what I currently have. In this example I have simplified the table structure; in the real world, the summation is very complex.

      -- Preparation
      create table #detail (
          detail_id int identity(1,1)
          , code char(4)
          , amount money
          , head_id int null
      );

      create table #head (
          head_id int identity(1,1)
          , code char(4)
          , subtotal money
      );

      insert into #detail ( code, amount ) values ( 'A', 5 );
      insert into #detail ( code, amount ) values ( 'A', 5 );
      insert into #detail ( code, amount ) values ( 'B', 2 );
      insert into #detail ( code, amount ) values ( 'B', 2 );

      -- I would like to somehow simplify the following two queries
      insert into #head ( code, subtotal )
      select code, sum(amount)
      from #detail
      group by code

      update #detail
      set head_id = h.head_id
      from #detail d
      inner join #head h on d.code = h.code

      -- This is the desired end result
      select * from #detail

    Desired end result of the detail table:

      detail_id  code  amount  head_id
      1          A     5.00    1
      2          A     5.00    1
      3          B     2.00    2
      4          B     2.00    2
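
    If this is SQL Server 2008 or later (the question does not state a version), one way to avoid repeating the summation is MERGE: unlike a plain INSERT, its OUTPUT clause can expose source columns, so the code-to-head_id mapping falls out of the insert itself. A hedged sketch:

      declare @map table ( head_id int, code char(4) );

      -- The summation is written once, in the USING clause
      merge #head as h
      using ( select code, sum(amount) as subtotal
              from #detail
              group by code ) as s
         on 1 = 0                        -- never matches, so every summary row is inserted
      when not matched then
          insert ( code, subtotal ) values ( s.code, s.subtotal )
      output inserted.head_id, s.code into @map ( head_id, code );

      -- Reuse the captured mapping instead of re-deriving the join
      update d
      set    head_id = m.head_id
      from   #detail d
      inner join @map m on m.code = d.code;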

    Read the article
