Search Results

Search found 40581 results on 1624 pages for 'mysql select db'.

Page 386/1624 | < Previous Page | 382 383 384 385 386 387 388 389 390 391 392 393  | Next Page >

  • Web development scheme for staging and production servers using Git Push

    - by ServAce85
    I am using git to manage a dynamic website (PHP + MySQL) and I want to send my files from my localhost to my staging and production servers in the most efficient and hassle-free way. I am currently convinced that the best way for me to approach this problem is to use this git branching model to organize my local git repo. From there, I will use the release branches to push to my staging server for testing. Once I am happy that the release code works on the staging server, I can then merge with my master branch and push that to my production server. Pushing to Staging Server: As noted in many introductory git posts, I could run into problems pushing into a non-bare repo, so, as suggested in this response, I plan to push the release branch to a bare repo on the server and have a post-receive hook that clones the bare repo to a non-bare repo that also acts as the web-hosted directory. Pushing to Production Server: Here's my newest source of confusion... The response that I cited above made me curious as to why @Paul states that it's a completely different story when pushing to a live production server. I guess I don't see the problem. Would it be safe and hassle-free to follow the same steps as above, but for the master branch? Where are the potential pitfalls? Config Files: With respect to configuration files that are unique to each environment (.htaccess, config.php, etc.), it seems simplest to .gitignore each of those files in their respective repos on their respective servers. Can you see anything immediately wrong with this? Better solutions? (One way to handle such per-environment config files is sketched after this entry.) Accessing Data: Finally, as I initially stated, the site uses MySQL databases to store data. How would you suggest I access that data (for testing purposes) from the staging server and localhost? I realize that I may have asked way too many questions for a single post, but since they're all related to the best way to set up this development scheme, I thought it was necessary.

    Read the article
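
    One way to handle the per-environment config files mentioned above is to commit shared defaults and git-ignore a small local override file on each server. The sketch below is illustrative only; the file name config.local.php and the settings are placeholders, not taken from the post.

        <?php
        // config.php (committed): defaults shared by every environment.
        $config = array(
            'db_host' => 'localhost',
            'db_name' => 'myapp',
            'db_user' => 'myapp',
            'db_pass' => '',
            'debug'   => false,
        );

        // config.local.php (listed in .gitignore): per-server overrides,
        // e.g. the staging or production database credentials.
        $localFile = __DIR__ . '/config.local.php';
        if (is_readable($localFile)) {
            $config = array_merge($config, (array) include $localFile);
        }

        return $config;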

  • Database structure for ecommerce site

    - by imanc
    Hey guys, I have been tasked with designing an ecommerce solution. The aspect that is causing me the most problems is the database. Currently the site consists of 10+ country-based shops, each with its own database (all residing on the same MySQL instance). For the new site I'd rather all these shop databases be merged into one database so that all tables (products, orders, customers etc.) have a shop_id field. From a programming perspective this seems to make the most sense, as we won't have to manage data across multiple databases. (A small example of the shop_id-scoped approach follows this entry.) Currently the entire site generates about 120k orders a year, but it is experiencing fairly heavy growth and we need to design a solution that will scale. In 5 years there may be more than a million orders per year and a database that contains 5 years' order history (archiving may be a solution here). The question is - do we use a single database, or do we keep the database-per-shop structure? I am currently trying to find supporting evidence for either avenue. The company I am designing the solution for prefers the per-shop database structure because they believe it will allow the sites to scale. But my argument is that the shops' databases probably won't get so busy over the next few years that they exceed the capacity of a MySQL database and a "no expenses spared" hardware set-up. I am wondering if anyone has any advice either way? Does anyone have experience with websites / ecommerce sites that have tables containing millions of records? I know there is probably not a clear answer here, but at what stage do we have too many records or table files too large to have a fast-loading site? Also, if anyone has any advice on sources of information - books, websites, etc. where I can do further research, it would be highly appreciated! Cheers, imanc

    Read the article
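
    For what it's worth, the single-database layout usually stays manageable when every table carries a shop_id, every query is scoped by it, and the relevant indexes lead with it. A hypothetical sketch, not the poster's actual schema (table, column and credential names are placeholders):

        <?php
        // Fetch recent orders for one shop from a merged database.
        // Assumes an `orders` table with a composite index like (shop_id, created_at).
        $pdo = new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass');
        $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

        $stmt = $pdo->prepare(
            'SELECT order_id, customer_id, total
               FROM orders
              WHERE shop_id = :shop_id
              ORDER BY created_at DESC
              LIMIT 50'
        );
        $stmt->execute(array(':shop_id' => 3));
        $orders = $stmt->fetchAll(PDO::FETCH_ASSOC);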

  • What should I do to accommodate large-scale data storage and retrieval?

    - by kailashbuki
    There are two columns in the table inside the MySQL database. The first column contains the fingerprint while the second one contains the list of documents which have that fingerprint. It's much like an inverted index built by search engines. An instance of a record inside the table is shown below; 34 "doc1, doc2, doc45" The number of fingerprints is very large (it can range up to trillions). There are basically the following operations on the database: inserting/updating the record & retrieving the record according to the match on the fingerprint. The table definition Python snippet is: self.cursor.execute("CREATE TABLE IF NOT EXISTS `fingerprint` (fp BIGINT, documents TEXT)") And the snippet for the insert/update operation is: if self.cursor.execute("UPDATE `fingerprint` SET documents=CONCAT(documents,%s) WHERE fp=%s",(","+newDocId, thisFP))== 0L: self.cursor.execute("INSERT INTO `fingerprint` VALUES (%s, %s)", (thisFP,newDocId)) The only bottleneck I have observed so far is the query time in MySQL. My whole application is web based, so time is a critical factor. I have also thought of using Cassandra but have less knowledge of it. Please suggest a better way to tackle this problem.

    Read the article

  • problem with phpMyAdmin advanced features

    - by typoknig
    Hi all, I am having trouble putting the final touches on my MySQL/Apache/phpMyAdmin install on a Windows XP system. I am trying to get rid of all the error messages in phpMyAdmin and I have gotten rid of all of them except the ones related to "advanced features." The exact error message I have is: The additional features for working with linked tables have been deactivated. To find out why click here. I have read up on the cause of the errors but I must be missing something, because I still cannot get the warning to go away. Here is what I have done: Created the linked-tables infrastructure (default name "phpmyadmin") per the phpMyAdmin instructions and enabled "pmadb" in my "config.inc.php" file. Specified (enabled) the table names in my "config.inc.php" file (there are 9 tables total). Created a "controluser" and granted only SELECT privileges per the phpMyAdmin instructions. Set "controluser" to pma and "controlpass" to pmapass in the "config.inc.php" file. From what I can see these are all the instructions phpMyAdmin gives on this subject, and I am unable to locate any tutorials on the specifics of "advanced features" in phpMyAdmin. (A sample config.inc.php excerpt follows this entry.) Any help would be appreciated, and be gentle, this is my first go with MySQL/phpMyAdmin.

    Read the article
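
    For reference, the linked-tables setup normally needs the control user, the pmadb name and each table name declared in config.inc.php, and phpMyAdmin only re-reads them after you log out and back in. The excerpt below is a hedged sketch; directive and table names can differ between phpMyAdmin versions, so check the documentation shipped with your copy rather than treating this as exact.

        <?php
        // Excerpt of config.inc.php -- illustrative only.
        $i = 1;
        $cfg['Servers'][$i]['controluser']     = 'pma';
        $cfg['Servers'][$i]['controlpass']     = 'pmapass';
        $cfg['Servers'][$i]['pmadb']           = 'phpmyadmin';
        $cfg['Servers'][$i]['bookmarktable']   = 'pma_bookmark';
        $cfg['Servers'][$i]['relation']        = 'pma_relation';
        $cfg['Servers'][$i]['table_info']      = 'pma_table_info';
        $cfg['Servers'][$i]['table_coords']    = 'pma_table_coords';
        $cfg['Servers'][$i]['pdf_pages']       = 'pma_pdf_pages';
        $cfg['Servers'][$i]['column_info']     = 'pma_column_info';
        $cfg['Servers'][$i]['history']         = 'pma_history';
        $cfg['Servers'][$i]['designer_coords'] = 'pma_designer_coords';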

  • Connect Rails model to non-rails database

    - by the_snitch
    I'm creating a new web application (Rails 3 beta), pieces of which will access data from a legacy MySQL database that a current PHP application is using. I do not wish to modify the legacy DB schema; I just want to be able to read/write to it, as well as the Rails application having its own database using ActiveRecord for the newer stuff. I'm using MySQL for the Rails app, so I have the adapter installed. What is the best way to do this? For example, I want contacts to come from the old database. Should I create a contacts controller and manually call SQL to get the variables for the views? Or should I create a Contact model and define attributes that match the fields in the database, and am I able to use it like Contact.mail_address to have it call "SELECT mailaddr FROM contacts WHERE id=Contact.id"? Sorry, I've never done much in Rails outside of the standard stuff that is documented well. I'm not sure of what the best approach would be. Ideally, I want the contacts to be presented to my Rails application as natively as possible, so that I can expose them RESTfully for API access. Any suggestions and code examples would be much appreciated.

    Read the article

  • Variable not implementing correctly from if statement

    - by swiftsly
    I have an if statement that is supposed to set the variable $pc162v to a link specified in the MySQL table if content exists in the $vid column of the row. The problem is, the PHP is detecting that there's a link in the MySQL table, but isn't setting the $pc162v variable correctly. Here are the variable declarations: $pc162v = ""; $vid162 = '<embed width="420" height="236" src="'.$pc162v.'" type="application/x-shockwave-flash"></embed>'; Here's the section of the if statement: if (empty($row[7])) { $vid162 = ''; } else { $pc162v = $row[7]; } In my web browser, the part of the code where the variable $vid162 is used shows up as the following: <embed width="420" height="236" src="" type="application/x-shockwave-flash"> I have also tried setting $vid162 to: <embed width="420" height="236" src="<?php echo $pc162v; ?>" type="application/x-shockwave-flash"></embed> and that just makes the code in my web browser: <embed width="420" height="236" src="<?php echo $pc162v; ?>" type="application/x-shockwave-flash"> Hope someone has a solution! (One possible fix is sketched after this entry.) Thanks in advance.

    Read the article
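
    The embed string above is built while $pc162v is still empty, and PHP interpolates the variable's value at that moment, so the later assignment in the else branch never reaches it. One fix is to assign $pc162v first and only then build the markup; a sketch (the $row array is assumed to come from the existing fetch loop):

        <?php
        // Set $pc162v from the row first...
        $pc162v = empty($row[7]) ? '' : $row[7];

        // ...then build the embed markup from whatever $pc162v holds now.
        $vid162 = '';
        if ($pc162v !== '') {
            $vid162 = '<embed width="420" height="236" src="'
                    . htmlspecialchars($pc162v)
                    . '" type="application/x-shockwave-flash"></embed>';
        }
        echo $vid162;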

  • Database contents setting themselves to 0

    - by Luis Armando
    I have a database that contains 4 tables; however, I'm using one of them, which is separate from the others. In this table I have 4 fields which are varchar and the rest are ints (11 other fields). When the users fill up the DB everything gets saved correctly; however, it has happened 3 times so far that the database values for the ints reset to 0 without any apparent reason. At first I thought it was because those fields (where the numbers should go) were varchars, not ints. However, since I changed that, it has happened again. I've already double-checked my code and I have nothing that even updates or inserts a 0 value. Also I'm using CodeIgniter and Active Record, which protect against SQL injection, AND have XSS filtering enabled. Could anyone point out something I might be missing or a reason for this to be happening? Also, I'm pretty sure about the answer to this, but is there ANY way to recover some data? Other than having to ask everyone to fill in everything again.. =/ ** EDIT ** The storage engine is MyISAM and the collation is latin1_swedish_ci, pack keys are default; for all intents and purposes it's a normal DB

    Read the article

  • Having a problem inserting into database

    - by neo skosana
    I have a stored procedure: CREATE PROCEDURE busi_reg(IN instruc VARCHAR(10), IN tble VARCHAR(20), IN busName VARCHAR(50), IN busCateg VARCHAR(100), IN busType VARCHAR(50), IN phne VARCHAR(20), IN addrs VARCHAR(200), IN cty VARCHAR(50), IN prvnce VARCHAR(50), IN pstCde VARCHAR(10), IN nm VARCHAR(20), IN surname VARCHAR(20), IN eml VARCHAR(50), IN pswd VARCHAR(20), IN srce VARCHAR(50), IN refr VARCHAR(50), IN sess VARCHAR(50)) BEGIN INSERT INTO b_register SET business_name = busName, business_category = busCateg, business_type = busType, phone = phne, address = addrs, city = cty, province = prvnce, postal_code = pstCde, firstname = nm, lastname = surname, email = eml, password = pswd, source = srce, ref_no = refr; END; This is my PHP script: $busName = $_POST['bus_name']; $busCateg = $_POST['bus_cat']; $busType = $_POST['bus_type']; $phne = $_POST['phone']; $addrs = $_POST['address']; $cty = $_POST['city']; $prvnce = $_POST['province']; $pstCde = $_POST['postCode']; $nm = $_POST['name']; $surname = $_POST['lname']; $eml = $_POST['email']; $srce = $_POST['source']; $ref = $_POST['ref_no']; $result2 = $db->query("CALL busi_reg('$instruc', '$tble', '$busName', '$busCateg', '$busType', '$phne', '$addrs', '$cty', '$prvnce', '$pstCde', '$nm', '$surname', '$eml', '$pswd', '$srce', '$refr', '')"); if($result) { echo "Data has been saved"; } else { printf("Error: %s\n",$db->error); } Now the error that I get: Commands out of sync; you can't run this command now (A sketch of draining the extra result set left behind by CALL follows this entry.)

    Read the article
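
    If $db here is a mysqli connection, one common cause of "Commands out of sync" is that every CALL to a stored procedure returns an extra result set which has to be consumed before the next statement is sent. A hedged sketch of a small helper, assuming mysqli (adapt it to whatever wrapper $db actually is):

        <?php
        // Run a CALL and drain every result set it produces, so the
        // connection is ready for the next query.
        function callProcedure(mysqli $db, $sql)
        {
            $ok = $db->query($sql);
            if ($ok instanceof mysqli_result) {
                $ok->free();
            }
            while ($db->more_results() && $db->next_result()) {
                $extra = $db->store_result();
                if ($extra instanceof mysqli_result) {
                    $extra->free();
                }
            }
            return $ok !== false;
        }

        callProcedure($db, "CALL busi_reg(/* ... parameters ... */)");
        // A second CALL is now safe on the same connection.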

  • How can I determine when an InnoDB table was last changed?

    - by David M
    I've had success in the past storing the (heavily) processed results of a database query in memcached, using the last update time of the underlying table(s) as part of the cache key. For MyISAM tables, that last-changed time is available in SHOW TABLE STATUS. Unfortunately, that's usually NULL for InnoDB tables. In MySQL 4.1, the ctime for an InnoDB table in its SHOW TABLE STATUS line was usually its actual last update time, but that doesn't seem to be true for MySQL 5.1. There is a DATETIME field in the table, but it only shows when a row has been modified - it cannot show the deletion time of a row that's not there anymore! So, I really cannot use MAX(update_time). Here's the really tricky part. I have a number of replicas that I do reads from. Can I figure out the state of the table in a way that doesn't rely on when the changes have actually been applied? My conclusion after working on this for a while is that it's not going to be possible to get this information as cheaply as I'd like. I'm probably going to cache data until the time that I expect the table to change (it's updated once a day), and let the query cache help out where it can.

    Read the article

  • Django tutorial says I haven't set DATABASE_ENGINE setting yet... but I have

    - by Joe
    I'm working through the Django tutorial and receiving the following error when I run the initial python manage.py syncdb: Traceback (most recent call last): File "manage.py", line 11, in <module> execute_manager(settings) File "/Library/Python/2.6/site-packages/django/core/management/__init__.py", line 362 in execute_manager utility.execute() File "/Library/Python/2.6/site-packages/django/core/management/__init__.py", line 303, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 195, in run_from_argv self.execute(*args, **options.__dict__) File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 222, in execute output = self.handle(*args, **options) File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 351, in handle return self.handle_noargs(**options) File "/Library/Python/2.6/site-packages/django/core/management/commands/syncdb.py", line 49, in handle_noargs cursor = connection.cursor() File "/Library/Python/2.6/site-packages/django/db/backends/dummy/base.py", line 15, in complain raise ImproperlyConfigured, "You haven't set the DATABASE_ENGINE setting yet." django.core.exceptions.ImproperlyConfigured: You haven't set the DATABASE_ENGINE setting yet. My settings.py looks like: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', # Add 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'. 'NAME': 'dj_tut', # Or path to database file if using sqlite3. 'USER': '', # Not used with sqlite3. 'PASSWORD': '', # Not used with sqlite3. 'HOST': '', # Set to empty string for localhost. Not used with sqlite3. 'PORT': '', # Set to empty string for default. Not used with sqlite3. } } I'm guessing this is something simple, but why isn't it seeing the ENGINE setting?

    Read the article

  • Faster Insertion of Records into a Table with SQLAlchemy

    - by Kyle Brandt
    I am parsing a log and inserting it into either MySQL or SQLite using SQLAlchemy and Python. Right now I open a connection to the DB, and as I loop over each line, I insert it after it is parsed (This is just one big table right now, not very experienced with SQL). I then close the connection when the loop is done. The summarized code is: log_table = schema.Table('log_table', metadata, schema.Column('id', types.Integer, primary_key=True), schema.Column('time', types.DateTime), schema.Column('ip', types.String(length=15)) .... engine = create_engine(...) metadata.bind = engine connection = engine.connect() .... for line in file_to_parse: m = line_regex.match(line) if m: fields = m.groupdict() pythonified = pythoninfy_log(fields) #Turn them into ints, datatimes, etc if use_sql: ins = log_table.insert(values=pythonified) connection.execute(ins) parsed += 1 My two questions are: Is there a way to speed up the inserts within this basic framework? Maybe have a Queue of inserts and some insertion threads, some sort of bulk inserts, etc? When I used MySQL, for about ~1.2 million records the insert time was 15 minutes. With SQLite, the insert time was a little over an hour. Does that time difference between the db engines seem about right, or does it mean I am doing something very wrong?

    Read the article

  • No operations allowed after statement closed issue

    - by Washu
    I have the following methods in my singleton to handle the JDBC connection: public void openDB() throws ClassNotFoundException, IllegalAccessException, InstantiationException, SQLException { Class.forName("com.mysql.jdbc.Driver").newInstance(); String url = "jdbc:mysql://localhost/mbpe_peru";//mydb conn = DriverManager.getConnection(url, "root", "admin"); st = conn.createStatement(); } public void sendQuery(String query) throws SQLException { st.executeUpdate(query); } public void closeDB() throws SQLException { st.close(); conn.close(); } And I'm having a problem in a method where I have to call this twice. private void jButton1ActionPerformed(ActionEvent evt) { Main.getInstance().openDB(); Main.getInstance().sendQuery("call insertEntry('"+EntryID()+"','"+SupplierID()+"');"); Main.getInstance().closeDB(); Main.getInstance().openDB(); for(int i=0;i<dataBox.length;i++){ Main.getInstance().sendQuery("call insertCount('"+EntryID()+"','"+SupplierID()+"','"+BoxID()+"'); Main.getInstance().closeDB(); } } I have already tried keeping the connection open, sending the 2 queries and closing it afterwards, and it didn't work... The only way it worked was to not use the methods, declare the commands for the connection and use different variables for the connection and the statement. I thought that if I close the Connection and the Statement I could use the variables once again, since it is a method, but I'm not able to. Is there any way to solve this using my methods for the JDBC connection?

    Read the article

  • Smart way to execute function in flash from an imported html (or XML) plain text

    - by DomingoSL
    Ok, here is the thing. I have a very simple Flash program coded in AS3; inside this program there is a dynamic text field that is HTML-capable. Also I have a database where I will put some information. Each element in the database can be linked to another. For example, this database will contain tourist spots, and they can be related to others in the same database: Ramdom place 1 This interesting spot is near <link to="ramdo2">ramdom place2</link>. And so on... The information in the database is retrieved by the Flash application and shown in the dynamic text field. I need somehow to tell my Flash application that there has to be a link to another part of the database, so when the user clicks the link a function in Flash calls for this new data. The database is a simple MySQL server, and the data is not yet in there - it is waiting for your suggestions on how to format it and develop the solution. I'm a PHP developer too, so I can make a gateway in PHP to read the MySQL data and then return some format to Flash, an XML for example.

    Read the article

  • Why should I abstract my data layer?

    - by Gazillion
    OOP principles were difficult for me to grasp because for some reason I could never apply them to web development. As I developed more and more projects I started understanding how some parts of my code could use certain design patterns to make them easier to read, reuse, and maintain, so I started to use them more and more. The one thing I still can't quite comprehend is why I should abstract my data layer. Basically, if I need to print a list of items stored in my DB to the browser I do something along the lines of: $sql = 'SELECT * FROM table WHERE type = "type1"'; $result = mysql_query($sql); while($row = mysql_fetch_assoc($result)) { echo '<li>'.$row['name'].'</li>'; } I'm reading all these How-Tos or articles preaching about the greatness of PDO but I don't understand why. I don't seem to be saving any LoCs and I don't see how it would be more reusable, because all the functions that I call above just seem to be encapsulated in a class but do the exact same thing. The only advantage I'm seeing in PDO is prepared statements. I'm not saying data abstraction is a bad thing; I'm asking these questions because I'm trying to design my current classes correctly and they need to connect to a DB, so I figured I'd do this the right way. Maybe I'm just reading bad articles on the subject :) I would really appreciate any advice, links, or concrete real-life examples on the subject! (A short PDO version of this listing follows this entry.)

    Read the article
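
    As a concrete point of comparison, here is roughly the same listing written with PDO and a prepared statement; the gain is less about line count than about parameter binding and being able to swap the underlying driver without touching the calling code. The connection details below are placeholders.

        <?php
        // The same listing as above, with the type passed as a bound parameter.
        $pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
        $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

        $stmt = $pdo->prepare('SELECT name FROM `table` WHERE type = :type');
        $stmt->execute(array(':type' => 'type1'));

        foreach ($stmt as $row) {
            echo '<li>' . htmlspecialchars($row['name']) . '</li>';
        }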

  • Regularly update database without browser/user

    - by Chris M
    I currently have a MySQL database which I was hoping to use to store regularly updated data from a temperature sensor connected to the internet. I currently have a page that, when opened, will grab the current temperature and the current timestamp and add them as an entry to the database, but I was looking for a way to do that without refreshing the page every 5 seconds. Detail: The data comes from an Arduino Ethernet, posted to an IP address. Currently, I'm using cURL to grab the data from the IP, add a timestamp and save it to the DB. Obviously this only updates when the page is refreshed (it uses PHP). Here is a live feed of the data - http://wetdreams.org.uk/ChrisProject/UI/live_graph_two.html TL;DR - Basically I need a middle man to grab the data from the IP and post it to MySQL. (A cron-driven sketch follows this entry.) Edit: Thanks for all the advice. There might be a little bit of confusion; I'm looking for a solution that (ideally) doesn't require a computer to be on at all (other than the server containing the database). Since I'm looking to store data over long periods of time (weeks), I'd like to set it up and leave a script running on the server (or Arduino) that gets the temp and posts it to the database. In my head I would like to have a page on the server that automatically (without any browser open, or any other prompting other than a timer) calls a PHP script. Hope that clears things up!

    Read the article
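
    One browser-free option is to move the existing cURL-plus-insert code into a small command-line script and let cron on the web server run it. Note that cron's smallest interval is one minute; for 5-second resolution the script would need its own loop, or the Arduino would have to POST directly to a PHP endpoint. The URL, credentials and table names below are placeholders.

        <?php
        // poll_temperature.php -- e.g. crontab entry: * * * * * php /path/to/poll_temperature.php
        $ch = curl_init('http://192.0.2.1/temperature'); // placeholder sensor address
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 10);
        $reading = curl_exec($ch);
        curl_close($ch);

        if ($reading !== false) {
            $pdo = new PDO('mysql:host=localhost;dbname=sensors', 'user', 'pass');
            $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
            $stmt = $pdo->prepare('INSERT INTO readings (temperature, recorded_at) VALUES (:t, NOW())');
            $stmt->execute(array(':t' => trim($reading)));
        }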

  • jQuery / PHP / Joomla: one of two select comboboxes does not get updated

    - by bluesbrother
    I am making a Joomla component which has 3 comboboxes/selects on the page: one with languages and 2 with subjects. If you change the language the other two get filled with the same data (the subjects in the selected language); the names of the selectboxes are different but they are otherwise the same. I get an error for one of the subject boxes (hence the URL gets marked red), but there is no logic in which one will give an error. In Firebug I get the HTML back for one but not the other, and that one gets updated while the other one gives nothing back. If I right click in Firebug on the one that gave the error, and do "send again", it will load fine. Is there a timing problem? The change event of the language selectbox: jQuery('#cmbldcoi_ldlink_language').bind('change', function() { var cmbLangID = jQuery('#cmbldcoi_ldlink_language').val(); if (cmbLangID !=0) { getSubjectCmb_lang(cmbLangID, 'cmbldcoi_ldlink_subjects', '#ldlinksubjects'); } }); Function that requests the PHP file to create the HTML for the select: function getSubjectCmb_lang(langID, cmbName, DivWhereIn) { var xdate = new Date().getTime(); var url = 'index.php?option=com_ldadmin&view=ldadmin&format=raw&task=getcmbsubj_lang&langid=' + langID + '&cmbname=' + cmbName + '&'+ xdate; jQuery(DivWhereIn).load(url, function(){ }); } And in the PHP file a connection is made to the database to get the information to build the selectbox. I use a function for this that is okay, because it builds all my selectboxes. The only place where there are problems with select boxes is on the pages that have 2 selects that need to change when a third one changes. My guess is that it goes wrong somewhere in the jQuery, and I think it has to do with timing. But I am open to all suggestions. Thanks. UPDATE: No, the ID and Name fields are different. They are named: cmbldcoi_child cmbldcoi_parent Here is my code. The change event for the first combobox which makes the other two change: jQuery('#cmbldcoi_language_chain_subj').bind('change', function(){ var langID = jQuery('#cmbldcoi_language_chain_subj').val(); if (langID != 0){ getSubjectCmb_lang(langID, 'cmbldcoi_child', '#div_cmbldcoi_child'); getSubjectCmb_lang(langID, 'cmbldcoi_parent', '#div_cmbldcoi_parent'); } }); } The function which calls the PHP file to get the info from the database: function getSubjectCmb_lang(langID, cmbName, DivWhereIn){ var xdate = new Date().getTime(); var url = 'index.php?option=com_ldadmin&view=ldadmin&format=raw&task=getcmbsubj_lang&langid=' + langID + '&cmbname=' + cmbName + '&'+ xdate; jQuery(DivWhereIn).load(url, function(){ }); } The PHP code: function getcmbsubj_lang(){ $langid = JRequest::getVar('langid'); if ($langid > 0 ){ $langid = JRequest::getVar('langid'); }else{ $langid = 1; } $cmbName = JRequest::getVar('cmbname'); //$lang_sufx = self::get_#__sufx($langid); print ld_html::ld_create_cmb_html($cmbName, '#__ldcoi_subjects','id', 'subject_name', " WHERE id_language={$langid} ORDER BY subject_name" ); } There is a class called ld_html which has a function in it that creates a combobox, ld_html::ld_create_cmb_html(). It takes a table name, id field, name field and optionally a WHERE clause. It all works fine if there is just one combobox that needs updating. It gives a problem when there are two. Thanks for the help!

    Read the article

  • SQL: find entries in 1:n relation that don't comply with condition spanning multiple rows

    - by milianw
    I'm trying to optimize SQL queries in Akonadi and came across the following problem, which is apparently not easy to solve with SQL, at least for me: Assume the following table structure (should work in SQLite, PostgreSQL, MySQL): CREATE TABLE a ( a_id INT PRIMARY KEY ); INSERT INTO a (a_id) VALUES (1), (2), (3), (4); CREATE TABLE b ( b_id INT PRIMARY KEY, a_id INT, name VARCHAR(255) NOT NULL ); INSERT INTO b (b_id, a_id, name) VALUES (1, 1, 'foo'), (2, 1, 'bar'), (3, 1, 'asdf'), (4, 2, 'foo'), (5, 2, 'bar'), (6, 3, 'foo'); Now my problem is to find entries in a that are missing name entries in table b. E.g. I need to make sure each entry in a has at least the name entries "foo" and "bar" in table b. Hence the query should return something similar to: a_id = 3 is missing name "bar" a_id = 4 is missing names "foo" and "bar" Since both tables are potentially huge in Akonadi, performance is of utmost importance. One solution in MySQL would be: SELECT a.a_id, CONCAT('|', GROUP_CONCAT(name ORDER BY NAME ASC SEPARATOR '|'), '|') as names FROM a LEFT JOIN b USING( a_id ) GROUP BY a.a_id HAVING names IS NULL OR names NOT LIKE '%|bar|foo|%'; I have yet to measure the performance tomorrow, but I severely doubt it's at all fast for tens of thousands of entries in a and thrice as many in b. Furthermore we want to support SQLite and PostgreSQL, where to my knowledge the GROUP_CONCAT function is not available. (A portable NOT EXISTS variant is sketched after this entry.) Thanks, good night.

    Read the article
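
    One portable alternative is to cross join the required names against a and keep the combinations that have no matching row in b; an index on b(a_id, name) keeps the NOT EXISTS probe cheap. The SQL is the relevant part here; the PDO wrapper below is only so the example is runnable, and the connection details are placeholders.

        <?php
        // List (a_id, name) pairs where a required name is missing from b.
        $sql = "
            SELECT a.a_id, req.name AS missing_name
              FROM a
             CROSS JOIN (SELECT 'foo' AS name UNION ALL SELECT 'bar') AS req
             WHERE NOT EXISTS (
                   SELECT 1 FROM b
                    WHERE b.a_id = a.a_id AND b.name = req.name)";

        $pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
        foreach ($pdo->query($sql) as $row) {
            echo "a_id = {$row['a_id']} is missing name \"{$row['missing_name']}\"\n";
        }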

  • md5_file() not working with IP addresses?

    - by Rob
    Here is my code relating to the question: $theurl = trim($_POST['url']); $md5file = md5_file($theurl); if ($md5file != '96a0cec80eb773687ca28840ecc67ca1') { echo 'Hash doesn\'t match. Incorrect file. Reupload it and try again'; When I run this script, it doesn't even output an error. It just stops. It loads for a bit, and then it just stops. Further down the script I implement it again, and it fails here, too: while($row=mysql_fetch_array($execquery, MYSQL_ASSOC)){ $hash = @md5_file($row['url']); $url = $row['url']; mysql_query("UPDATE urls SET hash='" . $hash . "' WHERE url='" . $url . "'") or die("MYSQL is indeed gay: ".mysql_error()); if ($hash != '96a0cec80eb773687ca28840ecc67ca1'){ $status = 'down'; }else{ $status = 'up'; } mysql_query("UPDATE urls SET status='" . $status . "' WHERE url='" . $url . "'") or die("MYSQL is indeed gay: ".mysql_error()); } And it checks all the URLs just fine, until it gets to one with an IP instead of a domain, such as: http://188.72.215.195/config.php at which point, again, the script loads for a bit and then stops. (A timeout-based workaround is sketched after this entry.) Any help would be much appreciated; if you need any more information, just ask.

    Read the article
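
    When the remote host accepts the connection but never sends a body (common for a bare IP pointing at a dead or firewalled box), md5_file() can block until PHP's socket timeout expires, which looks exactly like the script "just stopping". One workaround, sketched below, is to fetch the body with an explicit short timeout and hash the string instead; the timeout value is arbitrary.

        <?php
        // Fetch a URL with a hard timeout, then hash it; returns false on failure.
        function md5_remote($url, $timeoutSeconds = 5)
        {
            $context = stream_context_create(array(
                'http' => array('timeout' => $timeoutSeconds),
            ));
            $body = @file_get_contents($url, false, $context);
            return ($body === false) ? false : md5($body);
        }

        $hash = md5_remote('http://188.72.215.195/config.php');
        $status = ($hash !== false && $hash == '96a0cec80eb773687ca28840ecc67ca1') ? 'up' : 'down';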

  • update record only works when there is no auto_increment

    - by every_answer_gets_a_point
    I am accessing a MySQL table through an ODBC connection in Excel. Here is how I am updating the table: With rs .AddNew ' create a new record ' add values to each field in the record .Fields("datapath") = dpath .Fields("analysistime") = atime .Fields("reporttime") = rtime .Fields("lastcalib") = lcalib .Fields("analystname") = aname .Fields("reportname") = rname .Fields("batchstate") = "bstate" .Fields("instrument") = "NA" .Update ' stores the new record End With When the schema of the table is this, updating works: create table batchinfo(datapath text,analysistime text,reporttime text,lastcalib text,analystname text, reportname text, batchstate text, instrument text); but when I have auto_increment in there it does not work: CREATE TABLE batchinfo ( rowid int(11) NOT NULL AUTO_INCREMENT, datapath text, analysistime text, reporttime text, lastcalib text, analystname text, reportname text, batchstate text, instrument text, PRIMARY KEY (rowid) ) ENGINE=InnoDB AUTO_INCREMENT=67 DEFAULT CHARSET=latin1 Has anyone experienced a problem like this, where updating does not work when there is an auto_increment field involved? Connection string: Private Sub ConnectDB() Set oConn = New ADODB.Connection oConn.Open "DRIVER={MySQL ODBC 5.1 Driver};" & _ "SERVER=localhost;" & _ "DATABASE=employees;" & _ "USER=root;" & _ "PASSWORD=pas;" & _ "Option=3" End Sub Also, here's the rs.Open: rs.Open "batchinfo", oConn, adOpenKeyset, adLockOptimistic, adCmdTable

    Read the article

  • SQL: Limit rows linked to each joined row

    - by SolidSnakeGTI
    Hello, I have a situation that requires a certain result set from a MySQL query; let's look at the current query first and then my question: SELECT thread.dateline AS tdateline, post.dateline AS pdateline, MIN(post.dateline) FROM thread AS thread LEFT JOIN post AS post ON(thread.threadid = post.threadid) LEFT JOIN forum AS forum ON(thread.forumid = forum.forumid) WHERE post.postid != thread.firstpostid AND thread.open = 1 AND thread.visible = 1 AND thread.replycount >= 1 AND post.visible = 1 AND (forum.options & 1) AND (forum.options & 2) AND (forum.options & 4) AND forum.forumid IN(1,2,3) GROUP BY post.threadid ORDER BY tdateline DESC, pdateline ASC As you can see, mainly I need to select the dateline of threads from the 'thread' table, in addition to the dateline of the second post of each thread, all under the conditions you see in the WHERE clause. Since each thread has many posts, and I need only one result per thread, I've used the GROUP BY clause for that purpose. This query will return only one post's dateline with its related unique thread. My questions are: How can I limit the returned threads per forum? Suppose I need at most 5 threads to be returned for each forum declared in the WHERE clause 'forum.forumid IN(1,2,3)'; how can this be achieved? Are there any recommendations for optimizing this query (of course after solving the first point)? Notes: I prefer not to use sub-queries, but if that's the only solution available I'll accept it. Double queries are not recommended. I'm sure there's a smart solution for this situation. I'm using MySQL 4.1+, but if you know the answer for another engine, just share. Advice appreciated in advance :)

    Read the article

  • PHP PDO Parameters from a function returned array

    - by noko
    I've got a function written that runs a query based on parameters passed to the function. I can't seem to figure out why doing the following returns a result: function test($function_returned_array) { $variable = 'Hello World'; $sql = 'SELECT `name`, `pid` FROM `products` WHERE `name` IN (?)'; $found = $this->db->get_array($sql, $variable); } While this doesn't return any results: function test2($function_returned_array) { $sql = 'SELECT `name`, `pid` FROM `products` WHERE `name` IN (?)'; $found = $this->db->get_array($sql, $function_returned_array[0]); } $function_returned_array[0] is also equal to 'Hello World'. Shouldn't they both return the same results? When I echo the values of $variable and $function_returned_array[0], they are both 'Hello World'. Here are the relevant parts of my PDO wrapper: public function query(&$query, $params) { $sth = $this->_db->prepare($query); if(is_null($params)) { $sth->execute(); } else if(is_array($params)) { $sth->execute($params); } else { $sth->execute(array($params)); } $this->_rows = $sth->rowCount(); $this->_counter++; return $sth; } public function get_array(&$query, $params, $style = PDO::FETCH_ASSOC) { $q = $this->query($query, $params); return $q->fetchAll($style); } I'm using PHP 5.3.5. Any help would be appreciated. (A note on expanding the IN (...) placeholders follows this entry.)

    Read the article
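
    One thing worth checking when an array-sourced value behaves differently is its exact content (trailing whitespace, encoding) via var_dump() rather than echo. Separately, a single ? only ever binds one value, so IN (?) cannot take a whole array of names; the usual approach is to generate one placeholder per element. A sketch with plain PDO (connection details are placeholders):

        <?php
        // Expand IN (...) to one positional placeholder per element.
        $names = array('Hello World', 'Another Product');   // example values

        $placeholders = implode(',', array_fill(0, count($names), '?'));
        $sql = "SELECT `name`, `pid` FROM `products` WHERE `name` IN ($placeholders)";

        $pdo  = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
        $stmt = $pdo->prepare($sql);
        $stmt->execute(array_values($names)); // one bound value per placeholder
        $found = $stmt->fetchAll(PDO::FETCH_ASSOC);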

  • How to lock a transaction for reading a row and then inserting in Hibernate?

    - by at
    I have a table with a name and a name_count. So when I insert a new record, I first check what the maximum name_count is for that name. I then insert the record with that maximum + 1. Works great... except with mysql 5.1 and hibernate 3.5, by default the reads don't respect transaction boundaries. 2 of these inserts for the same name could happen at the same time and end up with the same name_count, which completely screws my application! Unfortunately, there are some specific situations where the above is actually fairly common. So what do I do? I assume I can do a pessimistic lock where any row I read is locked for further reading until I commit or roll-back my transaction. Or I can do an optimistic lock with a version column that automatically keeps trying until there are no conflicts? What's the best approach for my situation and how do I specify it in Hibernate 3.5 and mysql 5.1? The above table is massive and accessed frequently.

    Read the article

  • Error in inserting data into database

    - by Matthew
    When I try to run this code, it does not insert the data into the database. <?php class Database { private $dsn; function __construct($dbname, $host, $user, $password, $enckey) { $this->dsn = "mysql:dbname=" . $dbname . ';host=' . $host; $this->user = $user; $this->password = $password; } private function createDSN() { return $this->dsn; } public function createConnection() { try { $dbh = new PDO(self::createDSN(), $this->user, $this->password); } catch (PDOException $e) { echo 'Connection failed: ' . $e->getMessage(); } return $dbh; } } $db = new Database('mytest', 'localhost', 'root', 'hashedpassword', null); $dbh = $db->createConnection(); $sql = $dbh->prepare("INSERT INTO contacts (firstname, lastname) VALUES (?,?)"); $sql->execute(array("abc", "xyz")); ?> (A sketch that surfaces the PDO error follows this entry.)

    Read the article
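
    A failed execute() here just returns false without saying why, so one way to find the cause (wrong table name, missing column, privileges, and so on) is to switch the connection into exception mode right after it is created. A sketch reusing the Database class from the question:

        <?php
        // Same setup as the question, but PDO now throws on errors instead of
        // failing silently, so the reason the INSERT does nothing becomes visible.
        $db  = new Database('mytest', 'localhost', 'root', 'hashedpassword', null);
        $dbh = $db->createConnection();
        $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

        try {
            $sql = $dbh->prepare('INSERT INTO contacts (firstname, lastname) VALUES (?, ?)');
            $sql->execute(array('abc', 'xyz'));
        } catch (PDOException $e) {
            echo 'Insert failed: ' . $e->getMessage();
        }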

  • Best Approach for Checking and Inserting Records

    - by nevets1219
    In one of our existing C programs, whose purpose is: Open connection to DB; for record in all_record: if record contains certain data: if record is NOT in table A (check #1): insert record information into tables A and B (step #2); Close connection to DB. Check #1 is a select field from table where field=XXX; step #2 is 2 inserts. This is typically done every X months to sync everything up, or so I'm told. I've also been told that this process takes roughly a couple of days. There are (currently) at most 2.5 million records (though not necessarily all 2.5m will be inserted). One of the tables contains 10 fields and the other 5 fields. There isn't much to be done about iterating through the records since that part can't be changed at the moment. What I would like to do is speed up the part where I query MySQL. I'm not sure if I have left out any important details -- please let me know! I'm also no SQL expert so feel free to point out the obvious. I thought about: Putting all the inserts into a transaction (at the moment I'm not sure how important it is for the transaction to be all-or-none or if this affects performance); Using INSERT X WHERE NOT EXISTS Y (a sketch combining these two follows this entry); LOAD DATA INFILE (but that would require that I create a (possibly) large temp file). I read that (hopefully someone can confirm) I should drop indexes so they aren't re-calculated. mysql Ver 14.7 Distrib 4.1.22, for sun-solaris2.10 (sparc) using readline 4.3

    Read the article
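
    The membership check and the insert can usually be collapsed into a single statement, and wrapping the whole run in one transaction avoids paying a commit per row. The sketch below shows the idea; table and column names are placeholders, PHP is used only to keep the examples here in one language, and the same two statements apply from the C program.

        <?php
        // One conditional insert per record, all inside a single transaction.
        $pdo = new PDO('mysql:host=localhost;dbname=records', 'user', 'pass');
        $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

        $insert = $pdo->prepare(
            'INSERT INTO table_a (field1, field2)
             SELECT :field1, :field2 FROM DUAL
              WHERE NOT EXISTS (SELECT 1 FROM table_a WHERE field1 = :field1_check)'
        );

        $pdo->beginTransaction();
        foreach ($records as $record) {      // $records stands in for the outer loop
            $insert->execute(array(
                ':field1'       => $record['field1'],
                ':field2'       => $record['field2'],
                ':field1_check' => $record['field1'],
            ));
        }
        $pdo->commit();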

  • While in a transaction, how can reads to an affected row be prevented until the transaction is done?

    - by Mahn
    I'm fairly sure this has a simple solution, but I haven't been able to find it so far. Given an InnoDB MySQL database with the isolation level set to SERIALIZABLE, and the following operation: BEGIN WORK; SELECT * FROM users WHERE userID=1; UPDATE users SET credits=100 WHERE userID=1; COMMIT; I would like to make sure that as soon as the SELECT inside the transaction is issued, the row corresponding to userID=1 is locked for reads until the transaction is done. As it stands now, UPDATEs to this row will wait for the transaction to finish if it is in process, but SELECTs will simply read the previous value. I understand this is the expected behaviour in this case, but I wonder if there is a way to lock the row in such a way that SELECTs will also wait until the transaction is finished before returning the values? (A locking-read sketch follows this entry.) The reason I'm looking for that is that at some point, and with enough concurrent users, it could happen that while the previous transaction is in process someone else reads the "credits" to calculate something else. Ideally the code run by that someone else should wait for the transaction to finish and use the new value, because otherwise it could lead to irreversible desync issues. Note that I don't want to lock the entire table for reads, just the specific row. Also, I could add a boolean "locked" field to the tables and set it to 1 every time I'm starting a transaction, but I don't really feel this is the most elegant solution here, unless there is absolutely no other way to handle this through MySQL directly.

    Read the article
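
    With InnoDB, plain consistent reads never block, so the readers that must see the final value have to opt in to a locking read themselves; if both the writer and those readers touch the row with SELECT ... FOR UPDATE (or LOCK IN SHARE MODE), the readers queue until the writer commits. A sketch via PDO, with placeholder connection details:

        <?php
        // Writer: lock the row, change it, commit. Any other session doing a
        // locking read on userID=1 waits here instead of seeing the old value.
        $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
        $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

        $pdo->beginTransaction();
        $stmt = $pdo->prepare('SELECT credits FROM users WHERE userID = :id FOR UPDATE');
        $stmt->execute(array(':id' => 1));
        $credits = $stmt->fetchColumn();

        $pdo->prepare('UPDATE users SET credits = 100 WHERE userID = :id')
            ->execute(array(':id' => 1));
        $pdo->commit(); // waiting readers resume and see credits = 100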
