Search Results

Search found 34274 results on 1371 pages for 'mysql table'.


  • Select value from database and store into a temporary variable

    - by user1616230
    I want to select a stored value from the database and put it into a temporary variable. For example, I have a column called category, and one value under it is m. I want to select this m value from the database, say from a table called user_info, and put it into a variable named $res. After that, I want to do some conditional logic, such as if ($res == "m"). Can anyone help me write a simple structure here? Here is the code:

    ```php
    <?php
    $sql = "Select category FROM user_info WHERE user_name = '" . $_SESSION['username'] . "' and password = '" . $_SESSION['password'] . "'";
    $res = mysql_query($sql);
    if ($res == "a") {
        include('MPIncomeStrategy.php');
    }
    if ($res == "b") {
        include('MPIncomeStrategy.php');
    }
    ```

    But it seems the code is not able to detect $res == "category value in database". Did I use the wrong way to store the category value?
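    The likely problem is that mysql_query() returns a result resource, not the column value, so $res never equals "a" or "b"; the row has to be fetched first. A minimal sketch of the missing fetch step, keeping the question's legacy mysql_* API (and leaving aside the separate issue that interpolating session values into SQL invites injection):

    ```php
    <?php
    $sql = "SELECT category FROM user_info WHERE user_name = '"
         . mysql_real_escape_string($_SESSION['username']) . "'";
    $result = mysql_query($sql);
    $row = mysql_fetch_assoc($result);        // false when no row matched
    $res = $row ? $row['category'] : null;    // now $res holds the column value

    if ($res == "m") {
        include('MPIncomeStrategy.php');
    }
    ```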


  • Rails: User specific sequential column

    - by Alex Marchant
    I have an inventory system where a User has many Inventories. We have a barcode column which needs to be sequential for each user. I run into a problem, however, when doing bulk association building: I end up getting several inventories for a user with the same barcode. For example:

    ```
    Inventory table:
    id | user_id | barcode
     1 |       1 |       1
     2 |       1 |       2
     3 |       2 |       1
     4 |       2 |       2
     5 |       1 |       3
    ```

    In the Inventory model I have:

    ```ruby
    before_validation :assign_barcode, on: :create

    def assign_barcode
      self.barcode = (user.inventories.order(barcode: :desc).first.try(:barcode) || 0) + 1
    end
    ```

    It generally works, but I ran into a problem when seeding my db:

    ```ruby
    (1..5).each do
      user.inventories.build(...)
    end
    user.save
    ```

    I end up with a bunch of inventories for the user that have the same barcode. How can I ensure that inventories have unique barcodes even when adding inventories in bulk?
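    Whatever the Ruby side does, a race like this is usually closed at the database: let MySQL reject duplicates outright, and optionally compute the next barcode in a single atomic statement instead of reading the max into the application first. A sketch (index name illustrative):

    ```sql
    -- any duplicate (user_id, barcode) now fails instead of being saved
    ALTER TABLE inventories
      ADD UNIQUE KEY idx_user_barcode (user_id, barcode);

    -- assigning the next barcode in one statement avoids the read-then-write gap
    INSERT INTO inventories (user_id, barcode)
    SELECT 1, COALESCE(MAX(barcode), 0) + 1
    FROM inventories
    WHERE user_id = 1;
    ```

    With the unique key in place, the before_validation callback can stay as-is; a colliding bulk insert raises an error that the seeding code can catch and retry.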


  • R statistics, change ranked tables to paired

    - by cousin_pete
    I have data for many tables like:

    ```
    event_id  player  finish
           1       a       1
           1       b       2
           1       c       3
           1       d       4
           2       b       1
           2       e       2
           2       f       3
           2       a       3
           2       g       5
    ```

    There are many event_ids, each with 5 to 20 players; finish may be tied. In order to use conditional logistic regression in R, I would like to reformat the tables to be like:

    ```
    event_id  player1  player2  result
           1        a        b       1
           1        a        c       1
           1        a        d       1
           1        b        c       1
           1        b        d       1
           1        c        d       1
           2        b        e       1
           2        b        f       1
           2        b        a       1
           2        b        g       1
           2        e        f       1
           2        e        a       1
           2        e        g       1
           2        f        a     0.5
           2        f        g       1
           2        a        g       1
    ```

    An event_id with 4 players will have 4*3/2 = 6 records in the new table, 5 players will have 5*4/2 = 10 records, and so on. If player "a" has a finish less than player "b", the "result" is 1. If the finishes are equal, the "result" is 0.5. If player "a" has a finish greater than player "b", then the "result" would be 0. Any help appreciated!
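    Given this page's MySQL theme, it is worth noting the pairing can also be produced with a self-join before the data ever reaches R. A sketch, assuming the rows live in a table results(event_id, player, finish); it emits each unordered pair once, in alphabetical player order, which carries the same information as the sample output:

    ```sql
    SELECT a.event_id,
           a.player AS player1,
           b.player AS player2,
           CASE WHEN a.finish < b.finish THEN 1
                WHEN a.finish = b.finish THEN 0.5
                ELSE 0
           END AS result
    FROM results a
    JOIN results b
      ON a.event_id = b.event_id
     AND a.player < b.player      -- each pair exactly once
    ORDER BY a.event_id, a.player, b.player;
    ```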


  • Dealing with huge SQL resultset

    - by Dave McClelland
    I am working with a rather large MySQL database (several million rows) with a column storing blob images. The application grabs a subset of the images and runs some processing algorithms on them. The problem I'm running into is that, due to the rather large dataset I have, the resultset my query returns is too large to store in memory. For the time being, I have changed the query to not return the images; while iterating over the resultset, I run another SELECT which grabs the individual image that relates to the current record. This works, but the tens of thousands of extra queries have resulted in an unacceptable performance decrease. My next idea is to limit the original query to 10,000 results or so, and then keep querying over spans of 10,000 rows. This seems like a middle-of-the-road compromise between the two approaches. I feel there is probably a better solution that I am not aware of. Is there another way to hold only portions of a gigantic resultset in memory at a time? Cheers, Dave McClelland
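    A common alternative to OFFSET-style paging is keyset pagination: remember the last primary key seen and ask for the next chunk above it, so MySQL never has to skip over rows it already returned. A minimal sketch using the legacy mysql_* API of this page's era (the table, columns, and process_image() hook are illustrative):

    ```php
    <?php
    $lastId = 0;
    $chunk  = 10000;

    do {
        $sql = sprintf(
            "SELECT id, image FROM images WHERE id > %d ORDER BY id LIMIT %d",
            $lastId, $chunk
        );
        $result = mysql_query($sql);
        $count = 0;
        while ($row = mysql_fetch_assoc($result)) {
            process_image($row['image']);   // hypothetical processing step
            $lastId = (int) $row['id'];
            $count++;
        }
        mysql_free_result($result);         // frees each chunk before the next
    } while ($count === $chunk);
    ```

    Unlike LIMIT with a growing OFFSET, each iteration is an index range scan starting at $lastId, so the cost per chunk stays flat as you move through the table.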


  • Wordpress SQL Select Multiple Meta Values / Meta Keys / Custom Fields

    - by Wes
    I am trying to modify a WordPress/MySQL function to display a little more information. I'm currently running the following query, which selects the post, joins the postmeta table, and gets the rows where meta_key = '_liked':

    ```php
    function most_liked_posts($numberOf, $before, $after, $show_count) {
        global $wpdb;
        $request = "SELECT ID, post_title, meta_value FROM $wpdb->posts, $wpdb->postmeta";
        $request .= " WHERE $wpdb->posts.ID = $wpdb->postmeta.post_id";
        $request .= " AND post_status='publish' AND post_type='post' AND meta_key='_liked' ";
        $request .= " ORDER BY $wpdb->postmeta.meta_value+0 DESC LIMIT $numberOf";
        $posts = $wpdb->get_results($request);
        foreach ($posts as $post) {
            $post_title = stripslashes($post->post_title);
            $permalink = get_permalink($post->ID);
            $post_count = $post->meta_value;
            echo $before.'<a href="' . $permalink . '" title="' . $post_title.'" rel="nofollow">' . $post_title . '</a>';
            echo $show_count == '1' ? ' ('.$post_count.')' : '';
            echo $after;
        }
    }
    ```

    The important part being:

    ```php
    $post_count = $post->meta_value;
    ```

    But now I also want to grab a value attached to each post called wbphoto. How do I specify that $post_count = _liked and $photo = wbphoto? Here is a screen cap of my phpMyAdmin.
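    One standard approach is to join the postmeta table twice, once per meta key, with an alias for each join. A sketch, assuming the photo value is stored under the meta key wbphoto:

    ```php
    $request  = "SELECT p.ID, p.post_title, liked.meta_value AS liked_count, photo.meta_value AS wbphoto";
    $request .= " FROM $wpdb->posts p";
    $request .= " JOIN $wpdb->postmeta liked ON liked.post_id = p.ID AND liked.meta_key = '_liked'";
    $request .= " LEFT JOIN $wpdb->postmeta photo ON photo.post_id = p.ID AND photo.meta_key = 'wbphoto'";
    $request .= " WHERE p.post_status = 'publish' AND p.post_type = 'post'";
    $request .= " ORDER BY liked.meta_value+0 DESC LIMIT $numberOf";

    $posts = $wpdb->get_results($request);
    foreach ($posts as $post) {
        $post_count = $post->liked_count;
        $photo      = $post->wbphoto;   // NULL when the post has no wbphoto row
        // ... build the link markup as before
    }
    ```

    The LEFT JOIN is deliberate: posts without a wbphoto row still appear, with $photo set to NULL.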


  • Setting up an efficient and effective development process

    - by christopher-mccann
    I am in the midst of setting up the development environment (PHP/MySQL) for my start-up. We use three sets of servers:

    LIVE - the servers which provide the actual application
    TEST - providing a testing version before it is actually released
    DEV - the development servers

    The development servers run SVN, with each developer checking out a local copy. At the end of each day, completed fixes are checked in, and we use Hudson to automate our build process and transfer it over to TEST. We then check that the application still functions correctly using a tester and, if everything is fine, move it to LIVE. I am happy with this process, but I do have two questions:

    1. How would you recommend we do local testing? As each developer adds new pages or changes functionality, I want them to be able to test what they are doing. Would you just set up a local Apache and a local database and have them test locally on their own machines?
    2. How would you recommend dealing with data layer changes?

    Is there anything else you would recommend doing to really make our development process as easy and efficient as possible? Thanks in advance


  • What can be the causes of an HTTP server crash?

    - by mithunmo
    Hello, I am using WAMP server on Windows XP:

    - Apache 2.2.11
    - MySQL 5.1.36 (InnoDB engine)
    - PHP 5.3.0

    I observe that my WAMP server crashes in the following scenarios:

    1) If I use a low-end PC (low processor speed and low RAM)
    2) After making some changes to the httpd.conf file, e.g. changing the Allow from IP address. But here it crashes only once and then it starts to work fine.
    3) Random crashes

    Crash log:

    ```
    szAppName : httpd.exe
    szAppVer : 2.2.11.0
    szModName : php5ts.dll
    szModVer : 5.3.0.0
    offset : 0000c309
    C:\DOCUME~1\blrcom\LOCALS~1\Temp\WERc677.dir00\httpd.exe.mdmp
    C:\DOCUME~1\blrcom\LOCALS~1\Temp\WERc677.dir00\appcompat.txt
    ```

    My questions:

    1) Can high CPU utilization or low RAM also cause the HTTP server to crash?
    2) Can excessive file reading, as in every 10 seconds?
    3) Unlimited script execution time: I have set the maximum execution time in my PHP script to 0, as the script sometimes has to execute for 2-3 days. Is there any way to avoid this?
    4) Access to the database: should we use a lock before reading and writing?

    Can these be the reasons for random WAMP server crashes, or is it some other programming error? Please guide me. Regards, Mithun
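    On question 3: a script that runs for days generally should not live inside an Apache worker at all; one common pattern is to move it to the PHP command line, so a crash in the long job cannot take the web server with it. A minimal sketch (do_unit_of_work() is a hypothetical placeholder for the real job step):

    ```php
    <?php
    // run with: php worker.php   (CLI, not through Apache)
    set_time_limit(0);                  // no execution-time cap
    ini_set('memory_limit', '256M');    // an explicit, predictable ceiling

    while (true) {
        do_unit_of_work();              // hypothetical job step
        sleep(10);                      // the 10-second polling interval mentioned above
    }
    ```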


  • How to best handle exception to repeating calendar events

    - by blcArmadillo
    I'm working on a project that will require me to implement a calendar. I'm trying to come up with a system that is very flexible: it can handle repeating events, exceptions to repeats, etc. I've looked at the schemas of applications like iCal, Lotus Notes, and Mozilla to get an idea of how to go about implementing such a system. Currently I'm having trouble deciding the best way to handle exceptions to repeating events. I've used databases quite a bit but don't have a ton of experience with optimization, so I'm not sure which of the two methods I'm considering would be optimal in terms of overall performance and ability to query/search:

    1. Breaking the repeating event: changing the end date on the current row for the repeating event, inserting a new row with the exception, and adding another row continuing the old sequence.
    2. Simply adding an exception: adding a new row with some field that marks it as an override.

    Here is why I can't decide: method one will result in a lot more rows, since each edit requires two extra rows as opposed to only one row with the second method. On the other hand, I think the query to find an event would be much simpler, and thus possibly faster(?), using the first method. The second method seems like it will require more computation on the application server, since once you get the data you'll have to remove the intersection of the two rows. I know databases are often the bottleneck for websites, and while I'm sure a lot of you are thinking either is fine because the project will probably never get large enough for the difference in efficiency to really matter, I'd still like to implement the best solution. So which method would you pick, or would you do something completely different? Also, as a side note, I'll be using MySQL and PHP. If there is another technology that you think would be better suited for this, especially in the database area, please mention it. Thanks for the advice.
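    For reference, a minimal sketch of what the override approach (method two) can look like in MySQL; all names are illustrative, not taken from any of the applications mentioned:

    ```sql
    CREATE TABLE events (
        id        INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
        title     VARCHAR(255) NOT NULL,
        start_at  DATETIME NOT NULL,
        end_at    DATETIME NOT NULL,
        rrule     VARCHAR(255) NULL            -- recurrence rule; NULL = one-off event
    );

    CREATE TABLE event_overrides (
        id         INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
        event_id   INT UNSIGNED NOT NULL,
        occurs_on  DATE NOT NULL,              -- which occurrence is overridden
        new_start  DATETIME NULL,              -- NULL start + cancelled = deleted occurrence
        new_end    DATETIME NULL,
        cancelled  TINYINT(1) NOT NULL DEFAULT 0,
        UNIQUE KEY uq_event_occurrence (event_id, occurs_on),
        FOREIGN KEY (event_id) REFERENCES events(id)
    );
    ```

    Expanding a date range then becomes: generate occurrences from rrule, then apply any matching override rows, which is exactly the application-side intersection work the question anticipates.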


  • Using Hibernate's ScrollableResults to slowly read 90 million records

    - by at
    I simply need to read each row in a table in my MySQL database using Hibernate and write a file based on it. But there are 90 million rows and they are pretty big. So it seemed like the following would be appropriate:

    ```java
    ScrollableResults results = session.createQuery("SELECT person FROM Person person")
        .setReadOnly(true).setCacheable(false).scroll(ScrollMode.FORWARD_ONLY);
    while (results.next())
        storeInFile(results.get()[0]);
    ```

    The problem is that the above will try to load all 90 million rows into RAM before moving on to the while loop... and that kills my memory with OutOfMemoryError: Java heap space exceptions :(. So I guess ScrollableResults isn't what I was looking for? What is the proper way to handle this? I don't mind if this while loop takes days (well, I'd love it not to). I guess the only other way to handle this is to use setFirstResult and setMaxResults to iterate through the results and just use regular Hibernate results instead of ScrollableResults. That feels like it will be inefficient, though, and will start taking a ridiculously long time when I'm calling setFirstResult on the 89 millionth row...

    UPDATE: setFirstResult/setMaxResults doesn't work; it turns out to take an unusably long time to get to the offsets, as I feared. There must be a solution here! Isn't this a pretty standard procedure?? I'm willing to forgo Hibernate and use JDBC or whatever it takes.

    UPDATE 2: the solution I've come up with, which works OK but not great, is basically of the form:

    ```sql
    select * from person where id > <offset> and <other_conditions> limit 1
    ```

    Since I have other conditions, even all in an index, it's still not as fast as I'd like it to be... so I'm still open to other suggestions.
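    One refinement of the UPDATE 2 approach that often helps: pull the rows in ordered batches keyed on the id rather than one at a time, so each round trip is a single index range scan returning many rows. A sketch:

    ```sql
    -- repeat, substituting the last id seen for :last_id,
    -- until fewer than 1000 rows come back
    SELECT *
    FROM person
    WHERE id > :last_id
      AND <other_conditions>
    ORDER BY id
    LIMIT 1000;
    ```

    The explicit ORDER BY id matters: without it, "id > :last_id ... LIMIT" is not guaranteed to hand back the next contiguous block, and the loop could skip rows.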


  • Extra fulltext ordering criteria beyond default relevance

    - by Jeremy Warne
    I'm implementing an ingredient text search, for adding ingredients to a recipe. I currently have a full-text index on the ingredient name, which is stored in a single text field, like so: "Sauce, tomato, lite, Heinz". I've found that because there are a lot of ingredients with very similar names in the database, simply sorting by relevance doesn't work that well a lot of the time. So I've found myself sorting by a bunch of my own rules of thumb, which probably duplicates a lot of the full-text search algorithm that spits out a numerical relevance. For instance (abridged):

    ```sql
    ORDER BY [ingredient name is exactly search term],
             [ingredient name starts with search term],
             [ingredient name starts with any word from the search and contains all search terms in some order],
             [ingredient name contains all search terms in some order],
             ...
    ```

    Each of these is defined in the SELECT specification as an expression returning either 1 or 0, and I order by them in sequence. I would love to hear suggestions for:

    1. A better way to define complicated ORDER BY criteria in one place, say in a view or stored procedure that you can pass just the search term to and get back a set of results without having to worry about how they're ordered?
    2. A better tool for this than MySQL's fulltext engine: perhaps if I was using Sphinx or something [which I've heard of but not used before], would I find some sort of config option designed to solve problems like this?
    3. Some Google search terms which might turn up discussion on how to order text items within a specific domain like this? I haven't found much that's of use.

    Thanks for reading!
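    On suggestion 1: in MySQL, boolean comparisons evaluate to 1 or 0, so the rule-of-thumb tiers can sit directly in ORDER BY ahead of the fulltext relevance, and the whole statement can be wrapped in a stored procedure that takes the term (MySQL views cannot take parameters). A sketch, with :term standing in for the search string:

    ```sql
    SELECT i.*
    FROM ingredients i
    WHERE MATCH(i.name) AGAINST (:term)
    ORDER BY (i.name = :term) DESC,                     -- exact match first
             (i.name LIKE CONCAT(:term, '%')) DESC,     -- then prefix matches
             MATCH(i.name) AGAINST (:term) DESC;        -- then raw relevance
    ```

    Repeating the MATCH() expression in the ORDER BY is fine; MySQL recognizes the identical expression rather than computing it twice. This kind of tiered boost is also close to what Sphinx's field weights and ranking modes are designed for.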


  • Reading,Writing, Editing BLOBS through DataTables and DataRows

    - by Soham
    Consider this piece of code:

    ```csharp
    DataSet ds = new DataSet();
    SQLiteDataAdapter Da = new SQLiteDataAdapter(Command);
    Da.Fill(ds);
    DataTable dt = ds.Tables[0];
    bool PositionExists;
    if (dt.Rows.Count > 0) {
        PositionExists = true;
    } else {
        PositionExists = false;
    }
    if (PositionExists) {
        //dt.Rows[0].Field<>("Date")
    }
    ```

    Here the "Date" field is a BLOB. My questions:

    a. Will reading through the DataAdapter create any problems later on when I am working with BLOBs? More specifically, will it read the BLOB properly?
    b. That was the read part. Now, when writing the BLOB to the DB: it is actually a queue, i.e. I am trying to store a queue in SQLite using a BLOB. Will Conn.ExecuteNonQuery() serve my purpose?
    c. When I am reading the BLOB back from the DB, can I edit it as the original datatype it used to be in the C# environment (Queue - BLOB - Queue, i.e. C# - MySQL - C#)? So in this context, the Date field was a queue, and I wrote it back as a BLOB; when reading it back, can I access (and edit) it as a queue?

    Thank You. Soham


  • [Repost-ish] Impossibly slow queries, Tables indexed, How can I speed it up?

    - by colorfulgrayscale
    Hi guys, I posted a little earlier at http://stackoverflow.com/questions/2656837/query-results-taking-too-long-on-200k-database-speed-up-tips asking about slow-executing SQL queries. I was told to index the columns; I did, and it's still slow (slow as in I never see the results; both MySQL and SQLite freeze up on the query). Help would be greatly appreciated. Here is the SQL:

    ```sql
    SELECT equipment.`unitID` AS `equipment_unitID`,
           equipment.`fleetCode` AS `equipment_fleetCode`,
           equipment.type AS equipment_type,
           equipment.tiremap AS equipment_tiremap,
           tiremap.`TireID` AS `tiremap_TireID`,
           tiremap.`WorkMap` AS `tiremap_WorkMap`,
           tiremap.`Position` AS `tiremap_Position`,
           tiremap.`DepthMap` AS `tiremap_DepthMap`,
           tiremap.timestamp AS tiremap_timestamp,
           workreference.`aMap` AS `workreference_aMap`,
           workreference.`bMap` AS `workreference_bMap`,
           tirework.`RO` AS `tirework_RO`,
           tirework.location AS tirework_location,
           tirework.mileage AS tirework_mileage,
           tirework.`mechanicCode` AS `tirework_mechanicCode`,
           tirework.`partNumber` AS `tirework_partNumber`,
           tirework.`historyID` AS `tirework_historyID`,
           tirework.workmap AS tirework_workmap,
           tirework.timestamp AS tirework_timestamp
    FROM equipment, tiremap, workreference, tirework
    WHERE equipment.tiremap = tiremap.`TireID`
      AND tiremap.`WorkMap` = workreference.`aMap`
      AND workreference.`bMap` = tirework.workmap
    LIMIT 5
    ```

    and here is the EXPLAIN for it:

    ```
    id  select_type  table          type    possible_keys                     key        key_len  ref                      rows   Extra
    1   SIMPLE       equipment      ALL     tiremap                                               	                       14079
    1   SIMPLE       tiremap        ref     PRIMARY,WorkMap,TireID,WorkMap_2  PRIMARY    52       tire.equipment.tiremap   3
    1   SIMPLE       workreference  ref     aMap,bMap                         aMap       52       tire.tiremap.WorkMap     1
    1   SIMPLE       tirework       eq_ref  NewIndex1                         NewIndex1  52       tire.workreference.bMap  1
    ```
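    Two observations on that plan, plus a hedged rewrite. The three inner tables already use keys (ref/eq_ref), and the type ALL scan on equipment is just the driving table, which LIMIT 5 should cut short, so a freeze suggests something is keeping those index lookups from behaving as the plan implies. A reasonable first step is the equivalent explicit-JOIN form, which is easier to re-EXPLAIN as indexes change, together with a check that the joined columns share the same type and collation (a mismatch silently disables index use):

    ```sql
    SELECT e.`unitID`, t.`TireID`, w.`aMap`, tw.`RO`   -- abridged column list
    FROM equipment e
    JOIN tiremap t       ON e.tiremap   = t.`TireID`
    JOIN workreference w ON t.`WorkMap` = w.`aMap`
    JOIN tirework tw     ON w.`bMap`    = tw.workmap
    LIMIT 5;
    ```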


  • Converting an input text value to a decimal number

    - by vitto
    Hi, I'm trying to work with decimal data in my PHP and MySQL practice, and I'm not sure how to reach an acceptable level of accuracy. I've written a simple function which receives my input text value and converts it to a decimal number ready to be stored in the database:

    ```php
    <?php
    function unit ($value, $decimal_point = 2) {
        return number_format (str_replace (",", ".", strip_tags (trim ($value))), $decimal_point);
    }
    ?>
    ```

    I've handled input like AbdlBsF5%?nl with some jQuery code to replace characters, plus some regex to keep only numbers, dots, and commas. In some countries, people use the comma for decimal numbers, so a number like 72.08 is written 72,08. I'd like to avoid forcing people to change their usual characters, and I've decided to use jQuery to support this too. Now, every web developer knows the final validation must be handled by the dynamic page for security reasons. So my question is: should I use something like the unit() function to store data, or should I also check whether users insert invalid chars like letters or something else? If I try this and send letters, the query runs without saving the invalid data. I think this isn't bad, but I could easily be wrong because I'm a rookie. What kind of method should I use if I want a number like 99999.99 for my query?
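    One common server-side pattern, sketched below with an illustrative to_decimal() helper: normalise the comma, then validate the result strictly and reject bad input instead of silently mangling it. A MySQL column type of DECIMAL(7,2) matches the 99999.99 ceiling exactly.

    ```php
    <?php
    function to_decimal($value, $max = 99999.99) {
        $value = str_replace(',', '.', trim($value));      // accept 72,08 as 72.08
        // exactly 1-5 digits, optionally a dot and up to two decimals
        if (!preg_match('/^\d{1,5}(\.\d{1,2})?$/', $value)) {
            return false;                                  // caller reports the error
        }
        $number = (float) $value;
        return ($number <= $max) ? number_format($number, 2, '.', '') : false;
    }

    // usage
    $price = to_decimal(isset($_POST['price']) ? $_POST['price'] : '');
    if ($price === false) {
        // show a validation message instead of running the query
    }
    ```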


  • Enormous data and PHP errors

    - by salamis
    I am currently using HighStock charts with data grouping (http://www.highcharts.com/stock/demo/data-grouping) in order to display the data returned from the server. We retrieve the data from a MySQL database, and it is really big: we are storing sensor metrics every 1 second. After a while we got the following error:

    ```
    [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 4756882 bytes) in C:\\wamp\\www\\admin\\getTrends.php on line 156, referer: http://localhost/admin/trends.php
    [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP Stack trace:, referer: http://localhost/admin/trends.php
    [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP 1. {main}() C:\\wamp\\www\\admin\\getTrends.php:0, referer: http://localhost/admin/trends.php
    [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP 2. getTrendsDataAI() C:\\wamp\\www\\admin\\getTrends.php:33, referer: http://localhost/admin/trends.php
    [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP 3. printResults() C:\\wamp\\www\\admin\\getTrends.php:102, referer: http://localhost/admin/trends.php
    [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP 4. createData() C:\\wamp\\www\\admin\\getTrends.php:230, referer: http://localhost/admin/trends.php
    [Wed Sep 12 00:15:56 2012] [error] [client 127.0.0.1] PHP 5. implode() C:\\wamp\\www\\admin\\getTrends.php:156, referer: http://localhost/admin/trends.php
    ```

    What is the best way to return this data as a JSON object to HighStock for viewing, and how can we overcome the PHP limitation? Should we return a chunk of data each time? How do people usually present enormous amounts of data to users and build charts and reports from it? Another big problem we need to overcome is that the returned JSON object is enormous: around 20-30 MB at this point, and it will be much larger in the future. Is it OK to return this data to the user and perform everything client-side? Any suggestions or thoughts welcome.
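    Since the chart is grouping points anyway, one widely used fix is to downsample in MySQL before the data ever reaches PHP, e.g. one averaged point per minute instead of sixty raw ones. A sketch, assuming an illustrative readings(ts, value) table:

    ```sql
    SELECT FLOOR(UNIX_TIMESTAMP(ts) / 60) * 60 AS bucket,  -- minute bucket, epoch seconds
           AVG(value) AS avg_value,
           MIN(value) AS min_value,
           MAX(value) AS max_value
    FROM readings
    WHERE ts BETWEEN :from AND :to
    GROUP BY bucket
    ORDER BY bucket;
    ```

    That turns a month of one-second samples (~2.6 million rows) into roughly 43,000 points, and the bucket size can be chosen from the zoom level the chart requests, which keeps the JSON payload small at every scale.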


  • Ibatis startBatch() only works with SqlMapClient's own start and commit transactions, not with Spring-managed transactions

    - by Brian
    Hi, I'm finding that even though I have code wrapped in Spring transactions, and it commits/rolls back when I would expect, in order to make use of JDBC batching when using Ibatis and Spring I need to use explicit SqlMapClient transaction methods. I.e. this does batching as I'd expect:

    ```java
    dao.getSqlMapClient().startTransaction();
    dao.getSqlMapClient().startBatch();
    int i = 0;
    for (MyObject obj : allObjects) {
        dao.storeChange(obj);
        i++;
        if (i % DB_BATCH_SIZE == 0) {
            dao.getSqlMapClient().executeBatch();
            dao.getSqlMapClient().startBatch();
        }
    }
    dao.getSqlMapClient().executeBatch();
    dao.getSqlMapClient().commitTransaction();
    ```

    but if I don't have the opening and closing transaction statements and rely on Spring to manage things (which is what I want to do!), batching just doesn't happen. Given that Spring does otherwise seem to be handling its side of the bargain regarding transaction management, can anyone advise on any known issues here? (The database is MySQL; I'm aware of the issues regarding its JDBC pseudo-batch approach with INSERT statement rewriting; that's definitely not an issue here.)


  • Database design for sharing photos site?

    - by javaLearner.java
    I am using PHP and MySQL. If I want to build a photo-sharing website, what's the best database design for uploading and displaying photos? This is what I have in mind:

    ```
    domain
    |_> photos
    |_> user
    ```

    A logged-in user will upload photos at [http://www.domain.com/user/upload.php]. The photos are stored in the filesystem, and the path to each photo is stored in the database. So my photos folder would look like: photos/userA/subfolders+photos, photos/userB/subfolders+photos, photos/userC/subfolders+photos, etc. Other people may view a photo at [http://www.domain.com/photos/user/?photoid=123], where 123 is the photo id; from there, I query the database to fetch the path and display the image. My questions:

    1. What's the best database design for a photo-sharing website (like Flickr)?
    2. Will there be any problems if I keep creating new folders in the "photos" folder? What if hundreds of thousands of users register? What are the best practices?
    3. What size of photos should I keep? Currently I only store a thumbnail (100x100) and (max) 1600x1200 px photos.
    4. What other things should I take note of when developing a photo-sharing website?
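    On question 2: many filesystems degrade when a single directory accumulates a huge number of entries, so a common pattern is to shard the storage path by a hash of the photo id instead of by username. A sketch (photo_path() is an illustrative helper):

    ```php
    <?php
    // e.g. id 123 whose md5 starts "3f2a..." is stored under photos/3f/2a/123.jpg
    function photo_path($photoId) {
        $hash = md5((string) $photoId);
        return sprintf('photos/%s/%s/%d.jpg',
            substr($hash, 0, 2),   // 256 first-level directories
            substr($hash, 2, 2),   // 256 second-level directories
            $photoId
        );
    }
    ```

    With two hash levels there are 65,536 leaf directories, so even millions of photos leave each directory small; the database row then only needs the id, the owner, and metadata, since the path is derived from the id.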


  • Make seems to think a prerequisite is an intermediate file, removes it

    - by James
    For starters, this exercise in GNU make was admittedly just that: an exercise rather than a practicality, since a simple bash script would have sufficed. However, it brought up interesting behavior I don't quite understand. I've written a seemingly simple Makefile to handle generation of SSL key/cert pairs as necessary for MySQL. My goal was for make <name> to result in <name>-key.pem, <name>-cert.pem, and any other necessary files (specifically, the CA pair if any of it is missing or needs updating, which leads into another interesting follow-up exercise of handling reverse deps to reissue any certs that had been signed by a missing/updated CA cert). After executing all rules as expected, make seems to be too aggressive at identifying intermediate files for removal; it removes a file I thought would be "safe", since it should have been generated as a prereq of the main rule I'm invoking. (Humbly translated: I have likely misinterpreted make's documented behavior to suit my expectation, but don't understand how. ;-)

    Edited (thanks, Chris!): Adding %-cert.pem to .PRECIOUS does, of course, prevent the deletion. (I had been using the wrong syntax.)

    Makefile:

    ```make
    OPENSSL = /usr/bin/openssl    # Corrected, thanks Chris!

    .PHONY: clean

    default: ca

    clean:
    	rm -I *.pem

    %: %-key.pem %-cert.pem
    	@# Placeholder (to make this implicit rule create and not cancel one)

    Makefile:
    	@# Prevent the catch-all from matching Makefile

    ca-cert.pem: ca-key.pem
    	$(OPENSSL) req -new -x509 -nodes -days 1000 -key ca-key.pem > $@

    %-key.pem:
    	$(OPENSSL) genrsa 2048 > $@

    %-csr.pem: %-key.pem
    	$(OPENSSL) req -new -days 1000 -nodes -key $< > $@

    %-cert.pem: %-csr.pem ca-cert.pem ca-key.pem
    	$(OPENSSL) x509 -req -in $< -days 1000 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 > $@
    ```

    Output:

    ```
    $ make host1
    /usr/bin/openssl genrsa 2048 > ca-key.pem
    /usr/bin/openssl req -new -x509 -nodes -days 1000 -key ca-key.pem > ca-cert.pem
    /usr/bin/openssl genrsa 2048 > host1-key.pem
    /usr/bin/openssl req -new -days 1000 -nodes -key host1-key.pem > host1-csr.pem
    /usr/bin/openssl x509 -req -in host1-csr.pem -days 1000 -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 > host1-cert.pem
    rm host1-csr.pem host1-cert.pem
    ```

    This is driving me crazy, and I'll happily try any suggestions and post results. If I'm just totally noobing out on this one, feel free to jibe away. You can't possibly hurt my feelings. :)


  • Setting up a multi-site CMS, collecting thoughts about the DB schema

    - by Ben Fransen
    Hello all, I'm collecting some thoughts about creating a multi-site CMS. In my opinion there are two major approaches:

    1. All data is stored in one database, giving me the advantage of a single point of updates.
    2. Separate databases, so each client has its own database, giving me the advantage of measuring bandwidth.

    Option 1 has the disadvantage of making bandwidth hard to measure, while option 2 gives up the single-point-of-update structure. Are there any generic approaches for creating a sort of update system, so my clients can download a small package (maybe a zip with a conf file to tell the update script where to put all the files and how to extend the database)? Do you have any thoughts about the best solution for a situation like this? I have my own web server, full access to all resources, and I'm developing in PHP with MySQL as the DBMS. I hope to hear from you, and I surely appreciate any effort you make to help me further! Greets from Holland, Ben Fransen
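    One generic pattern for those update packages is a schema_version table in each client database plus numbered migration scripts shipped inside the zip; the update script applies everything newer than the client's current version. A minimal sketch (the table name and file layout are illustrative):

    ```php
    <?php
    // migrations/001.sql, 002.sql, ... shipped inside the update zip
    $res = mysql_query("SELECT COALESCE(MAX(version), 0) FROM schema_version");
    $current = (int) mysql_result($res, 0);

    foreach (glob('migrations/*.sql') as $file) {
        $version = (int) basename($file, '.sql');
        if ($version <= $current) {
            continue;                         // already applied on this client
        }
        mysql_query(file_get_contents($file)) or die(mysql_error());
        mysql_query("INSERT INTO schema_version (version) VALUES ($version)");
    }
    ```

    One caveat: mysql_query() runs a single statement, so each migration file holds one statement, or the loop has to split files on a delimiter first. glob() returns names sorted, which is why the zero-padded numbering matters.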


  • I am using the following PHP code for trigger creation but always get an error; please help me resolve it

    - by Parth
    I am using the following PHP code for trigger creation but always get an error; please help me resolve it.

    ```php
    $link = mysql_connect('localhost','root','rainserver');
    mysql_select_db('information_schema');
    echo $trgquery = "DELIMITER $$
    DROP TRIGGER `update_data` $$
    CREATE TRIGGER `update_data` AFTER UPDATE on `jos_menu`
    FOR EACH ROW
    BEGIN
      IF (NEW.menutype != OLD.menutype) THEN
        INSERT INTO jos_menuaudit SET menuid=OLD.id, oldvalue = OLD.menutype, newvalue = NEW.menutype, field = 'menutype';
      END IF;
      IF (NEW.name != OLD.name) THEN
        INSERT INTO jos_menuaudit SET menuid=OLD.id, oldvalue = OLD.name, newvalue = NEW.name, field = 'name';
      END IF;
      IF (NEW.alias != OLD.alias) THEN
        INSERT INTO jos_menuaudit SET menuid=OLD.id, oldvalue = OLD.alias, newvalue = NEW.alias, field = 'alias';
      END IF;
    END$$
    DELIMITER ;";
    echo "<br>";
    //$trig = mysqli_query($link,$trgquery) or die("Error Exist".mysqli_error($link));
    $trig = mysql_query($trgquery) or die("Error Exist".mysql_error());
    ```

    I get the error:

    ```
    Error ExistYou have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '$$ DROP TRIGGER `update_data` $$ CREATE TRIGGER `update_data` AFTER UPDATE on `j' at line 1
    ```

    Please help me to create my trigger...
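    That error is the classic symptom of sending DELIMITER through the API: DELIMITER is a command of the mysql command-line client, not server SQL, so mysql_query() rejects it. Since mysql_query() only runs one statement per call anyway, no delimiter juggling is needed; send the DROP and the CREATE as two separate calls. A sketch (the abridged trigger body keeps one IF block; the other two go in the same BEGIN...END):

    ```php
    <?php
    $link = mysql_connect('localhost', 'root', 'rainserver');
    mysql_select_db('joomla_db');   // the database that owns jos_menu, not information_schema

    // statement 1: drop the old trigger; IF EXISTS avoids an error on first run
    mysql_query("DROP TRIGGER IF EXISTS `update_data`") or die(mysql_error());

    // statement 2: the whole CREATE TRIGGER goes through as a single statement
    $create = "CREATE TRIGGER `update_data` AFTER UPDATE ON `jos_menu`
    FOR EACH ROW
    BEGIN
      IF (NEW.menutype != OLD.menutype) THEN
        INSERT INTO jos_menuaudit SET menuid = OLD.id,
          oldvalue = OLD.menutype, newvalue = NEW.menutype, field = 'menutype';
      END IF;
    END";
    mysql_query($create) or die(mysql_error());
    ```

    Selecting information_schema as the current database is also worth fixing: the trigger should be created in the schema that holds jos_menu.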


  • Optimize inserts

    - by ikerib
    Hi! I wrote an importer in VB.NET which gets data from SQL Server and inserts it, over an ADSL connection, into a remote MySQL server. At first there were around 200 records, but now there are more than 500,000 records, and exporting all the data takes around 11 hours, which is bad, very bad. I need to optimize my importer. It currently reads the data into a DataTable, and then a function loops row by row, inserting the data with an "INSERT INTO" query, like this:

    ```vbnet
    For Each dr As DataRow In dt.Rows
        Console.Write(".")
        Dim sql As String = "INSERT INTO clientes(id,nombrefis,nombrecom,direccion,codpos,municipio_id,telefono,fax,cif)" & _
                            "VALUES (@id,@nombrefis,@nombrecom,@direccion,@codpos,@municipio_id,@telefono,@fax,@cif)"
        cmd = New MySqlCommand(sql, cnn)
        cmd.Parameters.AddWithValue("id", Int32.Parse(dr("ID EMPRESA").ToString))
        cmd.Parameters.AddWithValue("nombrefis", dr("NOMEMP"))
        cmd.Parameters.AddWithValue("nombrecom", dr("EMPRESA"))
        cmd.Parameters.AddWithValue("direccion", dr("DIRECC"))
        cmd.Parameters.AddWithValue("codpos", dr("CODPOS"))
        cmd.Parameters.AddWithValue("municipio_id", Int32.Parse(dr("CODIGO MUNICIPIO")).ToString)
        cmd.Parameters.AddWithValue("telefono", dr("TELEF"))
        cmd.Parameters.AddWithValue("fax", dr("FAX"))
        cmd.Parameters.AddWithValue("cif", dr("CIF"))
        cmd.ExecuteNonQuery()
    Next
    ```

    Any ideas or advice? Thanks so much.
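    The single biggest win over a slow link is usually cutting round trips: each ExecuteNonQuery here is a full network round trip over ADSL, so 500,000 rows means 500,000 of them. MySQL accepts multi-row INSERTs, so batching, say, 500 rows per statement cuts that to about 1,000 trips. A sketch of the SQL shape being built (the values are illustrative):

    ```sql
    INSERT INTO clientes (id, nombrefis, nombrecom, direccion, codpos,
                          municipio_id, telefono, fax, cif)
    VALUES
      (1, 'A', 'A SA', 'Calle 1', '48001', 1, '944111111', '944111112', 'B123'),
      (2, 'B', 'B SA', 'Calle 2', '48002', 2, '944222221', '944222222', 'B124'),
      -- ... a few hundred rows per statement, keeping each
      -- statement under the server's max_allowed_packet
      (3, 'C', 'C SA', 'Calle 3', '48003', 3, '944333331', '944333332', 'B125');
    ```

    Wrapping the batches in a transaction, or exporting to CSV and using LOAD DATA INFILE on the MySQL side, compounds the gain.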


  • GD PHP Base64 Picture (png) error

    - by hogofwar
    This is part of my code:

    ```php
    $con = mysql_connect("localhost","username","passs");
    if (!$con) {
        die('Could not connect: ' . mysql_error());
    }
    mysql_select_db("database", $con);
    if (mysql_num_rows(mysql_query("SELECT name FROM xbox_user WHERE name = '$user'"))) {
        // Code inside if block if userid is already there
        $result = mysql_query("SELECT name FROM xbox_user WHERE name = '$user'");
        while ($row = mysql_fetch_array($result)) {
            if ($row['date'] > $row['date']+100) {
                $src = imagecreatefrompng($result['XboxInfo']['TileUrl']);
                $base64 = base64_encode(file_get_contents($result['XboxInfo']['TileUrl']));
                $date = date("Ymd");
                mysql_query("UPDATE xbox_user SET date = '$date' SET avatar = '$base64' WHERE name = '$user'");
            } else {
                $encode = $row['avatar'];
                //echo $encode;
                $rand = rand(1, 1337);
                file_put_contents('/tmp/'.$rand.'.png', base64_decode($row['avatar'])); //ERROR LINE
                $src = imagecreatefrompng('/tmp/'.$rand.'.png');
                unlink('/tmp/'.$rand.'.png');
            }
        }
    } else {
        $src = imagecreatefrompng($result['XboxInfo']['TileUrl']);
        $base64 = base64_encode(file_get_contents($result['XboxInfo']['TileUrl']));
        $date = date("Ymd");
        mysql_query("INSERT INTO xbox_user (name, avatar, date) VALUES ('$user', '$base64', '$date')");
    }
    ```

    It comes up with multiple errors, but I feel this one should be addressed first, as the others could just be caused by it:

    ```
    Warning: imagecreatefrompng() [function.imagecreatefrompng]: '/tmp/628.png' is not a valid PNG file in /home/nah/public_html/experiment/xbox/draw3.php on line 60
    ```

    It also does create an entry in my MySQL DB.
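    A likely culprit: the SELECT only fetches name, so $row['avatar'] is never in the result set; base64_decode(null) then writes an empty file, which GD rejects as "not a valid PNG". A hedged sketch of two fixes at once: select the columns actually used, and skip the temp file entirely with imagecreatefromstring():

    ```php
    <?php
    $result = mysql_query("SELECT name, avatar, date FROM xbox_user WHERE name = '$user'");
    while ($row = mysql_fetch_array($result)) {
        $png = base64_decode($row['avatar'], true);   // strict mode: false on bad input
        if ($png === false) {
            continue;                                 // stored value was not valid base64
        }
        $src = imagecreatefromstring($png);           // no /tmp round trip needed
        if ($src === false) {
            continue;                                 // bytes were not a decodable image
        }
        // ... draw with $src ...
    }
    ```

    Note also that UPDATE ... SET date = '$date' SET avatar = ... in the question is invalid SQL; MySQL expects one SET with comma-separated assignments.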


  • How to call an html form back

    - by user225269
    I have this HTML form, which calls addstuds.php to execute the code for inserting records into a MySQL database:

    ```html
    <form name="formcheck" method="post" action="addstuds.php">
      <td width="30" height="35"><font size="3">*I D Number:</td>
      <td width="30"><input name="idnum" onkeypress="return isNumberKey(event)" type="text" maxlength="5" id='numbers'/></td>
      </tr>
      <tr>
      <td width="30" height="35"><font size="3">*Year:</td>
      <td width="30"><input name="yr" onkeypress="return isNumberKey(event)" type="text" maxlength="5" id='numbers'/></td>
      <td width="30" height="35"><font size="3">Section:</td>
      <td width="30"><input name="sec" type="text" maxlength="15"></td>
      </tr>
    ```

    What do I need to do so that regstuds.php (the HTML form) will be shown again after the insert code in addstuds.php finishes executing? Please help.
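    The usual pattern is a redirect back to the form once the insert succeeds, which also prevents a browser refresh from re-posting the data (POST/redirect/GET). A sketch of the tail end of addstuds.php, assuming the insert has just run:

    ```php
    <?php
    // ... insert query has just run ...
    if (mysql_error()) {
        die('Insert failed: ' . mysql_error());
    }
    // send the browser back to the form; nothing may be output before header()
    header('Location: regstuds.php');
    exit;
    ```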


  • Am I correctly extracting JPEG binary data from this mysqldump?

    - by Glenn
    I have a very old .sql backup of a vBulletin site that I ran around 8 years ago. I am trying to see the file attachments that are stored in the DB. The script below extracts them all, and the output is verified to be JPEG by hex-dumping and checking the SOI (start of image) and EOI (end of image) bytes (FFD8 and FFD9, respectively) according to the JPEG wiki page. But when I try to open the files with Evince, I get the message "Error interpreting JPEG image file (JPEG datastream contains no image)". What could be going on here? Some background info:

    - the sqldump is around 8 years old
    - vBulletin 2.x was the software that stored the info
    - most likely PHP 4 was used
    - most likely MySQL 4.0, possibly even 3.x
    - the column datatype these attachments are stored in is mediumtext

    My Python 3.1 script:

    ```python
    #!/usr/bin/env python3.1
    import re

    trim_l = re.compile(b"""^INSERT INTO attachment VALUES\('\d+', '\d+', '\d+', '(.+)""")
    trim_r = re.compile(b"""(.+)', '\d+', '\d+'\);$""")
    extractor = re.compile(b"""^(.*(?:\.jpe?g|\.gif|\.bmp))', '(.+)$""")

    # mysqldump backslash-escapes bytes inside string literals (\0 \n \r \Z \' \" \\),
    # so the dump bytes are not the raw image bytes; the runs of 5C (backslash) noted
    # in the update below are these escapes, and they must be undone before writing
    ESCAPES = {b'0': b'\x00', b'n': b'\n', b'r': b'\r', b'Z': b'\x1a',
               b"'": b"'", b'"': b'"', b'\\': b'\\'}

    def unescape(data):
        out = bytearray()
        i = 0
        while i < len(data):
            nxt = data[i + 1:i + 2]
            if data[i:i + 1] == b'\\' and nxt in ESCAPES:
                out += ESCAPES[nxt]
                i += 2
            else:
                out += data[i:i + 1]
                i += 1
        return bytes(out)

    with open('attachments.sql', 'rb') as fh:
        for line in fh:
            hits = trim_l.findall(line)
            if not hits:
                continue                    # skip non-INSERT lines in the dump
            hits = trim_r.findall(hits[0])
            if not hits:
                continue
            pairs = extractor.findall(hits[0])
            if pairs:
                name, data = pairs[0]
                try:
                    filename = 'files/%s' % str(name, 'UTF-8')
                except UnicodeDecodeError:
                    continue
                with open(filename, 'wb') as ah:
                    ah.write(unescape(data))
    ```

    update: The JPEG wiki page says FF bytes are section markers, with the next byte indicating the section type. I see some that are not listed in the wiki page (specifically, I see a lot of 5C bytes, so FF5C). But the list is of "common markers", so I'm trying to find a more complete list. Any guidance here would also be appreciated.


  • Crystal Report: Missing Parameter Values

    - by Chintan
    Hi! I am new to Crystal Reports. The application is on ASP.NET 3.5 and MySQL 5.1. I am developing a report with between-dates parameters (from date and to date). The first page of the report displays fine, but when I try to navigate to another page I get an error like "Missing Parameter Values". Thanks in advance.

    ```csharp
    public partial class BookingStatement : System.Web.UI.Page
    {
        // DAL is my Data Access Layer class
        // Book is a ReportClass
        DAL obj = new DAL();
        Book bkStmt = new Book();

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                // crvBooking is the Crystal Report Viewer
                // reportFill fills the report
                reportFill();
                crvBooking.EnableViewState = true;
                crvBooking.EnableParameterPrompt = false;
            }
            /* Also tried reportFill() outside !IsPostBack, but it didn't work */
            // Check if the parameters have been shown.
            /* if ((ViewState["ParametersShown"] != null) && (ViewState["ParametersShown"].ToString() == "True"))
            {
                bkStmt.SetParameterValue(0, "20/04/2010");
                bkStmt.SetParameterValue(1, "20/04/2010");
            } */
        }

        protected void crvBooking_navigate(object sender, CrystalDecisions.Web.NavigateEventArgs e)
        {
            // reportFill();
        }

        protected void reportFill()
        {
            // bkStmt.rpt is the report file
            // bookingstatment is a view
            // bkStmt is the ReportClass object of Book
            string rptPath = "bkStmt.rpt";
            string query = "select * from bookingstatment";
            crvBooking.RefreshReport();
            crvBooking.Height = 600;
            crvBooking.Width = 900;
            bkStmt.ResourceName = rptPath;
            String dtFrm = bkStmt.ParameterFields[0].CurrentValues.ToString();
            obj.SetCommandType(CommandType.Text);
            obj.CommText = query;
            DataTable dtst = obj.GetDataTable();
            crvBooking.ParameterFieldInfo.Clear();
            ParameterDiscreteValue discretevalue = new ParameterDiscreteValue();
            discretevalue.Value = "20/04/2010";
            // Assign parameter
            ParameterValues values = new ParameterValues();
            values.Add(discretevalue);
            bkStmt.SetDataSource(dtst);
            ViewState["ParametersShown"] = "True";
            crvBooking.EnableViewState = true;
            bkStmt.DataDefinition.ParameterFields[0].ApplyCurrentValues(values);
            bkStmt.DataDefinition.ParameterFields[1].ApplyCurrentValues(values);
            crvBooking.ReportSource = bkStmt;
        }
    }
    ```


  • Building a News-feed that comprises posts "created by user's connections" && "on the topics user is following"

    - by aklin81
    I am working on a Questions & Answers website that allows a user to follow questions on certain topics from his network. A user's news-feed wall comprises only those questions that have been posted by his connections and tagged with topics that he is following (his expertise topics). I am unsure which database and data model would fit such an application best. The project needs to consider future provisions for scalability and high performance. I have been looking at Cassandra and MySQL solutions so far. After studying Cassandra, I realized that a simple news-feed design showing all the posts from one's network would be easy to build: on each post, execute fast writes to all of the author's followers. But for my kind of application, where there is an additional filter of followed topics (i.e. the user receives posts "created by his network" && "on topics the user is following"), I could not convince myself with a good schema design in Cassandra. I hope I missed something because of my short understanding of Cassandra; can you please help me out with suggestions for how this news feed could be implemented in Cassandra? Looking for a great project with Cassandra!

    Edit: There is a maximum of 5 tags allowed on a question (i.e. at most 5 topics can be tagged on a question).
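    For comparison, the pull (fan-out-on-read) version of this feed is a straightforward join in MySQL, which can serve as a baseline before committing to a fan-out-on-write design in Cassandra. A sketch with illustrative table names:

    ```sql
    SELECT q.*
    FROM questions q
    JOIN connections c      ON c.followee_id = q.author_id
                           AND c.follower_id = :user_id    -- posted by my network
    JOIN question_topics qt ON qt.question_id = q.id
    JOIN topic_follows tf   ON tf.topic_id = qt.topic_id
                           AND tf.user_id  = :user_id      -- on topics I follow
    GROUP BY q.id                 -- a question can match through several topics
    ORDER BY q.created_at DESC
    LIMIT 50;
    ```

    In a fan-out-on-write design, the analogous trick is to key the materialized feed rows by (follower_id, topic_id), so a write to a question with up to 5 tags fans out once per follower-and-followed-topic pair, and reading the wall is a straight timeline fetch.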

